Kubernetes
Starting at version 1.5.0, Otoroshi provides native Kubernetes support. Multiple otoroshi jobs (that are actually Kubernetes controllers) are provided in order to:
- sync Kubernetes secrets of type kubernetes.io/tls to otoroshi certificates
- act as a standard ingress controller (supporting Ingress objects)
- provide Custom Resource Definitions (CRDs) to manage otoroshi entities from Kubernetes and act as an ingress controller with its own resources
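A secret of that type looks like the following; once the sync job is enabled, such secrets can be imported as otoroshi certificates (a minimal sketch, names and values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-app-tls        # hypothetical secret name
  namespace: my-namespace # hypothetical namespace
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi4uLg== # base64-encoded PEM certificate chain
  tls.key: LS0tLS1CRUdJTi4uLg== # base64-encoded PEM private key
```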
Installing otoroshi on your kubernetes cluster
You need cluster admin privileges to install otoroshi and its service account, role binding and CRDs on a kubernetes cluster. We also advise you to create a dedicated namespace (you can name it otoroshi, for example) to install otoroshi into.
If you want to deploy otoroshi into your kubernetes cluster, you can download the deployment descriptors from https://github.com/MAIF/otoroshi/tree/master/kubernetes and use kustomize to create your own overlay.
You can also create a kustomization.yaml file with a remote base
bases:
- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v17.14.0-dev
Then deploy it with kubectl apply -k ./overlays/myoverlay.
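Such an overlay can be as simple as a kustomization.yaml referencing the remote base plus your local patches (a sketch; the namespace and patch file name are illustrative):

```yaml
# ./overlays/myoverlay/kustomization.yaml
namespace: otoroshi
bases:
- github.com/MAIF/otoroshi/kubernetes/kustomize/overlays/simple/?ref=v17.14.0-dev
patchesStrategicMerge:
- deployment-patch.yaml # hypothetical local patch file
```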
You can also use Helm to deploy a simple otoroshi cluster on your kubernetes cluster
helm repo add otoroshi https://maif.github.io/otoroshi/helm
helm install my-otoroshi otoroshi/otoroshi
Below, you will find examples of deployments. Do not hesitate to adapt them to your needs. These descriptors contain value placeholders that you will need to replace with actual values, like
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: APP_DOMAIN
value: ${domain}
you will have to edit it to make it look like
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: APP_DOMAIN
value: 'apis.my.domain'
if you don't want to use placeholders and environment variables, you can create a secret containing the otoroshi configuration file
apiVersion: v1
kind: Secret
metadata:
name: otoroshi-config
type: Opaque
stringData:
oto.conf: >
include "application.conf"
otoroshi {
storage = "redis"
domain = "apis.my.domain"
}
and mount it in the otoroshi container
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-deployment
spec:
selector:
matchLabels:
run: otoroshi-deployment
template:
metadata:
labels:
run: otoroshi-deployment
spec:
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
containers:
- image: maif/otoroshi:17.14.0-dev
imagePullPolicy: IfNotPresent
name: otoroshi
args: ['-Dconfig.file=/usr/app/otoroshi/conf/oto.conf']
ports:
- containerPort: 8080
name: "http"
protocol: TCP
- containerPort: 8443
name: "https"
protocol: TCP
volumeMounts:
- name: otoroshi-config
mountPath: "/usr/app/otoroshi/conf"
readOnly: true
volumes:
- name: otoroshi-config
secret:
secretName: otoroshi-config
...
You can also create a secret for each placeholder, mount them in the otoroshi container, then use their file paths as values
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: APP_DOMAIN
value: 'file:///the/path/of/the/secret/file'
you can use the same trick in the configuration file itself
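For instance, the APP_DOMAIN placeholder could be fed from a dedicated secret mounted as a file (a sketch, assuming a mount path of /usr/app/otoroshi/secrets; the secret name is illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: otoroshi-domain # hypothetical secret name
type: Opaque
stringData:
  domain: 'apis.my.domain'
---
# fragments of the otoroshi pod spec
env:
- name: APP_DOMAIN
  value: 'file:///usr/app/otoroshi/secrets/domain'
volumeMounts:
- name: otoroshi-domain
  mountPath: "/usr/app/otoroshi/secrets"
  readOnly: true
volumes:
- name: otoroshi-domain
  secret:
    secretName: otoroshi-domain
```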
Note on bare metal kubernetes cluster installation
Bare metal kubernetes clusters don't come with support for external load balancers (services of type LoadBalancer). So you will have to provide this feature yourself in order to route external TCP traffic to the otoroshi containers running inside the kubernetes cluster. You can use a project like MetalLB that provides software load balancer services to bare metal clusters, or you can use and customize the examples below.
We don't recommend running Otoroshi behind an existing ingress controller (or something similar) as you will not be able to use features like TCP proxying, TLS, mTLS, etc. Also, this additional layer of reverse proxy will increase call latencies.
Common manifests
The following manifests are always needed. They create the otoroshi CRDs, tokens, roles, etc. The Redis deployment is not mandatory, it's just an example; you can use your own existing setup.
When updating an existing otoroshi cluster, the kubernetes plugins of the new version may expect new kubernetes entities to be available. So it can be helpful to check that these entities exist before deploying the new version. If you want to automate this process, you can check the following section
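For example, you can inspect what the cluster currently serves before upgrading (a kubectl sketch, assuming access to the cluster; routes.proxy.otoroshi.io is one of the CRDs defined below):

```sh
$ kubectl get crds | grep proxy.otoroshi.io
$ kubectl get crd routes.proxy.otoroshi.io \
    -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{"\n"}{end}'
```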
- rbac.yaml
- crds.yaml
- redis.yaml
---
kind: ServiceAccount
apiVersion: v1
metadata:
name: otoroshi-admin-user
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: otoroshi-admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: otoroshi-admin-user
subjects:
- kind: ServiceAccount
name: otoroshi-admin-user
namespace: $namespace
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: otoroshi-admin-user
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
- configmaps
- deployments
- namespaces
- pods
verbs:
- get
- list
- watch
- apiGroups:
- "apps"
resources:
- deployments
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
- configmaps
verbs:
- update
- create
- delete
- apiGroups:
- extensions
resources:
- ingresses
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
- mutatingwebhookconfigurations
verbs:
- get
- update
- patch
- apiGroups:
- proxy.otoroshi.io
resources:
- routes
- backends
- route-compositions
- service-descriptors
- tcp-services
- error-templates
- apikeys
- certificates
- jwt-verifiers
- auth-modules
- admin-sessions
- admins
- auth-module-users
- service-groups
- organizations
- tenants
- teams
- data-exporters
- scripts
- wasm-plugins
- global-configs
- green-scores
- coraza-configs
- http-listeners
- plugins
verbs:
- get
- list
- watch
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "routes.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Route"
plural: "routes"
singular: "route"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "backends.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Backend"
plural: "backends"
singular: "backend"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "route-compositions.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "RouteComposition"
plural: "route-compositions"
singular: "route-composition"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "service-descriptors.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "ServiceDescriptor"
plural: "service-descriptors"
singular: "service-descriptor"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "tcp-services.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "TcpService"
plural: "tcp-services"
singular: "tcp-service"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "error-templates.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "ErrorTemplate"
plural: "error-templates"
singular: "error-template"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "apikeys.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Apikey"
plural: "apikeys"
singular: "apikey"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "certificates.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Certificate"
plural: "certificates"
singular: "certificate"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "jwt-verifiers.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "JwtVerifier"
plural: "jwt-verifiers"
singular: "jwt-verifier"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "auth-modules.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "AuthModule"
plural: "auth-modules"
singular: "auth-module"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "admin-sessions.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "AdminSession"
plural: "admin-sessions"
singular: "admin-session"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "admins.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "SimpleAdminUser"
plural: "admins"
singular: "admin"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "auth-module-users.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "AuthModuleUser"
plural: "auth-module-users"
singular: "auth-module-user"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "service-groups.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "ServiceGroup"
plural: "service-groups"
singular: "service-group"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "organizations.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Organization"
plural: "organizations"
singular: "organization"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "tenants.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Tenant"
plural: "tenants"
singular: "tenant"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "teams.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Team"
plural: "teams"
singular: "team"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "data-exporters.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "DataExporter"
plural: "data-exporters"
singular: "data-exporter"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "scripts.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Script"
plural: "scripts"
singular: "script"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "wasm-plugins.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "WasmPlugin"
plural: "wasm-plugins"
singular: "wasm-plugin"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "global-configs.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "GlobalConfig"
plural: "global-configs"
singular: "global-config"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "green-scores.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "GreenScore"
plural: "green-scores"
singular: "green-score"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "coraza-configs.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "CorazaConfig"
plural: "coraza-configs"
singular: "coraza-config"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "http-listeners.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "HttpListener"
plural: "http-listeners"
singular: "http-listener"
scope: "Namespaced"
versions:
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "simple-admin-users.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "SimpleAdminUsers"
plural: "simple-admin-users"
singular: "simple-admin-user"
scope: "Namespaced"
versions:
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: "apiextensions.k8s.io/v1"
kind: "CustomResourceDefinition"
metadata:
name: "plugins.proxy.otoroshi.io"
spec:
group: "proxy.otoroshi.io"
names:
kind: "Plugin"
plural: "plugins"
singular: "plugin"
scope: "Namespaced"
versions:
- name: "v1alpha1"
served: false
storage: false
deprecated: true
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
- name: "v1"
served: true
storage: true
deprecated: false
schema:
openAPIV3Schema:
x-kubernetes-preserve-unknown-fields: true
type: "object"
---
apiVersion: v1
kind: Service
metadata:
name: redis-leader-service
spec:
ports:
- port: 6379
name: redis
selector:
run: redis-leader-deployment
---
apiVersion: v1
kind: Service
metadata:
name: redis-follower-service
spec:
ports:
- port: 6379
name: redis
selector:
run: redis-follower-deployment
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-leader-deployment
spec:
selector:
matchLabels:
run: redis-leader-deployment
serviceName: redis-leader-service
replicas: 1
template:
metadata:
labels:
run: redis-leader-deployment
spec:
containers:
- name: redis-leader-container
image: redis
imagePullPolicy: Always
command: ["redis-server", "--appendonly", "yes"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: redis-leader-storage
mountPath: /data
readOnly: false
readinessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 15
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 20
periodSeconds: 3
volumeClaimTemplates:
- metadata:
name: redis-leader-storage
labels:
name: redis-leader-storage
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 100Mi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-follower-deployment
spec:
selector:
matchLabels:
run: redis-follower-deployment
serviceName: redis-follower-service
replicas: 1
template:
metadata:
labels:
run: redis-follower-deployment
spec:
containers:
- name: redis-follower-container
image: redis
imagePullPolicy: Always
command: ["redis-server", "--appendonly", "yes", "--slaveof", "redis-leader-service", "6379"]
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: redis-follower-storage
mountPath: /data
readOnly: false
readinessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 15
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "redis-cli -h $(hostname) ping"
initialDelaySeconds: 20
periodSeconds: 3
volumeClaimTemplates:
- metadata:
name: redis-follower-storage
labels:
name: redis-follower-storage
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 100Mi
Updating rbac and crds when upgrading Otoroshi using otoroshictl
Updating the rbac and crds definitions for an Otoroshi upgrade can be tedious and is quite a manual process. But it can be largely simplified by using otoroshictl from our friends at Cloud APIM.
otoroshictl has some commands to automate the creation of rbac.yaml and crds.yaml for your otoroshi deployments. Using it is quite handy as it will also generate the descriptors for any custom extension you're using on your Otoroshi instance.
First, add your otoroshi instance to otoroshictl (it may already be done; in that case, just select it with otoroshictl config use new-cluster).
$ otoroshictl config add new-cluster \
--current \
--hostname otoroshi.foo.bar \
--port 8443 \
--tls \
--client-id xxx \
--client-secret xxxxx
then you can generate the crds.yaml file using the command
$ otoroshictl resources crds > crds.yaml
and you can generate the rbac.yaml file using the command
$ otoroshictl resources rbac --namespace mynamespace > rbac.yaml
you can also directly apply them on your kubernetes cluster
$ otoroshictl resources rbac --namespace mynamespace | kubectl apply -f -
$ otoroshictl resources crds | kubectl apply -f -
Deploy a simple otoroshi instantiation on a cloud provider managed kubernetes cluster
Here we have 2 replicas connected to the same redis instance. Nothing fancy. We use a service of type LoadBalancer to expose otoroshi to the rest of the world. You have to set up your DNS to bind the otoroshi domain names to the LoadBalancer's external CNAME (see the example below)
- deployment.yaml
- dns.example
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-deployment
spec:
selector:
matchLabels:
run: otoroshi-deployment
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
run: otoroshi-deployment
spec:
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi
ports:
- containerPort: 8080
name: "http"
protocol: TCP
- containerPort: 8443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: APP_STORAGE
value: lettuce
- name: REDIS_URL
value: ${redisUrl}
# value: redis://redis-leader-service:6379/0
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-api-service.${namespace}.svc.cluster.local
- name: OTOROSHI_SECRET
value: ${otoroshiSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: OTOROSHI_EXPOSED_PORTS_HTTP
value: "80"
- name: OTOROSHI_EXPOSED_PORTS_HTTPS
value: "443"
- name: OTOROSHI_INITIAL_CUSTOMIZATION
value: >
{
"config":{
"tlsSettings": {
"defaultDomain": "www.${domain}",
"randomIfNotFound": false
},
"scripts":{
"enabled":true,
"sinkRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
],
"sinkConfig": {},
"jobRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
],
"jobConfig":{
"KubernetesConfig": {
"trust": false,
"namespaces": [
"*"
],
"labels": {},
"namespacesLabels": {},
"ingressClasses": [
"otoroshi"
],
"defaultGroup": "default",
"ingresses": false,
"crds": true,
"coreDnsIntegration": false,
"coreDnsIntegrationDryRun": false,
"kubeLeader": false,
"restartDependantDeployments": false,
"watch": false,
"syncDaikokuApikeysOnly": false,
"kubeSystemNamespace": "kube-system",
"coreDnsConfigMapName": "coredns",
"coreDnsDeploymentName": "coredns",
"corednsPort": 53,
"otoroshiServiceName": "otoroshi-service",
"otoroshiNamespace": "${namespace}",
"clusterDomain": "cluster.local",
"syncIntervalSeconds": 60,
"coreDnsEnv": null,
"watchTimeoutSeconds": 60,
"watchGracePeriodSeconds": 5,
"mutatingWebhookName": "otoroshi-admission-webhook-injector",
"validatingWebhookName": "otoroshi-admission-webhook-validation",
"templates": {
"service-group": {},
"service-descriptor": {},
"apikeys": {},
"global-config": {},
"jwt-verifier": {},
"tcp-service": {},
"certificate": {},
"auth-module": {},
"script": {},
"organizations": {},
"teams": {},
"webhooks": {
"flags": {
"requestCert": true,
"originCheck": true,
"tokensCheck": true,
"displayEnv": false,
"tlsTrace": false
}
}
}
}
}
}
}
}
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
resources:
# requests:
# cpu: "100m"
# memory: "50Mi"
# limits:
# cpu: "4G"
# memory: "4Gi"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-service
spec:
selector:
run: otoroshi-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-external-service
spec:
type: LoadBalancer
selector:
run: otoroshi-deployment
ports:
- port: 80
name: "http"
targetPort: "http"
- port: 443
name: "https"
targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1
kind: Certificate
metadata:
name: otoroshi-service-certificate
spec:
description: certificate for otoroshi-service
autoRenew: true
csr:
issuer: CN=Otoroshi Root
hosts:
- otoroshi-service
- otoroshi-service.${namespace}.svc.cluster.local
- otoroshi-api-service.${namespace}.svc.cluster.local
- otoroshi.${domain}
- otoroshi-api.${domain}
- privateapps.${domain}
key:
algo: rsa
size: 2048
subject: uid=otoroshi-service-cert, O=Otoroshi
client: false
ca: false
duration: 31536000000
signatureAlg: SHA256WithRSAEncryption
digestAlg: SHA-256
otoroshi.your.otoroshi.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
otoroshi-api.your.otoroshi.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
privateapps.your.otoroshi.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
api1.another.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
api2.another.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
*.api.the.api.domain IN CNAME generated.cname.of.your.cluster.loadbalancer
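Once the cloud provider has provisioned the LoadBalancer, you can retrieve the generated CNAME to use in the DNS records above (a kubectl sketch; some providers expose an IP in .ip instead of a hostname):

```sh
$ kubectl get svc otoroshi-external-service \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```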
Deploy a simple otoroshi instantiation on a bare metal kubernetes cluster
Here we have 2 replicas connected to the same redis instance. Nothing fancy. The otoroshi instances are exposed through a NodePort service, so you'll have to add a load balancer in front of your kubernetes nodes to route external (TCP) traffic to your otoroshi instances. You have to set up your DNS to bind the otoroshi domain names to your load balancer (see the examples below).
- deployment.yaml
- haproxy.example
- nginx.example
- dns.example
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-deployment
spec:
selector:
matchLabels:
run: otoroshi-deployment
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
run: otoroshi-deployment
spec:
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi
ports:
- containerPort: 8080
name: "http"
protocol: TCP
- containerPort: 8443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: APP_STORAGE
value: lettuce
- name: REDIS_URL
value: ${redisUrl}
# value: redis://redis-leader-service:6379/0
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-api-service.${namespace}.svc.cluster.local
- name: OTOROSHI_SECRET
value: ${otoroshiSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: OTOROSHI_EXPOSED_PORTS_HTTP
value: "80"
- name: OTOROSHI_EXPOSED_PORTS_HTTPS
value: "443"
- name: OTOROSHI_INITIAL_CUSTOMIZATION
value: >
{
"config":{
"tlsSettings": {
"defaultDomain": "www.${domain}",
"randomIfNotFound": false
},
"scripts":{
"enabled":true,
"sinkRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
],
"sinkConfig": {},
"jobRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
],
"jobConfig":{
"KubernetesConfig": {
"trust": false,
"namespaces": [
"*"
],
"labels": {},
"namespacesLabels": {},
"ingressClasses": [
"otoroshi"
],
"defaultGroup": "default",
"ingresses": false,
"crds": true,
"coreDnsIntegration": false,
"coreDnsIntegrationDryRun": false,
"kubeLeader": false,
"restartDependantDeployments": false,
"watch": false,
"syncDaikokuApikeysOnly": false,
"kubeSystemNamespace": "kube-system",
"coreDnsConfigMapName": "coredns",
"coreDnsDeploymentName": "coredns",
"corednsPort": 53,
"otoroshiServiceName": "otoroshi-service",
"otoroshiNamespace": "${namespace}",
"clusterDomain": "cluster.local",
"syncIntervalSeconds": 60,
"coreDnsEnv": null,
"watchTimeoutSeconds": 60,
"watchGracePeriodSeconds": 5,
"mutatingWebhookName": "otoroshi-admission-webhook-injector",
"validatingWebhookName": "otoroshi-admission-webhook-validation",
"templates": {
"service-group": {},
"service-descriptor": {},
"apikeys": {},
"global-config": {},
"jwt-verifier": {},
"tcp-service": {},
"certificate": {},
"auth-module": {},
"script": {},
"organizations": {},
"teams": {},
"webhooks": {
"flags": {
"requestCert": true,
"originCheck": true,
"tokensCheck": true,
"displayEnv": false,
"tlsTrace": false
}
}
}
}
}
}
}
}
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
resources:
# requests:
# cpu: "100m"
# memory: "50Mi"
# limits:
# cpu: "4G"
# memory: "4Gi"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-service
spec:
selector:
run: otoroshi-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-external-service
spec:
  type: NodePort
selector:
run: otoroshi-deployment
ports:
- port: 80
name: "http"
targetPort: "http"
nodePort: 31080
- port: 443
name: "https"
targetPort: "https"
nodePort: 31443
---
apiVersion: proxy.otoroshi.io/v1
kind: Certificate
metadata:
name: otoroshi-service-certificate
spec:
description: certificate for otoroshi-service
autoRenew: true
csr:
issuer: CN=Otoroshi Root
hosts:
- otoroshi-service
- otoroshi-service.${namespace}.svc.cluster.local
- otoroshi-api-service.${namespace}.svc.cluster.local
- otoroshi.${domain}
- otoroshi-api.${domain}
- privateapps.${domain}
key:
algo: rsa
size: 2048
subject: uid=otoroshi-service-cert, O=Otoroshi
client: false
ca: false
duration: 31536000000
signatureAlg: SHA256WithRSAEncryption
digestAlg: SHA-256
frontend front_nodes_http
bind *:80
mode tcp
default_backend back_http_nodes
timeout client 1m
frontend front_nodes_https
bind *:443
mode tcp
default_backend back_https_nodes
timeout client 1m
backend back_http_nodes
mode tcp
balance roundrobin
server kubernetes-node1 10.2.2.40:31080
server kubernetes-node2 10.2.2.41:31080
server kubernetes-node3 10.2.2.42:31080
timeout connect 10s
timeout server 1m
backend back_https_nodes
mode tcp
balance roundrobin
server kubernetes-node1 10.2.2.40:31443
server kubernetes-node2 10.2.2.41:31443
server kubernetes-node3 10.2.2.42:31443
timeout connect 10s
timeout server 1m
# note: the health_check directive in the stream module requires NGINX Plus;
# with open source nginx, remove it and rely on passive checks (max_fails/fail_timeout)
stream {
upstream back_http_nodes {
zone back_http_nodes 64k;
server 10.2.2.40:31080 max_fails=1;
server 10.2.2.41:31080 max_fails=1;
server 10.2.2.42:31080 max_fails=1;
}
upstream back_https_nodes {
zone back_https_nodes 64k;
server 10.2.2.40:31443 max_fails=1;
server 10.2.2.41:31443 max_fails=1;
server 10.2.2.42:31443 max_fails=1;
}
server {
listen 80;
proxy_pass back_http_nodes;
health_check;
}
server {
listen 443;
proxy_pass back_https_nodes;
health_check;
}
}
# if your loadbalancer is at ip address 10.2.2.50
otoroshi.your.otoroshi.domain IN A 10.2.2.50
otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
privateapps.your.otoroshi.domain IN A 10.2.2.50
api1.another.domain IN A 10.2.2.50
api2.another.domain IN A 10.2.2.50
*.api.the.api.domain IN A 10.2.2.50
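Once the loadbalancer and DNS records are in place, you can verify that traffic actually reaches otoroshi without waiting for DNS propagation by pinning the resolution with curl. This is only an illustrative check: the IP address and domain come from the examples above, and the `/ready` path is the endpoint already used by the readiness probes.

```shell
# pin otoroshi.your.otoroshi.domain to the loadbalancer ip, bypassing DNS
curl --resolve otoroshi.your.otoroshi.domain:80:10.2.2.50 \
  http://otoroshi.your.otoroshi.domain/ready

# same check over https (-k because the certificate is issued by the otoroshi CA)
curl -k --resolve otoroshi.your.otoroshi.domain:443:10.2.2.50 \
  https://otoroshi.your.otoroshi.domain/ready
```

If both commands answer, the loadbalancer, the nodePort services and the otoroshi instances are wired correctly, and any remaining failure is on the DNS side.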
Deploy a simple otoroshi instance on a bare metal kubernetes cluster using a DaemonSet
Here we run one otoroshi instance on each kubernetes node carrying the otoroshi-kind: instance label, with redis persistence. The otoroshi instances are exposed using hostPort, so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You also have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below).
- deployment.yaml
- haproxy.example
- nginx.example
- dns.example
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: otoroshi-deployment
spec:
selector:
matchLabels:
run: otoroshi-deployment
template:
metadata:
labels:
run: otoroshi-deployment
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: otoroshi-kind
operator: In
values:
- instance
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
restartPolicy: Always
hostNetwork: false
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi
ports:
- containerPort: 8080
hostPort: 41080
name: "http"
protocol: TCP
- containerPort: 8443
hostPort: 41443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: APP_STORAGE
value: lettuce
- name: REDIS_URL
value: ${redisUrl}
# value: redis://redis-leader-service:6379/0
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-api-service.${namespace}.svc.cluster.local
- name: OTOROSHI_SECRET
value: ${otoroshiSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: OTOROSHI_EXPOSED_PORTS_HTTP
value: "80"
- name: OTOROSHI_EXPOSED_PORTS_HTTPS
value: "443"
- name: OTOROSHI_INITIAL_CUSTOMIZATION
value: >
{
"config":{
"tlsSettings": {
"defaultDomain": "www.${domain}",
"randomIfNotFound": false
},
"scripts":{
"enabled":true,
"sinkRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
],
"sinkConfig": {},
"jobRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
],
"jobConfig":{
"KubernetesConfig": {
"trust": false,
"namespaces": [
"*"
],
"labels": {},
"namespacesLabels": {},
"ingressClasses": [
"otoroshi"
],
"defaultGroup": "default",
"ingresses": false,
"crds": true,
"coreDnsIntegration": false,
"coreDnsIntegrationDryRun": false,
"kubeLeader": false,
"restartDependantDeployments": false,
"watch": false,
"syncDaikokuApikeysOnly": false,
"kubeSystemNamespace": "kube-system",
"coreDnsConfigMapName": "coredns",
"coreDnsDeploymentName": "coredns",
"corednsPort": 53,
"otoroshiServiceName": "otoroshi-service",
"otoroshiNamespace": "${namespace}",
"clusterDomain": "cluster.local",
"syncIntervalSeconds": 60,
"coreDnsEnv": null,
"watchTimeoutSeconds": 60,
"watchGracePeriodSeconds": 5,
"mutatingWebhookName": "otoroshi-admission-webhook-injector",
"validatingWebhookName": "otoroshi-admission-webhook-validation",
"templates": {
"service-group": {},
"service-descriptor": {},
"apikeys": {},
"global-config": {},
"jwt-verifier": {},
"tcp-service": {},
"certificate": {},
"auth-module": {},
"script": {},
"organizations": {},
"teams": {},
"webhooks": {
"flags": {
"requestCert": true,
"originCheck": true,
"tokensCheck": true,
"displayEnv": false,
"tlsTrace": false
}
}
}
}
}
}
}
}
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
resources:
# requests:
# cpu: "100m"
# memory: "50Mi"
# limits:
# cpu: "4G"
# memory: "4Gi"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-service
spec:
selector:
run: otoroshi-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1
kind: Certificate
metadata:
name: otoroshi-service-certificate
spec:
description: certificate for otoroshi-service
autoRenew: true
csr:
issuer: CN=Otoroshi Root
hosts:
- otoroshi-service
- otoroshi-service.${namespace}.svc.cluster.local
- otoroshi-api-service.${namespace}.svc.cluster.local
- otoroshi.${domain}
- otoroshi-api.${domain}
- privateapps.${domain}
key:
algo: rsa
size: 2048
subject: uid=otoroshi-service-cert, O=Otoroshi
client: false
ca: false
duration: 31536000000
signatureAlg: SHA256WithRSAEncryption
digestAlg: SHA-256
frontend front_nodes_http
bind *:80
mode tcp
default_backend back_http_nodes
timeout client 1m
frontend front_nodes_https
bind *:443
mode tcp
default_backend back_https_nodes
timeout client 1m
backend back_http_nodes
mode tcp
balance roundrobin
server kubernetes-node1 10.2.2.40:41080
server kubernetes-node2 10.2.2.41:41080
server kubernetes-node3 10.2.2.42:41080
timeout connect 10s
timeout server 1m
backend back_https_nodes
mode tcp
balance roundrobin
server kubernetes-node1 10.2.2.40:41443
server kubernetes-node2 10.2.2.41:41443
server kubernetes-node3 10.2.2.42:41443
timeout connect 10s
timeout server 1m
stream {
upstream back_http_nodes {
zone back_http_nodes 64k;
server 10.2.2.40:41080 max_fails=1;
server 10.2.2.41:41080 max_fails=1;
server 10.2.2.42:41080 max_fails=1;
}
upstream back_https_nodes {
zone back_https_nodes 64k;
server 10.2.2.40:41443 max_fails=1;
server 10.2.2.41:41443 max_fails=1;
server 10.2.2.42:41443 max_fails=1;
}
server {
listen 80;
proxy_pass back_http_nodes;
health_check;
}
server {
listen 443;
proxy_pass back_https_nodes;
health_check;
}
}
# if your loadbalancer is at ip address 10.2.2.50
otoroshi.your.otoroshi.domain IN A 10.2.2.50
otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
privateapps.your.otoroshi.domain IN A 10.2.2.50
api1.another.domain IN A 10.2.2.50
api2.another.domain IN A 10.2.2.50
*.api.the.api.domain IN A 10.2.2.50
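The OTOROSHI_INITIAL_CUSTOMIZATION environment variable used in the deployments above must contain a single well-formed JSON document, and after hand-editing such a large blob it is easy to leave a trailing comma or an unbalanced brace behind. A minimal pre-apply sanity check might look like the sketch below; the inlined excerpt and the keys it checks are taken from the example, but the script itself is illustrative and not part of otoroshi.

```python
import json
import sys

# illustrative excerpt of an OTOROSHI_INITIAL_CUSTOMIZATION value; in practice
# you would extract the blob from your deployment manifest before applying it
customization = """
{
  "config": {
    "tlsSettings": { "defaultDomain": "www.apis.my.domain", "randomIfNotFound": false },
    "scripts": {
      "enabled": true,
      "jobRefs": ["cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"]
    }
  }
}
"""

try:
    parsed = json.loads(customization)
except json.JSONDecodeError as err:
    # a parse error here means otoroshi would receive a broken customization
    sys.exit(f"invalid OTOROSHI_INITIAL_CUSTOMIZATION: {err}")

# sanity-check the keys the kubernetes controller job actually relies on
print(parsed["config"]["tlsSettings"]["defaultDomain"])
print(parsed["config"]["scripts"]["enabled"])
```

Running this before `kubectl apply` catches malformed JSON early instead of at otoroshi boot time.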
Deploy an otoroshi cluster on a cloud provider managed kubernetes cluster
Here we run 2 replicas of the otoroshi leader connected to a redis instance and 2 replicas of the otoroshi worker connected to the leader. We use services of type LoadBalancer to expose the otoroshi leader and worker to the rest of the world. You have to set up your DNS to bind otoroshi domain names to the external CNAMEs generated for the LoadBalancer services (see the example below).
- deployment.yaml
- dns.example
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-leader-deployment
spec:
selector:
matchLabels:
run: otoroshi-leader-deployment
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
run: otoroshi-leader-deployment
spec:
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
restartPolicy: Always
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi-leader
ports:
- containerPort: 8080
name: "http"
protocol: TCP
- containerPort: 8443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: APP_STORAGE
value: lettuce
- name: REDIS_URL
value: ${redisUrl}
# value: redis://redis-leader-service:6379/0
- name: CLUSTER_MODE
value: Leader
- name: CLUSTER_AUTO_UPDATE_STATE
value: 'true'
- name: CLUSTER_MTLS_ENABLED
value: 'true'
- name: CLUSTER_MTLS_LOOSE
value: 'true'
- name: CLUSTER_MTLS_TRUST_ALL
value: 'true'
- name: CLUSTER_LEADER_URL
value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- name: CLUSTER_LEADER_HOST
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: CLUSTER_LEADER_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: CLUSTER_LEADER_CLIENT_SECRET
value: ${clientSecret}
- name: OTOROSHI_SECRET
value: ${otoroshiSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: OTOROSHI_INITIAL_CUSTOMIZATION
value: >
{
"config":{
"tlsSettings": {
"defaultDomain": "www.${domain}",
"randomIfNotFound": false
},
"scripts":{
"enabled":true,
"sinkRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
],
"sinkConfig": {},
"jobRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
],
"jobConfig":{
"KubernetesConfig": {
"trust": false,
"namespaces": [
"*"
],
"labels": {},
"namespacesLabels": {},
"ingressClasses": [
"otoroshi"
],
"defaultGroup": "default",
"ingresses": false,
"crds": true,
"coreDnsIntegration": false,
"coreDnsIntegrationDryRun": false,
"kubeLeader": false,
"restartDependantDeployments": false,
"watch": false,
"syncDaikokuApikeysOnly": false,
"kubeSystemNamespace": "kube-system",
"coreDnsConfigMapName": "coredns",
"coreDnsDeploymentName": "coredns",
"corednsPort": 53,
"otoroshiServiceName": "otoroshi-worker-service",
"otoroshiNamespace": "${namespace}",
"clusterDomain": "cluster.local",
"syncIntervalSeconds": 60,
"coreDnsEnv": null,
"watchTimeoutSeconds": 60,
"watchGracePeriodSeconds": 5,
"mutatingWebhookName": "otoroshi-admission-webhook-injector",
"validatingWebhookName": "otoroshi-admission-webhook-validation",
"templates": {
"service-group": {},
"service-descriptor": {},
"apikeys": {},
"global-config": {},
"jwt-verifier": {},
"tcp-service": {},
"certificate": {},
"auth-module": {},
"script": {},
"organizations": {},
"teams": {},
"webhooks": {
"flags": {
"requestCert": true,
"originCheck": true,
"tokensCheck": true,
"displayEnv": false,
"tlsTrace": false
}
}
}
}
}
}
}
}
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-worker-deployment
spec:
selector:
matchLabels:
run: otoroshi-worker-deployment
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
run: otoroshi-worker-deployment
spec:
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
restartPolicy: Always
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi-worker
ports:
- containerPort: 8080
name: "http"
protocol: TCP
- containerPort: 8443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: CLUSTER_MODE
value: Worker
- name: CLUSTER_AUTO_UPDATE_STATE
value: 'true'
- name: CLUSTER_MTLS_ENABLED
value: 'true'
- name: CLUSTER_MTLS_LOOSE
value: 'true'
- name: CLUSTER_MTLS_TRUST_ALL
value: 'true'
- name: CLUSTER_LEADER_URL
value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- name: CLUSTER_LEADER_HOST
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: CLUSTER_LEADER_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: CLUSTER_LEADER_CLIENT_SECRET
value: ${clientSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-leader-api-service
spec:
selector:
run: otoroshi-leader-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-leader-service
spec:
selector:
run: otoroshi-leader-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-worker-service
spec:
selector:
run: otoroshi-worker-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-leader-external-service
spec:
type: LoadBalancer
selector:
run: otoroshi-leader-deployment
ports:
- port: 80
name: "http"
targetPort: "http"
- port: 443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-worker-external-service
spec:
type: LoadBalancer
selector:
run: otoroshi-worker-deployment
ports:
- port: 80
name: "http"
targetPort: "http"
- port: 443
name: "https"
targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1
kind: Certificate
metadata:
name: otoroshi-service-certificate
spec:
description: certificate for otoroshi-service
autoRenew: true
csr:
issuer: CN=Otoroshi Root
hosts:
- otoroshi-service
- otoroshi-service.${namespace}.svc.cluster.local
- otoroshi-api-service.${namespace}.svc.cluster.local
- otoroshi.${domain}
- otoroshi-api.${domain}
- privateapps.${domain}
key:
algo: rsa
size: 2048
subject: uid=otoroshi-service-cert, O=Otoroshi
client: false
ca: false
duration: 31536000000
signatureAlg: SHA256WithRSAEncryption
digestAlg: SHA-256
otoroshi.your.otoroshi.domain IN CNAME generated.cname.for.leader.of.your.cluster.loadbalancer
otoroshi-api.your.otoroshi.domain IN CNAME generated.cname.for.leader.of.your.cluster.loadbalancer
privateapps.your.otoroshi.domain IN CNAME generated.cname.for.leader.of.your.cluster.loadbalancer
api1.another.domain IN CNAME generated.cname.for.worker.of.your.cluster.loadbalancer
api2.another.domain IN CNAME generated.cname.for.worker.of.your.cluster.loadbalancer
*.api.the.api.domain IN CNAME generated.cname.for.worker.of.your.cluster.loadbalancer
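The generated CNAMEs for the two LoadBalancer services can be read back from the cluster once your cloud provider has provisioned them. The service names below come from the deployment above; the otoroshi namespace is an assumption, adapt it to the namespace you actually installed otoroshi in.

```shell
# external hostname generated for the leader loadbalancer
kubectl -n otoroshi get service otoroshi-leader-external-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# external hostname generated for the worker loadbalancer
kubectl -n otoroshi get service otoroshi-worker-external-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

These are the two values to use as CNAME targets in the DNS records above.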
Deploy an otoroshi cluster on a bare metal kubernetes cluster
Here we run 2 replicas of the otoroshi leader connected to the same redis instance and 2 replicas of the otoroshi worker. The otoroshi instances are exposed using nodePort, so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You also have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below).
- deployment.yaml
- nginx.example
- dns.example
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-leader-deployment
spec:
selector:
matchLabels:
run: otoroshi-leader-deployment
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
run: otoroshi-leader-deployment
spec:
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
restartPolicy: Always
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi-leader
ports:
- containerPort: 8080
name: "http"
protocol: TCP
- containerPort: 8443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: APP_STORAGE
value: lettuce
- name: REDIS_URL
value: ${redisUrl}
# value: redis://redis-leader-service:6379/0
- name: CLUSTER_MODE
value: Leader
- name: CLUSTER_AUTO_UPDATE_STATE
value: 'true'
- name: CLUSTER_MTLS_ENABLED
value: 'true'
- name: CLUSTER_MTLS_LOOSE
value: 'true'
- name: CLUSTER_MTLS_TRUST_ALL
value: 'true'
- name: CLUSTER_LEADER_URL
value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- name: CLUSTER_LEADER_HOST
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: CLUSTER_LEADER_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: CLUSTER_LEADER_CLIENT_SECRET
value: ${clientSecret}
- name: OTOROSHI_SECRET
value: ${otoroshiSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: OTOROSHI_INITIAL_CUSTOMIZATION
value: >
{
"config":{
"tlsSettings": {
"defaultDomain": "www.${domain}",
"randomIfNotFound": false
},
"scripts":{
"enabled":true,
"sinkRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
],
"sinkConfig": {},
"jobRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
],
"jobConfig":{
"KubernetesConfig": {
"trust": false,
"namespaces": [
"*"
],
"labels": {},
"namespacesLabels": {},
"ingressClasses": [
"otoroshi"
],
"defaultGroup": "default",
"ingresses": false,
"crds": true,
"coreDnsIntegration": false,
"coreDnsIntegrationDryRun": false,
"kubeLeader": false,
"restartDependantDeployments": false,
"watch": false,
"syncDaikokuApikeysOnly": false,
"kubeSystemNamespace": "kube-system",
"coreDnsConfigMapName": "coredns",
"coreDnsDeploymentName": "coredns",
"corednsPort": 53,
"otoroshiServiceName": "otoroshi-worker-service",
"otoroshiNamespace": "${namespace}",
"clusterDomain": "cluster.local",
"syncIntervalSeconds": 60,
"coreDnsEnv": null,
"watchTimeoutSeconds": 60,
"watchGracePeriodSeconds": 5,
"mutatingWebhookName": "otoroshi-admission-webhook-injector",
"validatingWebhookName": "otoroshi-admission-webhook-validation",
"templates": {
"service-group": {},
"service-descriptor": {},
"apikeys": {},
"global-config": {},
"jwt-verifier": {},
"tcp-service": {},
"certificate": {},
"auth-module": {},
"script": {},
"organizations": {},
"teams": {},
"webhooks": {
"flags": {
"requestCert": true,
"originCheck": true,
"tokensCheck": true,
"displayEnv": false,
"tlsTrace": false
}
}
}
}
}
}
}
}
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-worker-deployment
spec:
selector:
matchLabels:
run: otoroshi-worker-deployment
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
run: otoroshi-worker-deployment
spec:
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
restartPolicy: Always
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi-worker
ports:
- containerPort: 8080
name: "http"
protocol: TCP
- containerPort: 8443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: CLUSTER_MODE
value: Worker
- name: CLUSTER_AUTO_UPDATE_STATE
value: 'true'
- name: CLUSTER_MTLS_ENABLED
value: 'true'
- name: CLUSTER_MTLS_LOOSE
value: 'true'
- name: CLUSTER_MTLS_TRUST_ALL
value: 'true'
- name: CLUSTER_LEADER_URL
value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- name: CLUSTER_LEADER_HOST
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: CLUSTER_LEADER_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: CLUSTER_LEADER_CLIENT_SECRET
value: ${clientSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-leader-api-service
spec:
selector:
run: otoroshi-leader-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-leader-service
spec:
  type: NodePort
selector:
run: otoroshi-leader-deployment
ports:
- port: 8080
nodePort: 31080
name: "http"
targetPort: "http"
- port: 8443
nodePort: 31443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-worker-service
spec:
  type: NodePort
selector:
run: otoroshi-worker-deployment
ports:
- port: 8080
nodePort: 32080
name: "http"
targetPort: "http"
- port: 8443
nodePort: 32443
name: "https"
targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1
kind: Certificate
metadata:
name: otoroshi-service-certificate
spec:
description: certificate for otoroshi-service
autoRenew: true
csr:
issuer: CN=Otoroshi Root
hosts:
- otoroshi-service
- otoroshi-service.${namespace}.svc.cluster.local
- otoroshi-api-service.${namespace}.svc.cluster.local
- otoroshi.${domain}
- otoroshi-api.${domain}
- privateapps.${domain}
key:
algo: rsa
size: 2048
subject: uid=otoroshi-service-cert, O=Otoroshi
client: false
ca: false
duration: 31536000000
signatureAlg: SHA256WithRSAEncryption
digestAlg: SHA-256
stream {
upstream worker_http_nodes {
zone worker_http_nodes 64k;
server 10.2.2.40:32080 max_fails=1;
server 10.2.2.41:32080 max_fails=1;
server 10.2.2.42:32080 max_fails=1;
}
upstream worker_https_nodes {
zone worker_https_nodes 64k;
server 10.2.2.40:32443 max_fails=1;
server 10.2.2.41:32443 max_fails=1;
server 10.2.2.42:32443 max_fails=1;
}
upstream leader_http_nodes {
zone leader_http_nodes 64k;
server 10.2.2.40:31080 max_fails=1;
server 10.2.2.41:31080 max_fails=1;
server 10.2.2.42:31080 max_fails=1;
}
upstream leader_https_nodes {
zone leader_https_nodes 64k;
server 10.2.2.40:31443 max_fails=1;
server 10.2.2.41:31443 max_fails=1;
server 10.2.2.42:31443 max_fails=1;
}
server {
listen 80;
proxy_pass worker_http_nodes;
health_check;
}
server {
listen 443;
proxy_pass worker_https_nodes;
health_check;
}
server {
listen 81;
proxy_pass leader_http_nodes;
health_check;
}
server {
listen 444;
proxy_pass leader_https_nodes;
health_check;
}
}
# if your loadbalancer is at ip address 10.2.2.50
otoroshi.your.otoroshi.domain IN A 10.2.2.50
otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
privateapps.your.otoroshi.domain IN A 10.2.2.50
api1.another.domain IN A 10.2.2.50
api2.another.domain IN A 10.2.2.50
*.api.the.api.domain IN A 10.2.2.50
Deploy an otoroshi cluster on a bare metal kubernetes cluster using DaemonSets
Here we run one otoroshi leader instance on each kubernetes node carrying the otoroshi-kind: leader label (all connected to the same redis instance) and one otoroshi worker instance on each kubernetes node carrying the otoroshi-kind: worker label. The otoroshi instances are exposed using hostPort, so you'll have to add a loadbalancer in front of your kubernetes nodes to route external traffic (TCP) to your otoroshi instances. You also have to set up your DNS to bind otoroshi domain names to your loadbalancer (see the example below).
- deployment.yaml
- nginx.example
- dns.example
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: otoroshi-leader-deployment
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      run: otoroshi-leader-deployment
  template:
    metadata:
      labels:
        run: otoroshi-leader-deployment
    spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: otoroshi-kind
operator: In
values:
- leader
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
restartPolicy: Always
containers:
- image: maif/otoroshi:17.14.0-jdk11
imagePullPolicy: IfNotPresent
name: otoroshi-leader
ports:
- containerPort: 8080
hostPort: 41080
name: "http"
protocol: TCP
- containerPort: 8443
hostPort: 41443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: APP_STORAGE
value: lettuce
- name: REDIS_URL
value: ${redisUrl}
# value: redis://redis-leader-service:6379/0
- name: CLUSTER_MODE
value: Leader
- name: CLUSTER_AUTO_UPDATE_STATE
value: 'true'
- name: CLUSTER_MTLS_ENABLED
value: 'true'
- name: CLUSTER_MTLS_LOOSE
value: 'true'
- name: CLUSTER_MTLS_TRUST_ALL
value: 'true'
- name: CLUSTER_LEADER_URL
value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- name: CLUSTER_LEADER_HOST
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: CLUSTER_LEADER_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: CLUSTER_LEADER_CLIENT_SECRET
value: ${clientSecret}
- name: OTOROSHI_SECRET
value: ${otoroshiSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: OTOROSHI_INITIAL_CUSTOMIZATION
value: >
{
"config":{
"tlsSettings": {
"defaultDomain": "www.${domain}",
"randomIfNotFound": false
},
"scripts":{
"enabled":true,
"sinkRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookCRDValidator",
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesAdmissionWebhookSidecarInjector"
],
"sinkConfig": {},
"jobRefs":[
"cp:otoroshi.plugins.jobs.kubernetes.KubernetesOtoroshiCRDsControllerJob"
],
"jobConfig":{
"KubernetesConfig": {
"trust": false,
"namespaces": [
"*"
],
"labels": {},
"namespacesLabels": {},
"ingressClasses": [
"otoroshi"
],
"defaultGroup": "default",
"ingresses": false,
"crds": true,
"coreDnsIntegration": false,
"coreDnsIntegrationDryRun": false,
"kubeLeader": false,
"restartDependantDeployments": false,
"watch": false,
"syncDaikokuApikeysOnly": false,
"kubeSystemNamespace": "kube-system",
"coreDnsConfigMapName": "coredns",
"coreDnsDeploymentName": "coredns",
"corednsPort": 53,
"otoroshiServiceName": "otoroshi-worker-service",
"otoroshiNamespace": "${namespace}",
"clusterDomain": "cluster.local",
"syncIntervalSeconds": 60,
"coreDnsEnv": null,
"watchTimeoutSeconds": 60,
"watchGracePeriodSeconds": 5,
"mutatingWebhookName": "otoroshi-admission-webhook-injector",
"validatingWebhookName": "otoroshi-admission-webhook-validation",
"templates": {
"service-group": {},
"service-descriptor": {},
"apikeys": {},
"global-config": {},
"jwt-verifier": {},
"tcp-service": {},
"certificate": {},
"auth-module": {},
"script": {},
"organizations": {},
"teams": {},
"webhooks": {
"flags": {
"requestCert": true,
"originCheck": true,
"tokensCheck": true,
"displayEnv": false,
"tlsTrace": false
}
}
}
}
}
}
}
}
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: otoroshi-worker-deployment
spec:
selector:
matchLabels:
run: otoroshi-worker-deployment
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
template:
metadata:
labels:
run: otoroshi-worker-deployment
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: otoroshi-kind
operator: In
values:
- worker
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: otoroshi-admin-user
terminationGracePeriodSeconds: 60
hostNetwork: false
restartPolicy: Always
containers:
- image: maif/otoroshi:16.20.1-dev
imagePullPolicy: IfNotPresent
name: otoroshi-worker
ports:
- containerPort: 8080
hostPort: 42080
name: "http"
protocol: TCP
- containerPort: 8443
hostPort: 42443
name: "https"
protocol: TCP
env:
- name: APP_STORAGE_ROOT
value: otoroshi
- name: OTOROSHI_INITIAL_ADMIN_PASSWORD
value: ${password}
- name: APP_DOMAIN
value: ${domain}
- name: CLUSTER_MODE
value: Worker
- name: CLUSTER_AUTO_UPDATE_STATE
value: 'true'
- name: CLUSTER_MTLS_ENABLED
value: 'true'
- name: CLUSTER_MTLS_LOOSE
value: 'true'
- name: CLUSTER_MTLS_TRUST_ALL
value: 'true'
- name: CLUSTER_LEADER_URL
value: https://otoroshi-leader-api-service.${namespace}.svc.cluster.local:8443
- name: CLUSTER_LEADER_HOST
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_ADDITIONAL_EXPOSED_DOMAIN
value: otoroshi-leader-api-service.${namespace}.svc.cluster.local
- name: ADMIN_API_CLIENT_ID
value: ${clientId}
- name: CLUSTER_LEADER_CLIENT_ID
value: ${clientId}
- name: ADMIN_API_CLIENT_SECRET
value: ${clientSecret}
- name: CLUSTER_LEADER_CLIENT_SECRET
value: ${clientSecret}
- name: HEALTH_LIMIT
value: "5000"
- name: SSL_OUTSIDE_CLIENT_AUTH
value: Want
- name: HTTPS_WANT_CLIENT_AUTH
value: "true"
- name: JAVA_OPTS
value: '-Xms2g -Xmx4g -XX:+UseContainerSupport -XX:MaxRAMPercentage=80.0'
readinessProbe:
httpGet:
path: /ready
port: 8080
failureThreshold: 1
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
httpGet:
path: /live
port: 8080
failureThreshold: 3
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-leader-api-service
spec:
selector:
run: otoroshi-leader-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-leader-service
spec:
selector:
run: otoroshi-leader-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: v1
kind: Service
metadata:
name: otoroshi-worker-service
spec:
selector:
run: otoroshi-worker-deployment
ports:
- port: 8080
name: "http"
targetPort: "http"
- port: 8443
name: "https"
targetPort: "https"
---
apiVersion: proxy.otoroshi.io/v1
kind: Certificate
metadata:
name: otoroshi-service-certificate
spec:
description: certificate for otoroshi-service
autoRenew: true
csr:
issuer: CN=Otoroshi Root
hosts:
- otoroshi-service
- otoroshi-service.${namespace}.svc.cluster.local
- otoroshi-api-service.${namespace}.svc.cluster.local
- otoroshi.${domain}
- otoroshi-api.${domain}
- privateapps.${domain}
key:
algo: rsa
size: 2048
subject: uid=otoroshi-service-cert, O=Otoroshi
client: false
ca: false
duration: 31536000000
signatureAlg: SHA256WithRSAEncryption
digestAlg: SHA-256
If you want to expose otoroshi outside of your kubernetes cluster, you can use a TCP load balancer like nginx in front of the worker and leader hostPorts, with a configuration like the following (note that the health_check directive in stream servers requires NGINX Plus; remove it if you use open source nginx)
stream {
upstream worker_http_nodes {
zone worker_http_nodes 64k;
server 10.2.2.40:42080 max_fails=1;
server 10.2.2.41:42080 max_fails=1;
server 10.2.2.42:42080 max_fails=1;
}
upstream worker_https_nodes {
zone worker_https_nodes 64k;
server 10.2.2.40:42443 max_fails=1;
server 10.2.2.41:42443 max_fails=1;
server 10.2.2.42:42443 max_fails=1;
}
upstream leader_http_nodes {
zone leader_http_nodes 64k;
server 10.2.2.40:41080 max_fails=1;
server 10.2.2.41:41080 max_fails=1;
server 10.2.2.42:41080 max_fails=1;
}
upstream leader_https_nodes {
zone leader_https_nodes 64k;
server 10.2.2.40:41443 max_fails=1;
server 10.2.2.41:41443 max_fails=1;
server 10.2.2.42:41443 max_fails=1;
}
server {
listen 80;
proxy_pass worker_http_nodes;
health_check;
}
server {
listen 443;
proxy_pass worker_https_nodes;
health_check;
}
server {
listen 81;
proxy_pass leader_http_nodes;
health_check;
}
server {
listen 444;
proxy_pass leader_https_nodes;
health_check;
}
}
# if your loadbalancer is at ip address 10.2.2.50
otoroshi.your.otoroshi.domain IN A 10.2.2.50
otoroshi-api.your.otoroshi.domain IN A 10.2.2.50
privateapps.your.otoroshi.domain IN A 10.2.2.50
api1.another.domain IN A 10.2.2.50
api2.another.domain IN A 10.2.2.50
*.api.the.api.domain IN A 10.2.2.50
Using Otoroshi as an Ingress Controller
If you want to use Otoroshi as an ingress controller, go to the Danger Zone and, in the Global plugins section, add the job named Kubernetes Ingress Controller.
Then add the following configuration for the job (with your own tweaks, of course)
{
"KubernetesConfig": {
"enabled": true,
"endpoint": "https://127.0.0.1:6443",
"token": "eyJhbGciOiJSUzI....F463SrpOehQRaQ",
"namespaces": [
"*"
]
}
}
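Once the job is running, Otoroshi will pick up standard Ingress objects whose kubernetes.io/ingress.class annotation matches one of the configured ingressClasses. Here is an illustrative example (the ingress name and backend service are hypothetical, and the host reuses one of the DNS records above)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress                      # hypothetical name
  annotations:
    # must match one of the values in "ingressClasses"
    kubernetes.io/ingress.class: otoroshi
spec:
  rules:
  - host: api1.another.domain               # must resolve to your load balancer
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service            # hypothetical backend service
            port:
              number: 8080
```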
the configuration can have the following values
{
"KubernetesConfig": {
"endpoint": "https://127.0.0.1:6443", // the endpoint to talk to the kubernetes api, optional
"token": "xxxx", // the bearer token to talk to the kubernetes api, optional
"userPassword": "user:password", // the user password tuple to talk to the kubernetes api, optional
"caCert": "/etc/ca.cert", // the ca cert file path to talk to the kubernetes api, optional
"trust": false, // trust any cert to talk to the kubernetes api, optional
"namespaces": ["*"], // the watched namespaces
    "labels": ["label"], // the watched labels
"ingressClasses": ["otoroshi"], // the watched kubernetes.io/ingress.class annotations, can be *
"defaultGroup": "default", // the group to put services in otoroshi
"ingresses": true, // sync ingresses
"crds": false, // sync crds
"kubeLeader": false, // delegate leader election to kubernetes, to know where the sync job should run
"restartDependantDeployments": true, // when a secret/cert changes from otoroshi sync, restart dependant deployments
"templates": { // template for entities that will be merged with kubernetes entities. can be "default" to use otoroshi default templates
"service-group": {},
"service-descriptor": {},
"apikeys": {},
"global-config": {},
"jwt-verifier": {},
"tcp-service": {},
"certificate": {},
"auth-module": {},
"data-exporter": {},
"script": {},
"organization": {},
"team": {},
"routes": {},
"route-compositions": {},
"backends": {}
}
}
}
If endpoint is not defined, Otoroshi will try to get it from $KUBERNETES_SERVICE_HOST and $KUBERNETES_SERVICE_PORT.
If token is not defined, Otoroshi will try to get it from the file at /var/run/secrets/kubernetes.io/serviceaccount/token.
If caCert is not defined, Otoroshi will try to get it from the file at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt.
If $KUBECONFIG is defined, endpoint, token and caCert will be read from the current context of the file referenced by it.
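In practice, this means that when otoroshi runs inside the cluster with a mounted service account, the connection settings can be omitted entirely. A minimal job configuration (illustrative sketch, relying on the defaults described above) could then look like

```json
{
  "KubernetesConfig": {
    "enabled": true,
    "namespaces": ["*"]
  }
}
```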
Now you can deploy your first service ;)