Scaling Otoroshi

Using multiple instances with a front load balancer

Otoroshi has been designed to work with multiple instances. If your infrastructure already uses frontal load balancing, you just have to declare the Otoroshi instances as the targets for all the domain names handled by Otoroshi.

Using master / workers mode of Otoroshi

You can read everything about it in the clustering section of the documentation.

Using IPVS

You can use IPVS to load balance layer 4 traffic directly from the Linux kernel to multiple instances of Otoroshi. You can find an example configuration here.
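As a rough sketch, with `ipvsadm` the setup could look like the following (the virtual IP `192.168.1.100`, the backend IPs, and the NAT forwarding mode are illustrative assumptions, not values mandated by Otoroshi):

```shell
# create a round-robin virtual service on the VIP (run as root)
ipvsadm -A -t 192.168.1.100:80 -s rr
# register the Otoroshi instances as real servers, in NAT (masquerade) mode
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.40:8080 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.41:8080 -m
ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.42:8080 -m

# same thing for TLS traffic, forwarded as raw TCP to port 8443
ipvsadm -A -t 192.168.1.100:443 -s rr
ipvsadm -a -t 192.168.1.100:443 -r 192.168.1.40:8443 -m
ipvsadm -a -t 192.168.1.100:443 -r 192.168.1.41:8443 -m
ipvsadm -a -t 192.168.1.100:443 -r 192.168.1.42:8443 -m
```

Since IPVS forwards at layer 4, TLS is not terminated by the load balancer and must be handled by the Otoroshi instances themselves.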

Using DNS Round Robin

You can use the DNS round robin technique to declare multiple A records under the domain names handled by Otoroshi.
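For instance, with three instances, the records in a BIND-style zone file could look like this (the IP addresses and TTL are illustrative; the other domain names handled by Otoroshi would get the same treatment):

```
otoroshi.oto.tools.    300    IN    A    192.168.1.40
otoroshi.oto.tools.    300    IN    A    192.168.1.41
otoroshi.oto.tools.    300    IN    A    192.168.1.42
```

Keep in mind that DNS round robin provides no health checking: a resolver may keep handing out the address of a dead instance until the record is removed and the TTL expires.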

Using software L4/L7 load balancers

You can use software L4/L7 load balancers like NGINX or HAProxy to distribute incoming traffic across multiple instances of Otoroshi. Here is an example NGINX configuration:

```nginx
upstream otoroshi {
  server 192.168.1.40:8080 max_fails=1;
  server 192.168.1.41:8080 max_fails=1;
  server 192.168.1.42:8080 max_fails=1;
}

server {
  listen 80;
  # http://nginx.org/en/docs/http/server_names.html
  server_name otoroshi.oto.tools otoroshi-api.oto.tools otoroshi-admin-internal-api.oto.tools privateapps.oto.tools *-api.oto.tools;
  location / {
    # SSE config
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;

    # websockets config
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # other config
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://otoroshi;
  }
}

server {
  listen 443 ssl;
  # http://nginx.org/en/docs/http/server_names.html
  server_name otoroshi.oto.tools otoroshi-api.oto.tools otoroshi-admin-internal-api.oto.tools privateapps.oto.tools *-api.oto.tools;
  ssl_certificate /etc/letsencrypt/wildcard.oto.tools/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/wildcard.oto.tools/privkey.pem;
  ssl_session_cache shared:SSL:10m;
  ssl_session_timeout 5m;
  ssl_prefer_server_ciphers on;
  ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
  location / {
    # SSE config
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;

    # websockets config
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # other config
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://otoroshi;
  }
}
```
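An equivalent HAProxy configuration could look like the following sketch (the IP addresses are the same illustrative ones as above; the health-check path is an assumption, adjust it to your setup):

```haproxy
frontend otoroshi_http
    bind *:80
    mode http
    option forwardfor
    default_backend otoroshi_backend

frontend otoroshi_https
    # TLS is passed through as raw TCP and terminated by Otoroshi itself
    bind *:443
    mode tcp
    default_backend otoroshi_backend_tls

backend otoroshi_backend
    mode http
    balance roundrobin
    # assumed health-check endpoint, not mandated by Otoroshi
    option httpchk GET /health
    server oto1 192.168.1.40:8080 check
    server oto2 192.168.1.41:8080 check
    server oto3 192.168.1.42:8080 check

backend otoroshi_backend_tls
    mode tcp
    balance roundrobin
    server oto1 192.168.1.40:8443 check
    server oto2 192.168.1.41:8443 check
    server oto3 192.168.1.42:8443 check
```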

Using a custom TCP load balancer

You can also use any other TCP load balancer, from a hardware appliance to a small Node.js script like the following one, based on the node-tcp-proxy package:

```javascript
// npm install node-tcp-proxy
const proxy = require("node-tcp-proxy");

const hosts = ["192.168.1.40", "192.168.1.41", "192.168.1.42"];
const portsHttp = [8080, 8080, 8080];
const portsHttps = [8443, 8443, 8443];

// round robin plain HTTP traffic across the Otoroshi instances
const proxyHttp = proxy.createProxy(80, hosts, portsHttp, {
  tls: false
});

// HTTPS traffic is forwarded as raw TCP (tls: false means the proxy
// does not terminate TLS itself; Otoroshi handles TLS on port 8443)
const proxyHttps = proxy.createProxy(443, hosts, portsHttps, {
  tls: false
});
```