Netty Server

Otoroshi uses a Netty-based HTTP server built on Reactor Netty as an alternative to Akka HTTP. This server supports HTTP/1.1, HTTP/2 (including H2C cleartext), and HTTP/3 (QUIC). It is also the foundation for custom HTTP listeners.

Enable the server

To enable the Netty server, set the following configuration:

otoroshi.next.experimental.netty-server.enabled = true

or via environment variable:

OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ENABLED=true

On startup, you should see something like:

root [info] otoroshi-experimental-netty-server -
root [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!
root [info] otoroshi-experimental-netty-server -
root [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)
root [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)
root [info] otoroshi-experimental-netty-server -

How it works

The Netty server starts up to three separate server instances:

| Server | Transport | Port (default) | Protocols | Description |
|---|---|---|---|---|
| HTTPS | TCP | 10048 | HTTP/1.1, HTTP/2 | TLS-terminated with ALPN negotiation for HTTP/2 |
| HTTP | TCP | 10049 | HTTP/1.1, H2C | Cleartext HTTP with optional HTTP/2 cleartext upgrade |
| HTTP/3 | UDP (QUIC) | 10048 | HTTP/3 | Separate QUIC-based server (can share the same port number as HTTPS since it uses UDP) |

Each server uses its own event loop group. All servers share the same dynamic TLS engine (DynamicSSLEngineProvider) for automatic SNI-based certificate selection.

Server configuration

General settings

| Config key | Default | Env variable | Description |
|---|---|---|---|
| enabled | false | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ENABLED | Enable the Netty server |
| new-engine-only | false | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NEW_ENGINE_ONLY | Only use the new proxy engine (skip Play routing) |
| host | 0.0.0.0 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HOST | Network interface to bind to |
| http-port | 10049 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_PORT | Cleartext HTTP port |
| exposed-http-port | same as http-port | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTP_PORT | Externally visible HTTP port (behind load balancer or in containers) |
| https-port | 10048 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTPS_PORT | TLS HTTPS port |
| exposed-https-port | same as https-port | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_EXPOSED_HTTPS_PORT | Externally visible HTTPS port |
| threads | 0 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_THREADS | Number of worker threads. 0 = auto (based on CPU cores) |
| wiretap | false | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_WIRETAP | Enable wire-level debug logging of all Netty channel operations |
| accesslog | false | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_ACCESSLOG | Enable access logging (see access logging) |

Example configuration

otoroshi.next.experimental.netty-server {
  enabled = true
  host = "0.0.0.0"
  http-port = 10049
  https-port = 10048
  threads = 0
  accesslog = true
  wiretap = false
  http2 {
    enabled = true
    h2c = true
  }
  http3 {
    enabled = true
    port = 10048
  }
  native {
    enabled = true
    driver = "Auto"
  }
}

HTTP protocol settings

HTTP/1.1

HTTP/1.1 is enabled by default. It supports keep-alive connections, chunked transfer encoding, and WebSocket upgrades.

HTTP/2

HTTP/2 is enabled by default over TLS via ALPN negotiation. H2C (HTTP/2 cleartext) is also enabled by default on the HTTP port.

| Config key | Default | Env variable | Description |
|---|---|---|---|
| http2.enabled | true | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_ENABLED | Enable HTTP/2 over TLS |
| http2.h2c | true | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP2_H2C | Enable HTTP/2 cleartext (H2C) on the HTTP port |
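
For example, to keep HTTP/2 over TLS but refuse cleartext H2C upgrades, a minimal sketch using the keys from the table above:

```hocon
otoroshi.next.experimental.netty-server {
  http2 {
    enabled = true   # HTTP/2 still negotiated via ALPN on the HTTPS port
    h2c = false      # no HTTP/2 cleartext upgrade on the HTTP port
  }
}
```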

HTTP/3 (QUIC)

HTTP/3 runs over the QUIC protocol using UDP. It uses netty-incubator-codec-quic and netty-incubator-codec-http3. The HTTP/3 port can be the same as the HTTPS port since QUIC uses UDP while HTTPS uses TCP.

| Config key | Default | Env variable | Description |
|---|---|---|---|
| http3.enabled | false | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_ENABLED | Enable HTTP/3 |
| http3.port | 10048 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_PORT | UDP port for QUIC |
| http3.exposedPort | 10048 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_EXPOSED_PORT | Externally visible HTTP/3 port |
| http3.maxSendUdpPayloadSize | 1500 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_MAX_SEND_UDP_PAYLOAD_SIZE | Maximum outgoing UDP payload size (bytes) |
| http3.maxRecvUdpPayloadSize | 1500 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP3_MAX_RECV_UDP_PAYLOAD_SIZE | Maximum incoming UDP payload size (bytes) |
| http3.initialMaxData | 10000000 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_DATA | Initial flow control limit per connection (bytes) |
| http3.initialMaxStreamDataBidirectionalLocal | 1000000 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_LOCAL | Initial flow control limit per locally-initiated stream (bytes) |
| http3.initialMaxStreamDataBidirectionalRemote | 1000000 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAM_DATA_BIDIRECTIONAL_REMOTE | Initial flow control limit per remotely-initiated stream (bytes) |
| http3.initialMaxStreamsBidirectional | 100000 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_INITIAL_MAX_STREAMS_BIDIRECTIONAL | Maximum number of concurrent bidirectional streams |
| http3.disableQpackDynamicTable | true | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_HTTP_3_DISABLE_QPACK_DYNAMIC_TABLE | Disable QPACK dynamic table for header compression |
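
A minimal sketch that turns on HTTP/3 alongside the default HTTPS server; the UDP payload sizes shown are illustrative values for a typical 1500-byte MTU path, not recommendations:

```hocon
otoroshi.next.experimental.netty-server {
  http3 {
    enabled = true   # start the QUIC/UDP server
    port = 10048     # may share the HTTPS port number, since QUIC is UDP
    # illustrative tuning: leave headroom below the path MTU
    maxSendUdpPayloadSize = 1452
    maxRecvUdpPayloadSize = 1452
  }
}
```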

When HTTP/3 is enabled, the startup log will show:

root [info] otoroshi-experimental-netty-server -
root [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!
root [info] otoroshi-experimental-netty-server -
root [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/3)
root [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)
root [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)
root [info] otoroshi-experimental-netty-server -

HTTP parser settings

These settings control how the Netty HTTP decoder processes incoming requests. The defaults follow Netty's HttpDecoderSpec values.

| Config key | Default | Env variable | Description |
|---|---|---|---|
| parser.allowDuplicateContentLengths | false | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_ALLOW_DUPLICATE_CONTENT_LENGTHS | Allow duplicate Content-Length headers |
| parser.validateHeaders | true | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_VALIDATE_HEADERS | Validate HTTP header names and values |
| parser.h2cMaxContentLength | 65536 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_H_2_C_MAX_CONTENT_LENGTH | Maximum content length for H2C upgrade requests (bytes) |
| parser.initialBufferSize | 1024 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_INITIAL_BUFFER_SIZE | Initial buffer size for the HTTP decoder (bytes) |
| parser.maxHeaderSize | 8192 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_HEADER_SIZE | Maximum size of all HTTP headers combined (bytes) |
| parser.maxInitialLineLength | 4096 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_INITIAL_LINE_LENGTH | Maximum length of the initial HTTP request line (bytes) |
| parser.maxChunkSize | 8192 | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_PARSER_MAX_CHUNK_SIZE | Maximum size of a single HTTP chunk (bytes) |
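
If clients send unusually large cookies or long request URIs, the parser limits can be raised; a sketch using the keys above (the doubled values are illustrative, not recommendations):

```hocon
otoroshi.next.experimental.netty-server {
  parser {
    maxHeaderSize = 16384         # double the default 8192 bytes for large cookies
    maxInitialLineLength = 8192   # allow longer request lines / URIs
  }
}
```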

Native transport

Native transport provides higher performance by using OS-specific I/O mechanisms instead of Java NIO. It is enabled by default with automatic driver detection.

| Config key | Default | Env variable | Description |
|---|---|---|---|
| native.enabled | true | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_ENABLED | Enable native transport |
| native.driver | Auto | OTOROSHI_NEXT_EXPERIMENTAL_NETTY_SERVER_NATIVE_DRIVER | Native transport driver (see below) |

Available drivers

| Driver | Platform | Description |
|---|---|---|
| Auto | Any | Automatically selects the best available native transport for the current platform |
| Epoll | Linux | Uses Linux epoll for high-performance I/O |
| KQueue | macOS / BSD | Uses BSD kqueue for high-performance I/O |
| IOUring | Linux (5.1+) | Uses Linux io_uring for asynchronous I/O (experimental, via netty-incubator-transport-io_uring) |

If the selected native driver is not available on the platform, the server falls back to Java NIO.
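
To pin a specific driver instead of relying on auto-detection, e.g. epoll on a Linux host, a sketch:

```hocon
otoroshi.next.experimental.netty-server {
  native {
    enabled = true
    driver = "Epoll"   # one of Auto, Epoll, KQueue, IOUring
  }
}
```

With this setting, the server still falls back to Java NIO if epoll is not available at runtime.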

When native transport is active, the log will show the driver being used:

root [info] otoroshi-experimental-netty-server -
root [info] otoroshi-experimental-netty-server - Starting the experimental Netty Server !!!
root [info] otoroshi-experimental-netty-server -
root [info] otoroshi-experimental-netty-server - using KQueue native transport
root [info] otoroshi-experimental-netty-server -
root [info] otoroshi-experimental-netty-server - https://0.0.0.0:10048 (HTTP/1.1, HTTP/2)
root [info] otoroshi-experimental-netty-server - http://0.0.0.0:10049 (HTTP/1.1, HTTP/2 H2C)
root [info] otoroshi-experimental-netty-server -

TLS configuration

The Netty server uses Otoroshi's dynamic TLS engine for automatic certificate selection based on SNI (Server Name Indication). TLS configuration is inherited from the global Otoroshi settings:

  • Cipher suites: from otoroshi.ssl.cipherSuites
  • TLS protocols: from otoroshi.ssl.protocols
  • Client authentication: configurable per server (via custom HTTP listeners), defaults from global SSL config
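
As an illustration, the global keys above could be used to restrict the dynamic TLS engine to modern protocols and suites. This is a sketch: the list-valued shape is an assumption, and the cipher names are standard JSSE identifiers chosen for the example:

```hocon
otoroshi {
  ssl {
    protocols = ["TLSv1.2", "TLSv1.3"]
    cipherSuites = [
      "TLS_AES_128_GCM_SHA256",
      "TLS_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
    ]
  }
}
```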

ALPN negotiation is used to select between HTTP/1.1 and HTTP/2 on the HTTPS port. The server prefers HTTP/2 (h2) when the client supports it, and falls back to http/1.1.

For HTTP/3, the QUIC stack uses its own TLS context with dynamic certificate loading per SNI domain and 0-RTT early data enabled.

Access logging

When accesslog is enabled, each request is logged using the following format:

{remote_address} - {user} [{date_time}] "{method} {uri} {protocol}" {status} {content_length} {duration_ms} {tls_version}

Example:

192.168.1.10 - - [13/Mar/2026:10:30:45 +0100] "GET /api/users HTTP/1.1" 200 1024 12 TLSv1.3

The TLS version field shows the negotiated TLS version (e.g., TLSv1.2, TLSv1.3) or - for cleartext connections.

Access logs are written via the reactor.netty.http.server.AccessLog logger.

WebSocket support

The Netty server fully supports WebSocket connections over HTTP/1.1 and HTTP/2. WebSocket upgrades are detected via the Upgrade and Sec-WebSocket-Version headers and routed to the proxy engine's WebSocket handler.

Trailer headers

The server supports HTTP/2 and HTTP/3 trailer headers. Trailers are stored in an internal cache (TTL: 10 seconds, max 1000 entries) and can be accessed asynchronously during request processing.

Thread pool

The server creates separate event loop groups for HTTP and HTTPS connections:

  • When threads = 0 (default): Reactor Netty automatically determines the number of threads based on available CPU cores
  • When threads > 0: the specified number of threads is used for each event loop group

The HTTP/3 server uses its own NioEventLoopGroup with the same thread count setting.
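
For instance, to cap each event loop group at four threads instead of auto-sizing, a sketch:

```hocon
otoroshi.next.experimental.netty-server {
  threads = 4   # applied to the HTTP, HTTPS and HTTP/3 event loop groups
}
```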

Relationship with custom HTTP listeners

The Netty server described here is the "experimental" built-in server started from the configuration file. Custom HTTP listeners use the same Netty infrastructure but can be created dynamically at runtime, each with its own port, protocol, and TLS settings. Custom listeners also support exclusive mode and route/plugin binding.