nginx Reverse Proxy

Running nginx as a container on RouterOS centralizes HTTP/HTTPS ingress for services on your LAN. Instead of maintaining per-service port-forwarding rules, a single nginx instance receives inbound requests and proxies them to the appropriate backend by hostname or path.

Typical use cases:

  • Reverse-proxy multiple LAN web services (NAS, home automation, cameras) through a single public IP and port
  • Terminate TLS at the nginx container, forwarding plain HTTP to internal backends
  • Add basic authentication or rate limiting without touching individual backend services
  • Host custom internal HTTP services alongside standard router functions

Requirements:

  • RouterOS v7.4 or later with the container package installed
  • External storage (USB, SATA, or NVMe)
  • Device mode with container support enabled
  • A prepared nginx configuration file
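
If container support is not yet enabled in device mode, it can be switched on with the command below. Note that RouterOS requires a physical confirmation (a button press or cold reboot, depending on the model) before the change takes effect:

/system/device-mode/update container=yes

Verify the current state with /system/device-mode/print.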

Create a dedicated bridge and veth interface for the nginx container:

/interface/bridge/add name=br-proxy
/ip/address/add address=172.19.0.1/24 interface=br-proxy
/interface/veth/add name=veth-nginx address=172.19.0.2/24 gateway=172.19.0.1
/interface/bridge/port/add bridge=br-proxy interface=veth-nginx
/ip/firewall/nat/add chain=srcnat src-address=172.19.0.0/24 action=masquerade

Redirect inbound HTTP and HTTPS from the WAN interface to the nginx container:

/ip/firewall/nat/add chain=dstnat in-interface-list=WAN protocol=tcp dst-port=80 action=dst-nat to-addresses=172.19.0.2 to-ports=80
/ip/firewall/nat/add chain=dstnat in-interface-list=WAN protocol=tcp dst-port=443 action=dst-nat to-addresses=172.19.0.2 to-ports=443
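
Clients on the LAN that connect to the router's public address will not match in-interface-list=WAN. If they must reach the proxy through the WAN address, a hairpin-NAT rule pair is needed; the following is a sketch, assuming a LAN of 192.168.88.0/24 and a public address of 203.0.113.1 (both placeholders to adjust for your network). Alternatively, split DNS pointing the hostnames at the router's LAN address avoids hairpin NAT entirely:

/ip/firewall/nat/add chain=dstnat src-address=192.168.88.0/24 dst-address=203.0.113.1 protocol=tcp dst-port=80,443 action=dst-nat to-addresses=172.19.0.2
/ip/firewall/nat/add chain=srcnat src-address=192.168.88.0/24 dst-address=172.19.0.2 protocol=tcp dst-port=80,443 action=masquerade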

If your firewall forward chain default policy is drop, add explicit accept rules:

/ip/firewall/filter/add chain=forward dst-address=172.19.0.2 protocol=tcp dst-port=80,443 action=accept place-before=0

nginx proxies requests to LAN backends (for example, 192.168.88.10). The container reaches LAN addresses through its default gateway (172.19.0.1); the router then routes the traffic on to the LAN subnet. Ensure firewall forward rules allow traffic from 172.19.0.0/24 to the target backend addresses.

nginx requires a configuration file before the container starts. Create a configuration directory on external storage and upload your config:

/file/add name=disk1/nginx/conf type=directory
/file/add name=disk1/nginx/logs type=directory

Create disk1/nginx/conf/nginx.conf via FTP, SCP, or the RouterOS file editor:

worker_processes auto;

events {
    worker_connections 512;
}

http {
    # Service A — route by hostname
    server {
        listen 80;
        server_name serviceA.example.com;

        location / {
            proxy_pass http://192.168.88.10:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    # Service B — route by hostname
    server {
        listen 80;
        server_name serviceB.example.com;

        location / {
            proxy_pass http://192.168.88.20:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Adjust server_name, proxy_pass targets, and ports to match your environment. For TLS termination, add a listen 443 ssl block and provide certificate paths mounted from router storage.
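
A minimal sketch of such a TLS server block, assuming the certificate and key have been uploaded under the mounted conf directory (the certs/ subdirectory and file names are placeholders; files placed in disk1/nginx/conf appear under /etc/nginx inside the container):

```nginx
server {
    listen 443 ssl;
    server_name serviceA.example.com;

    # Paths are relative to the container; disk1/nginx/conf is mounted at /etc/nginx
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://192.168.88.10:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

If you also want to redirect plain HTTP to HTTPS, replace the corresponding port-80 server block with one that returns 301 https://$host$request_uri;.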

/container/mounts/add name=nginx-conf src=disk1/nginx/conf dst=/etc/nginx
/container/mounts/add name=nginx-logs src=disk1/nginx/logs dst=/var/log/nginx
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
/container/add \
    remote-image=nginx:alpine \
    interface=veth-nginx \
    root-dir=disk1/images/nginx \
    mounts=nginx-conf,nginx-logs \
    start-on-boot=yes \
    logging=yes \
    name=nginx-proxy

Wait for image extraction to complete:

/container/print

The container's status changes to stopped when extraction finishes.

/container/start nginx-proxy

Verify it is running:

/container/print

The nginx Alpine image is lightweight and well-suited for RouterOS hardware:

Resource   Typical usage
RAM        20–50 MB
CPU        Low (< 5% at typical LAN load)
Storage    ~10 MB image

Apply resource limits to protect the router:

/container/set nginx-proxy memory-high=64MiB cpu-list=0

See Container Resource Limits for details.

Troubleshooting

If the container starts and then immediately stops, the most common cause is a malformed nginx.conf. Validate the configuration before deploying with nginx's built-in syntax check (nginx -t) on any machine with nginx available. If it is already deployed, check the container logs:

/log print where topics~"container"

Confirm the container is running and the veth has its address:

/container/print
/interface/veth/print

Verify the dst-nat rules are present and in-interface-list is correct for your WAN interface:

/ip/firewall/nat/print

Check that the router forwards traffic from 172.19.0.0/24 to backend addresses. Add a firewall forward accept rule if needed:

/ip/firewall/filter/add chain=forward src-address=172.19.0.0/24 dst-address=192.168.88.0/24 action=accept place-before=0