Container

MikroTik’s Container feature allows you to run Linux containerized environments directly on RouterOS v7.x devices. Containers are compatible with images from Docker Hub, Google Container Registry (GCR), Quay, and other providers, enabling you to run familiar containerized applications on your MikroTik hardware.

The Container feature in RouterOS provides a lightweight virtualization solution that allows you to run containerized applications alongside your router’s core networking functions. Unlike full virtual machines, containers share the host operating system’s kernel while maintaining isolated user spaces for each application.

  • Run standard Docker-compatible container images on RouterOS
  • Isolate applications in their own environments
  • Configure networking for containers with flexible topologies
  • Mount external storage for persistent data
  • Set environment variables and startup parameters
  • Configure auto-restart and boot persistence
  • Access container shells for debugging and management

Containers in RouterOS operate through several interconnected components:

| Component | Purpose |
| --- | --- |
| Container package | Core software enabling container functionality |
| veth interfaces | Virtual ethernet connections between containers and the router network |
| Container config | Registry and resource management settings |
| Environment variables | Configuration passed to containerized applications |
| Mounts | Persistent storage mappings between the router and containers |
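
As an illustrative sketch of how these components fit together (the interface, list, and path names such as veth1, ENV_APP, MOUNT_APP, and disk1 are placeholders, not required values), a typical configuration touches each component in order:

```
# veth interface: the network path between container and router
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
# container config: where images are pulled from and unpacked
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
# environment variables and mounts: app configuration and persistent data
/container/envs/add list=ENV_APP key=TZ value="UTC"
/container/mounts/add name=MOUNT_APP src=disk1/volumes/app dst=/data
# the container itself ties all of the above together
/container/add remote-image=alpine interface=veth1 root-dir=disk1/images/app envlist=ENV_APP mounts=MOUNT_APP name=app
```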

Before using containers, ensure your environment meets these requirements:

| Requirement | Details |
| --- | --- |
| Architecture | arm, arm64, or x86 |
| RouterOS version | v7.4 or later |
| Storage | External disk (USB, SATA, NVMe) recommended |
| Disk performance | 100 MB/s sequential read/write, 10K random IOPS |

The Container package must be installed on your RouterOS device. Devices with EN7562CT CPU (such as hEX Refresh) only support arm32v5 images, limiting available container options.

Containers require enabling Device Mode on your RouterOS device:

/system/device-mode/update container=yes

After executing this command, you must confirm the change by pressing the device's physical reset button or, on x86 devices, by performing a cold reboot.
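
Before proceeding, you can check whether the flag is now active; /system/device-mode/print shows the current device-mode settings (look for container: yes in the output):

```
/system/device-mode/print
```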

The global container configuration controls registry access and resource limits:

/container/config/

| Property | Description | Default |
| --- | --- | --- |
| registry-url | External registry for downloading images | https://lscr.io/ |
| tmpdir | Temporary extraction directory | - |
| memory-high | Soft limit on RAM usage, in bytes | unlimited |
| username | Registry authentication username (RouterOS v7.8+) | - |
| password | Registry authentication password (RouterOS v7.8+) | - |

For example, to pull from Docker Hub and cap container memory:

/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
/container/config/set memory-high=512M

Individual containers are configured under /container:

/container/print
| Property | Description |
| --- | --- |
| auto-restart-interval | Interval for automatic restart on failure (e.g., "10s") |
| cmd | Default executable for the container |
| comment | Short description for identification |
| dns | Custom DNS servers for the container |
| domain-name | DNS domain name |
| entrypoint | Executable to run at container startup (e.g., "/bin/sh") |
| envlist | Environment variable list reference |
| file | Path to an imported tar.gz container image |
| hostname | Container hostname for identification |
| interface | veth interface for network connectivity |
| logging | Log container output to the RouterOS log |
| start-on-boot | Auto-start the container on device boot |
| mounts | Volume mount references |
| remote-image | Image name from the registry |
| root-dir | Container storage location |
| stop-signal | Linux signal for graceful shutdown (default: 15, SIGTERM) |
| workdir | Working directory for the container process |
| devices | Physical device passthrough |
| cpu-list | CPU core affinity |
| user | User and group for the container process |
| memory-high | Soft limit on the container's RAM usage |
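
As a hedged sketch of how several of these properties combine in one command (the alpine image, paths, cmd, and 128M limit are illustrative choices, not requirements):

```
/container/add remote-image=alpine:latest interface=veth1 root-dir=disk1/images/alpine cmd="sleep 3600" logging=yes start-on-boot=yes memory-high=128M comment="test container" name=alpine-test
```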

There are three methods to add container images to RouterOS.

Download images directly from Docker Hub or other registries:

/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
/container/add remote-image=pihole/pihole interface=veth1 root-dir=disk1/images/pihole name=pihole

Import pre-downloaded container archives:

/container/add file=disk1/pihole.tar interface=veth1 root-dir=disk1/images/pihole name=pihole

Build and import custom containers using Podman or Docker:

podman pull --arch=arm64 docker.io/pihole/pihole
podman save pihole > pihole.tar

Upload to router and import:

/container/add file=disk1/pihole.tar interface=veth1 root-dir=disk1/images/pihole name=pihole
/container/start pihole

Manage the container lifecycle and inspect its configuration:

/container/stop pihole
/container/restart pihole
/container/print detail

Execute commands inside a running container:

/container/shell pihole

With specific user and command:

/container/shell nextcloud user=www-data cmd="php /var/www/html/cron.php" no-sh

Enable logging and boot persistence for a container:

/container/set pihole logging=yes start-on-boot=yes

Configure container behavior through environment variables:

/container/envs/add list=ENV_PIHOLE key=TZ value="Europe/Riga"
/container/envs/add list=ENV_PIHOLE key=FTLCONF_webserver_api_password value="mysecurepassword"

Environment variables are organized in named lists:

/container/envs/print

Mount router storage into containers for persistent data:

/container/mounts/add name=MOUNT_PIHOLE src=disk1/volumes/pihole dst=/etc/pihole

Multiple mounts can be referenced when creating containers:

/container/add remote-image=pihole/pihole interface=veth1 root-dir=disk1/images/pihole mounts=MOUNT_PIHOLE,MOUNT_PIHOLE_DNSMASQ envlist=ENV_PIHOLE name=pihole

Containers connect to the router network through veth (virtual ethernet) interfaces.

/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1

Multiple addresses including IPv6:

/interface/veth/add address=172.17.0.3/16,fd8d:5ad2:24:2::2/64 gateway=172.17.0.1 gateway6=fd8d:5ad2:24:2::1 name=veth2

In a NAT setup, container veth interfaces attach to a dedicated bridge, and outbound traffic is masqueraded:

/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 interface=containers
/interface/bridge/port/add bridge=containers interface=veth1
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
/ip/firewall/nat/add action=dstnat chain=dstnat dst-address=192.168.88.1 dst-port=80 protocol=tcp to-addresses=172.17.0.2 to-ports=80

Create separate network segments for different container groups:

/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/interface/veth/add name=veth2 address=172.18.0.2/24 gateway=172.18.0.1
/interface/bridge/add name=containers1
/interface/bridge/add name=containers2
/ip/address/add address=172.17.0.1/24 interface=containers1
/ip/address/add address=172.18.0.1/24 interface=containers2
/interface/bridge/port/add bridge=containers1 interface=veth1
/interface/bridge/port/add bridge=containers2 interface=veth2
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.18.0.0/24

Attach containers directly to your Layer2 network:

/interface/veth/add name=veth1 address=192.168.88.2/24 gateway=192.168.88.1
/interface/bridge/port/add bridge=bridge interface=veth1

This exposes all container ports to the network. Only use when required for broadcast-based service discovery.

Enable dual-stack networking for containers:

/ip/address/add address=172.17.0.1/24 interface=containers
/ip/firewall/nat/add action=masquerade chain=srcnat src-address=172.17.0.0/24
/ipv6/address/add address=fd8d:5ad2:24:2::1 interface=containers
/ipv6/firewall/nat/add action=masquerade chain=srcnat src-address=fd8d:5ad2:24:2::/64
/interface/veth/add address=172.17.0.2/24,fd8d:5ad2:24:2::2/64 gateway=172.17.0.1 gateway6=fd8d:5ad2:24:2::1 name=veth1

Complete example for running a Pi-hole ad-blocking container. Prerequisites:

  1. RouterOS v7.4+ with Container package installed
  2. External storage device (HDD, SSD, or USB drive)
  3. Device mode enabled

Full configuration sequence:
/system/device-mode/update container=yes
/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 interface=containers
/interface/bridge/port/add bridge=containers interface=veth1
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
/ip/firewall/nat/add action=dstnat chain=dstnat dst-address=192.168.88.1 dst-port=80 protocol=tcp to-addresses=172.17.0.2 to-ports=80
/container/envs/add list=ENV_PIHOLE key=TZ value="Europe/Riga"
/container/envs/add list=ENV_PIHOLE key=FTLCONF_webserver_api_password value="mysecurepassword"
/container/envs/add list=ENV_PIHOLE key=DNSMASQ_USER value="root"
/container/mounts/add name=MOUNT_PIHOLE_PIHOLE src=disk1/volumes/pihole/pihole dst=/etc/pihole
/container/mounts/add name=MOUNT_PIHOLE_DNSMASQD src=disk1/volumes/pihole/dnsmasq.d dst=/etc/dnsmasq.d
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
/container/add remote-image=pihole/pihole interface=veth1 root-dir=disk1/images/pihole mounts=MOUNT_PIHOLE_PIHOLE,MOUNT_PIHOLE_DNSMASQD envlist=ENV_PIHOLE name=pihole logging=yes start-on-boot=yes
/container/start pihole

Access the Pi-hole web interface at http://192.168.88.1/admin/

Containers consume significant disk space. Always use external storage (USB, SATA, NVMe) for container images and data:

/container/add remote-image=pihole/pihole root-dir=disk1/images/pihole ...

Control container RAM usage to prevent resource exhaustion:

/container/config/set memory-high=200M

Ensure containers start automatically after router reboot:

/container/set pihole start-on-boot=yes

Enable logging to debug container issues:

/container/set pihole logging=yes
/log print

Some containers require elevated privileges:

/container/set pihole user=0:0

Pass physical devices to containers:

/container/set pihole devices="/dev/kvm,/dev/net/tun"

Follow these security best practices when running containers:
  • Use trusted container images from verified publishers
  • Regularly update container images to patch vulnerabilities
  • Isolate containers in separate network segments
  • Use firewall rules to restrict container network access
  • Avoid running containers with root privileges when possible
  • Monitor container resource usage and activity
  • Consider using read-only mounts where applicable
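
As one possible implementation of the firewall guidance above (assuming the 172.17.0.0/24 container subnet and the 192.168.88.0/24 LAN from the earlier examples), forward-chain filter rules can let containers reach the internet while blocking access to the LAN:

```
# drop container traffic destined for the LAN
/ip/firewall/filter/add chain=forward src-address=172.17.0.0/24 dst-address=192.168.88.0/24 action=drop comment="block containers -> LAN"
# allow everything else (internet) from the container subnet
/ip/firewall/filter/add chain=forward src-address=172.17.0.0/24 action=accept comment="containers -> internet"
```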