VETH

Virtual Ethernet (VETH) interfaces provide network connectivity for containers in RouterOS. Each VETH interface creates a virtual network connection between RouterOS and a container, enabling the container to communicate with other interfaces, networks, and the internet through standard routing and bridging configurations.

VETH interfaces behave like physical Ethernet interfaces in RouterOS, supporting static IP addressing, DHCP client for automatic address assignment, IPv6 SLAAC, and full integration with RouterOS networking features including bridges, firewalls, and routing protocols.

The VETH feature creates paired virtual network interfaces—one end connects to RouterOS, and the other end connects inside the container namespace. This paired architecture means traffic entering RouterOS through one VETH endpoint exits directly into the container, and vice versa, with both endpoints sharing the same Layer 2 broadcast domain until further configuration separates them.

Each container requires at least one VETH interface for network connectivity. Multiple VETH interfaces can be assigned to a single container, enabling complex network topologies such as front-end/back-end architectures, multi-network isolation, or dedicated management networks. Containers can communicate with RouterOS directly through their VETH interfaces, access other containers on shared networks, reach external networks through NAT or routing, and participate in link-layer discovery protocols such as LLDP and CDP.

VETH interfaces support all standard IP addressing methods available to physical interfaces in RouterOS. Static addressing assigns specific IPv4 and IPv6 addresses along with gateway configuration, suitable for predictable network environments requiring fixed addressing. DHCP client mode enables automatic address acquisition from existing DHCP servers, simplifying deployment in networks with existing DHCP infrastructure. IPv6 Stateless Address Auto Configuration (SLAAC) allows containers to automatically generate IPv6 addresses based on router advertisements, providing plug-and-play IPv6 connectivity.

Beyond basic addressing, VETH interfaces participate fully in RouterOS Layer 2 and Layer 3 networking. They can be added to bridge interfaces for Layer 2 connectivity, assigned to VRF instances for network segmentation, referenced in firewall rules for traffic filtering, participate in routing protocols for dynamic route exchange, and be monitored through standard interface statistics and monitoring tools.

Using VETH interfaces requires the Container package installed on RouterOS v7.x. Device mode must be enabled with container support, which requires a router reboot to activate. The router must have sufficient resources to support the number of VETH interfaces planned, with each interface consuming a small amount of memory and processing overhead.

For network connectivity, containers using VETH interfaces require proper IP addressing configuration and, for external network access, either NAT configuration or routing infrastructure. The router must have available IP addresses in the addressing scheme used for containers, whether through DHCP reservation, static assignment from a dedicated pool, or address space allocated specifically for container networking.

VETH interfaces require RouterOS v7.4 or later. Earlier versions lack the virtual Ethernet interface capability essential for container networking. When upgrading to enable container functionality, note that Device Mode changes require a cold reboot—the router must be powered off and back on, not just restarted through software, for the change to take effect.
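Container support is enabled through the Device Mode facility. A minimal example (after issuing the command, RouterOS asks for confirmation by pressing the device's reset button or power-cycling within a time window; the exact confirmation method varies by device):

/system/device-mode/update container=yes

Until the change is confirmed and the device restarted, the container package remains inactive.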

Each VETH interface consumes a minimal amount of RAM for interface structures and statistics tracking. The practical limit on VETH interface count depends on total router memory and other services running simultaneously. For typical deployments, routers with 256 MB or more RAM can support dozens of VETH interfaces without performance impact. Resource-constrained devices like the hAP ac² or similar entry-level devices should limit VETH count based on observed memory usage during testing.

VETH interfaces are configured under the /interface/veth menu path. Creating a VETH interface establishes the virtual interface pair but does not, by itself, provide network connectivity—the interface must be assigned to a container and properly addressed.

/interface/veth/add name=veth1

This command creates a VETH interface named veth1 with no IP configuration. The interface exists but provides no connectivity until addresses are assigned and the interface connects to a container.
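An interface created without addressing can be configured later with the set command; for example (addresses illustrative):

/interface/veth/set veth1 address=172.17.0.2/24 gateway=172.17.0.1

If the interface is already attached to a running container, restarting the container may be necessary for the new addressing to take effect inside it.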

For automatic address assignment from an existing DHCP server:

/interface/veth/add name=veth1 dhcp=yes

The DHCP client requests IPv4 addresses from the DHCP server. The assigned address, gateway, and DNS servers are automatically configured on the interface. This mode suits networks with existing DHCP infrastructure where containers should obtain addresses dynamically.

For fixed addressing in networks without DHCP:

/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1

This assigns the IPv4 address 172.17.0.2 with a 24-bit subnet mask to the VETH interface, with 172.17.0.1 configured as the default gateway. The container connected to this VETH interface will use these addresses for network communication.

For IPv6-only or dual-stack configurations:

/interface/veth/add name=veth1 address=fd8d:5ad2:24:2::2/64 gateway6=fd8d:5ad2:24:2::1

This assigns the IPv6 address fd8d:5ad2:24:2::2 with a 64-bit prefix length, using fd8d:5ad2:24:2::1 as the IPv6 gateway. IPv6 connectivity requires router advertisements on the connected network segment or explicit gateway configuration.

For full IPv4 and IPv6 connectivity:

/interface/veth/add name=veth1 address=172.17.0.2/24,fd8d:5ad2:24:2::2/64 gateway=172.17.0.1 gateway6=fd8d:5ad2:24:2::1

This configuration provides both IPv4 and IPv6 addresses on the same VETH interface, enabling the container to communicate using either protocol. Both addresses can be used simultaneously for network connectivity.

The following properties configure VETH interface behavior. All properties are set at interface creation time and can be modified afterward using the /interface/veth/set command.

Property                Type           Description                                        Default
name                    string         Interface name for identification and assignment   auto-generated
address                 address        IPv4 or IPv6 address with CIDR notation            none
gateway                 IPv4 address   Default IPv4 gateway                               none
gateway6                IPv6 address   Default IPv6 gateway                               none
dhcp                    yes/no         Enable DHCP client for automatic addressing        no
mac                     MAC address    Router-side MAC address                            auto-generated
container-mac-address   MAC address    Container-side MAC address                         auto-generated

By default, RouterOS generates MAC addresses for VETH interfaces automatically. The router-side MAC address uses a locally administered address space to avoid conflicts with manufacturer-assigned addresses on physical hardware. The container-side MAC address is similarly auto-generated.

For specific network requirements, explicit MAC addresses can be assigned:

/interface/veth/add name=veth1 address=172.17.0.2/24 mac=AA:BB:CC:DD:EE:01 container-mac-address=AA:BB:CC:DD:EE:02

MAC address configuration proves useful when network infrastructure depends on specific MAC address patterns, when implementing MAC-based firewall rules or authentication, or when duplicating addresses from legacy systems during migration.

After creating a VETH interface, it must be assigned to a container for network connectivity. The container references the VETH interface by name, and RouterOS connects the container namespace to the router-side of the VETH pair.

/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/container/add remote-image=nginx:latest interface=veth1 root-dir=disk1/images/nginx name=webserver

In this configuration, the nginx container receives network connectivity through veth1. The container obtains the IP address 172.17.0.2 and routes traffic through the gateway at 172.17.0.1, which is typically a router interface or bridge IP address.

Complex container deployments may require multiple network interfaces:

/interface/veth/add name=veth-frontend address=10.0.1.10/24 gateway=10.0.1.1
/interface/veth/add name=veth-backend address=10.0.2.10/24 gateway=10.0.2.1
/interface/veth/add name=veth-management address=10.0.100.10/24
/container/add remote-image=myapp:latest interface=veth-frontend,veth-backend,veth-management root-dir=disk1/images/myapp name=myapp

This configuration assigns three VETH interfaces to a single container, enabling network segmentation for front-end traffic, back-end database connectivity, and out-of-band management access.

VETH interfaces support various network topologies depending on the desired container connectivity model. The appropriate topology depends on factors including container isolation requirements, existing network infrastructure, external access needs, and security considerations.

The most common topology places VETH interfaces on a dedicated bridge, providing Layer 2 connectivity between containers and enabling NAT for external access. This model mimics traditional Docker networking and provides straightforward container internet access.

/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 interface=containers
/interface/bridge/port/add bridge=containers interface=veth1
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24

The bridge creates a Layer 2 segment containing all connected VETH interfaces. The router’s IP address on the bridge serves as the default gateway for containers. Masquerading enables containers to access external networks through source NAT, making all container traffic appear to originate from the router’s external IP address.

For inbound access to container services, destination NAT forwards traffic to specific containers:

/ip/firewall/nat/add action=dstnat chain=dstnat dst-address=192.168.88.1 dst-port=8080 protocol=tcp to-addresses=172.17.0.2 to-ports=80

This rule forwards TCP traffic arriving at the router on port 8080 to port 80 on the container at 172.17.0.2.
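To keep the rule from matching traffic arriving on internal interfaces, the dstnat rule can match the inbound interface instead of a fixed router address (ether1 here is an assumed WAN interface; substitute your own):

/ip/firewall/nat/add action=dstnat chain=dstnat in-interface=ether1 dst-port=8080 protocol=tcp to-addresses=172.17.0.2 to-ports=80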

For enhanced isolation, each container or container group can use a separate bridge and address space:

/interface/veth/add name=veth1 address=172.17.0.2/24 gateway=172.17.0.1
/interface/veth/add name=veth2 address=172.18.0.2/24 gateway=172.18.0.1
/interface/bridge/add name=containers-web
/interface/bridge/add name=containers-db
/ip/address/add address=172.17.0.1/24 interface=containers-web
/ip/address/add address=172.18.0.1/24 interface=containers-db
/interface/bridge/port/add bridge=containers-web interface=veth1
/interface/bridge/port/add bridge=containers-db interface=veth2
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.18.0.0/24

Containers on veth1 and veth2 sit on separate Layer 2 segments and cannot reach each other at Layer 2. Note, however, that both subnets are directly connected to the router, which will route between them by default; true isolation between the container groups requires firewall filter rules blocking forwarded traffic between the two subnets.
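Isolation between the two container subnets can be enforced with firewall filter drop rules; a minimal sketch using the addresses from the example above:

/ip/firewall/filter/add chain=forward src-address=172.17.0.0/24 dst-address=172.18.0.0/24 action=drop
/ip/firewall/filter/add chain=forward src-address=172.18.0.0/24 dst-address=172.17.0.0/24 action=drop

Place these rules before any general accept rules in the forward chain so they match first.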

For containers requiring direct Layer 2 network access, VETH interfaces can connect directly to existing bridges without separate address configuration:

/interface/veth/add name=veth1 address=192.168.88.10/24
/interface/bridge/port/add bridge=bridge-lan interface=veth1

The container receives its address from the existing DHCP server on the bridged network or uses static addressing from the LAN address space. This mode exposes the container to the entire Layer 2 domain, enabling broadcast-based protocols and direct LAN connectivity.

Layer 2 bridge mode exposes containers to all traffic on the bridged network and should only be used when required for specific protocols or network integration needs.

For network segmentation using Virtual Routing and Forwarding instances, VETH interfaces can be assigned to VRFs:

/interface/veth/add name=veth1 address=10.1.1.2/24 gateway=10.1.1.1
/ip/vrf/add name=container-vrf interfaces=veth1

Containers connected to this VETH interface participate in the VRF’s routing table, isolating their traffic from other router interfaces. This topology suits multi-tenant deployments or strict network segmentation requirements.

Monitor VETH interfaces using standard RouterOS interface commands:

/interface/veth/print detail
/interface/veth/print stats

The detailed output shows addressing configuration, MAC addresses, and connection status. Statistics display traffic counters including bytes sent and received, packet counts, and errors.

Verify container connectivity from the router:

/ping 172.17.0.2

Trace the path toward a container to confirm routing and identify where traffic stops:

/tool/traceroute 172.17.0.2

Container-side connectivity can be verified through the container shell:

/container/shell webserver

Once inside the container shell, standard network diagnostic tools reveal the container’s network state:

ip addr show
ip route show
ping 172.17.0.1

VETH connectivity issues typically stem from addressing misconfiguration, missing gateway routes, bridge membership problems, or firewall blocking.

When containers cannot reach external networks, verify masquerading is configured on the correct source address range and that the NAT rule appears in /ip/firewall/nat print. Confirm the container’s default route points to a valid gateway address reachable through the VETH interface.
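The NAT rule and its hit counters can be inspected directly; counters that do not increment while a container generates outbound traffic suggest the traffic never reaches the rule:

/ip/firewall/nat/print stats where chain=srcnat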

When containers cannot communicate with each other, verify both VETH interfaces connect to the same bridge or routing domain. Check /interface/bridge/port print to confirm bridge membership. Use /tool/ping from the router to verify reachability between VETH interface addresses.

When containers cannot reach the router or vice versa, verify the VETH interface is enabled and has a valid address. Check /interface/veth/print for the interface status. Verify no firewall rules in the filter table block traffic between the router and the container IP address.

/interface/veth/print
/ip/address/print where interface=veth1
/ip/route/print where dst-address=0.0.0.0/0
/interface/bridge/port/print where interface=veth1

These commands reveal the complete VETH configuration, address assignment, routing, and bridge membership, enabling systematic troubleshooting of connectivity issues.

For issues specific to container network configuration, access the container and examine its network state:

/container/shell webserver

Inside the container, verify the network interface exists, addresses are correctly assigned, and routing is functional:

ip link show
ip addr
ip route
cat /etc/resolv.conf
ping 8.8.8.8

Container-side DNS configuration depends on RouterOS DNS settings or explicit DNS assignment through the container’s DHCP client or static configuration.
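One common arrangement, assuming the container's /etc/resolv.conf points at the VETH gateway address, is to let the router answer DNS queries on behalf of containers:

/ip/dns/set servers=8.8.8.8 allow-remote-requests=yes

Note that allow-remote-requests opens the router's resolver on all interfaces, so firewall rules should restrict DNS access to trusted and container subnets.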

A complete configuration for running an nginx web server container:

/interface/veth/add name=veth-web address=172.17.0.10/24 gateway=172.17.0.1
/interface/bridge/add name=containers
/ip/address/add address=172.17.0.1/24 interface=containers
/interface/bridge/port/add bridge=containers interface=veth-web
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=172.17.0.0/24
/ip/firewall/nat/add action=dstnat chain=dstnat dst-address=192.168.88.1 dst-port=80 protocol=tcp to-addresses=172.17.0.10 to-ports=80
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/tmp
/container/add remote-image=nginx:latest interface=veth-web root-dir=disk1/images/nginx name=webserver
/container/start webserver

The nginx container is accessible at http://192.168.88.1, with traffic forwarded to the container on port 80.

A three-tier application with web, application, and database containers:

/interface/veth/add name=veth-web address=10.0.1.10/24 gateway=10.0.1.1
/interface/veth/add name=veth-app address=10.0.2.10/24 gateway=10.0.2.1
/interface/veth/add name=veth-db address=10.0.3.10/24 gateway=10.0.3.1
/interface/bridge/add name=bridge-web
/interface/bridge/add name=bridge-app
/interface/bridge/add name=bridge-db
/ip/address/add address=10.0.1.1/24 interface=bridge-web
/ip/address/add address=10.0.2.1/24 interface=bridge-app
/ip/address/add address=10.0.3.1/24 interface=bridge-db
/interface/bridge/port/add bridge=bridge-web interface=veth-web
/interface/bridge/port/add bridge=bridge-app interface=veth-app
/interface/bridge/port/add bridge=bridge-db interface=veth-db
/ip/firewall/nat/add chain=srcnat action=masquerade src-address=10.0.0.0/16
/container/add remote-image=nginx interface=veth-web root-dir=disk1/images/web name=web
/container/add remote-image=myapp interface=veth-app root-dir=disk1/images/app name=app
/container/add remote-image=postgres interface=veth-db root-dir=disk1/images/db name=db
/container/start web,app,db

This topology isolates each tier on a separate Layer 2 segment while allowing NAT outbound connectivity. Because all three subnets are directly connected to the router, traffic between tiers is routed by default; firewall filter rules are required to permit web-to-app and app-to-database traffic while blocking direct web-to-database access.

A container using IPv6 exclusively:

/interface/veth/add name=veth1 address=fd8d:5ad2:24:2::10/64 gateway6=fd8d:5ad2:24:2::1
/interface/bridge/add name=containers-ipv6
/ipv6/address/add address=fd8d:5ad2:24:2::1/64 interface=containers-ipv6
/interface/bridge/port/add bridge=containers-ipv6 interface=veth1
/ipv6/firewall/nat/add action=masquerade chain=srcnat src-address=fd8d:5ad2:24:2::/64

The container receives an IPv6 address and routes IPv6 traffic through the router. NAT is typically unnecessary for IPv6 when globally routable addresses are used. The unique local addresses (ULA) in this example, however, are not routable on the public internet, so internet access from this prefix would require NAT66, NPTv6, or a switch to global addressing.
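SLAAC-based addressing on the container segment depends on router advertisements being sent on the bridge. A hedged sketch enabling neighbor discovery advertisements on the bridge from this topology (parameter choices illustrative):

/ipv6/nd/add interface=containers-ipv6 advertise-dns=yes

The IPv6 address assigned to the bridge must also be flagged for advertisement for its prefix to appear in router advertisements.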

VETH interfaces extend the router’s network into containers, creating additional attack surface that requires protection through standard network security practices.

Container network traffic flows through RouterOS firewalls and can be filtered using the same rules applied to physical interfaces. Create explicit firewall rules limiting container traffic to required destinations and ports. Consider using separate firewall chains for container traffic to maintain clear policy boundaries.
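A dedicated chain keeps container policy separate from the main forward chain. A minimal sketch, with the chain name, subnet, and allowed ports as illustrative assumptions:

/ip/firewall/filter/add chain=forward src-address=172.17.0.0/24 action=jump jump-target=containers
/ip/firewall/filter/add chain=containers protocol=tcp dst-port=80,443 action=accept
/ip/firewall/filter/add chain=containers action=drop

All traffic sourced from the container subnet is diverted into the containers chain, permitted only to the listed ports, and dropped otherwise.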

MAC address randomization is not available for VETH interfaces, making container traffic potentially correlatable across network observation points. For environments requiring traffic anonymity, consider additional encryption at the application layer or network-level tunneling.

Containers with Layer 2 bridge mode access can potentially perform MAC address spoofing, ARP poisoning, or other Layer 2 attacks against network infrastructure. Only enable Layer 2 bridge mode when explicitly required, and implement additional network monitoring or segmentation to mitigate risks.
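One possible mitigation is pinning the container-side MAC address with a bridge filter rule that drops frames from the container's bridge port carrying any other source address. A hedged sketch, reusing the explicit MAC from the earlier example:

/interface/bridge/filter/add chain=forward in-interface=veth1 src-mac-address=!AA:BB:CC:DD:EE:02/FF:FF:FF:FF:FF:FF action=drop

This does not prevent ARP poisoning using the legitimate MAC, so it complements rather than replaces network monitoring.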

  • Container - Container feature documentation
  • Bridge - Bridge interface configuration
  • NAT - Network address translation
  • VRF - Virtual Routing and Forwarding