
MPLS L3VPN

MPLS Layer 3 VPN (L3VPN) provides isolated routed connectivity between customer sites over a shared MPLS infrastructure. Each customer receives a dedicated Virtual Routing and Forwarding (VRF) instance on every Provider Edge (PE) router, keeping customer routes completely separate from the provider backbone and from other customers. Customer prefixes are distributed between PE routers using MP-BGP with the VPNv4 address family, carried across the MPLS core using a two-label stack.

RouterOS v7 implements RFC 4364 MPLS L3VPN. The solution requires LDP (or another label distribution mechanism) running in the core alongside MP-BGP between PE routers. Provider (P) routers in the core require no VPN awareness — they only perform label switching on the outer transport label.

Prerequisites:

  • RouterOS v7 on all PE and P routers
  • IGP (OSPF or IS-IS) providing reachability between PE loopback addresses
  • MPLS enabled and LDP running on all core-facing interfaces
  • PE loopback addresses advertised into the IGP
  • BGP AS number assigned to the provider network
| Role | Function |
| --- | --- |
| CE (Customer Edge) | Customer router. Connects to the PE via a customer-facing interface. Has no MPLS awareness. |
| PE (Provider Edge) | Provider router at the edge. Maintains per-customer VRFs, runs MP-BGP VPNv4 with other PEs. |
| P (Provider) | Core provider router. Performs label switching only. No VRF state. |

A VRF is an isolated routing table on a PE router. Each customer site connected to a PE is associated with a VRF. Interfaces assigned to a VRF participate only in that VRF’s routing table. Multiple customers can use overlapping address space because their VRFs are completely independent.

VRFs in RouterOS are created under /ip vrf, while the VPN-specific attributes are configured separately:

  • A route distinguisher (RD) is configured under /routing bgp vpn and makes VPN routes globally unique when advertised in MP-BGP
  • Import and export route targets (RT) are attached and matched with BGP route filters using bgp-ext-communities rt:...

The RD is prepended to a customer prefix to produce a unique VPNv4 prefix (RD:IP/prefix). This prevents address conflicts when multiple customers use overlapping addresses. The RD is locally significant to the originating PE and does not control route distribution.

Format: AS:nn (e.g. 65000:1) or IP:nn (e.g. 10.0.0.1:1). Each PE should use a unique RD per VRF.
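
For example, two customers can both number their sites from 192.168.1.0/24; prepending each VRF's RD (here 65000:1 and 65000:3, as used in the configuration later in this guide) keeps the VPNv4 prefixes distinct:

```
VRF CUST-A (RD 65000:1):  192.168.1.0/24  →  65000:1:192.168.1.0/24
VRF CUST-B (RD 65000:3):  192.168.1.0/24  →  65000:3:192.168.1.0/24
```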

Route targets are BGP extended communities that control which VPN routes are imported into which VRFs:

  • Export RT: attached to routes when they are advertised into MP-BGP
  • Import RT: matched in VPN import policy so the correct routes are installed into a VRF

A simple full-mesh topology uses the same RT for both import and export across all PEs. More complex topologies (hub-and-spoke, extranet) use different RT values.

PE routers exchange customer routes using BGP with the VPNv4 address family (AFI 1, SAFI 128). Each VPNv4 route carries the full VPNv4 prefix (RD:IP/prefix), the VPN label (assigned by the egress PE), and the export route target communities. Route reflectors are used in larger deployments to avoid full-mesh BGP peering.

An L3VPN packet carries two MPLS labels:

| Label | Assigned by | Purpose |
| --- | --- | --- |
| Outer (transport) | LDP/RSVP-TE | Routes the packet across the MPLS core to the egress PE |
| Inner (VPN) | Egress PE via MP-BGP | Identifies the destination VRF on the egress PE |

The ingress PE imposes both labels. P routers swap only the outer label. The penultimate P router pops the outer label (PHP — Penultimate Hop Popping), so the egress PE receives the packet with only the VPN label and forwards it into the correct VRF.

This guide uses the following topology throughout all examples:

CE-A ─── PE1 ─── P ─── PE2 ─── CE-B
      (10.0.0.1)    (10.0.0.2)
         └── RR (10.0.0.10) ──┘
           (iBGP route reflector)

| Device | Loopback | AS | Role |
| --- | --- | --- | --- |
| PE1 | 10.0.0.1/32 | 65000 | Provider Edge |
| PE2 | 10.0.0.2/32 | 65000 | Provider Edge |
| RR | 10.0.0.10/32 | 65000 | BGP Route Reflector |
| P | 10.0.0.254/32 | (none) | Core label switching |
| CE-A | 192.168.1.1 | 65001 | Customer A site 1 |
| CE-B | 192.168.2.1 | 65001 | Customer A site 2 |

Customer A uses VRF CUST-A with RD unique per PE and route target 65000:100.

Step 1 — Enable MPLS and LDP on core interfaces

Configure MPLS forwarding and LDP on all PE and P routers. The LDP transport address should be set to the loopback to keep sessions stable if a core link fails.

PE1:

# Enable MPLS on core-facing interfaces
/mpls interface
add interface=ether1
add interface=ether2
# Create and enable the LDP instance with loopback as transport address
/mpls ldp
add afi=ip lsr-id=10.0.0.1 transport-addresses=10.0.0.1
set 0 disabled=no
# Enable LDP on core-facing interfaces
/mpls ldp interface
add interface=ether1
add interface=ether2

PE2 (same pattern, loopback 10.0.0.2):

/mpls interface
add interface=ether1
add interface=ether2
/mpls ldp
add afi=ip lsr-id=10.0.0.2 transport-addresses=10.0.0.2
set 0 disabled=no
/mpls ldp interface
add interface=ether1
add interface=ether2

Verify LDP sessions form between PE1, P, and PE2 before proceeding:

/mpls ldp neighbor print

Step 2 — Create customer VRFs

Create a VRF on each PE for each customer, then bind VPN settings to that VRF. The RD must be unique per PE per VRF. Route targets are applied later with BGP route filters.

PE1 — create VRF for Customer A:

/ip vrf
add name=CUST-A \
interfaces=ether10
/routing bgp vpn
add vrf=CUST-A \
route-distinguisher=65000:1 \
label-allocation-policy=per-vrf

PE2 — create VRF for Customer A (different RD, same RT):

/ip vrf
add name=CUST-A \
interfaces=ether10
/routing bgp vpn
add vrf=CUST-A \
route-distinguisher=65000:2 \
label-allocation-policy=per-vrf

Verify the VRF is active:

/ip vrf print detail
/routing bgp vpn print detail

Apply route targets with BGP route filters. For a simple full-mesh customer VPN, each PE exports RT 65000:100 on that customer’s routes and accepts routes carrying RT 65000:100 into the matching VRF.
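
A minimal sketch of such filter rules on PE1, assuming the RouterOS v7 scripted rule syntax. The chain names (vpn-export-cust-a, vpn-import-cust-a) are illustrative and must be attached to the VRF's VPN export and import policy; the exact community-match operator may vary between v7 builds:

```
/routing filter rule
# Export: tag Customer A routes with the RT as they are advertised into MP-BGP
add chain=vpn-export-cust-a rule="set bgp-ext-communities=rt:65000:100; accept"
# Import: install only VPNv4 routes carrying the matching RT into the VRF
add chain=vpn-import-cust-a rule="if (bgp-ext-communities includes rt:65000:100) { accept } else { reject }"
```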

Step 3 — CE-PE route exchange

The PE needs to learn customer prefixes in the VRF. Choose the method based on your CE equipment and requirements.

For simple CE devices that cannot run BGP or OSPF, add static routes pointing into the VRF:

# On PE1 — CE-A LAN behind ether10
/ip route
add dst-address=192.168.1.0/24 gateway=ether10 routing-table=CUST-A
# On PE2 — CE-B LAN behind ether10
/ip route
add dst-address=192.168.2.0/24 gateway=ether10 routing-table=CUST-A

Verify the route appears in the VRF routing table:

/ip route print where routing-table=CUST-A

BGP is the most flexible CE-PE method. The CE runs eBGP to the PE using the customer AS, and the PE terminates the session inside the customer VRF.

On PE1 — BGP connection to CE-A, inside the VRF:

/routing bgp template
add name=ce-pe-template as=65000 address-families=ip
# Assumes the PE-CE link is 192.168.0.0/30 (PE1 = .1, CE-A = .2)
/routing bgp connection
add name=to-ce-a \
remote.address=192.168.0.2 remote.as=65001 \
local.role=ebgp \
vrf=CUST-A routing-table=CUST-A \
templates=ce-pe-template

On CE-A — eBGP to PE1:

/routing bgp connection
add name=to-pe1 \
remote.address=192.168.0.1 remote.as=65000 \
local.role=ebgp

Verify the CE-PE session and received routes:

/routing bgp session print where name=to-ce-a
/ip route print where routing-table=CUST-A

Step 4 — MP-BGP VPNv4 between PE routers

PE routers exchange VRF routes using iBGP with the vpnv4 address family. For larger deployments use a route reflector. For small labs two PEs can peer directly.

On RR (10.0.0.10):

/routing bgp template
add name=rr-client-template as=65000 router-id=10.0.0.10 \
address-families=vpnv4,ip
# Source the iBGP sessions from the loopback so they match the PEs' remote.address
/routing bgp connection
add name=to-pe1 \
remote.address=10.0.0.1 remote.as=65000 \
local.address=10.0.0.10 local.role=ibgp-rr \
templates=rr-client-template
add name=to-pe2 \
remote.address=10.0.0.2 remote.as=65000 \
local.address=10.0.0.10 local.role=ibgp-rr \
templates=rr-client-template

On PE1:

/routing bgp template
add name=ibgp-vpn as=65000 router-id=10.0.0.1 \
address-families=vpnv4,ip
/routing bgp connection
add name=to-rr \
remote.address=10.0.0.10 remote.as=65000 \
local.address=10.0.0.1 local.role=ibgp-rr-client \
templates=ibgp-vpn

On PE2:

/routing bgp template
add name=ibgp-vpn as=65000 router-id=10.0.0.2 \
address-families=vpnv4,ip
/routing bgp connection
add name=to-rr \
remote.address=10.0.0.10 remote.as=65000 \
local.address=10.0.0.2 local.role=ibgp-rr-client \
templates=ibgp-vpn

Alternatively, in a small lab the two PEs can peer directly instead of using a route reflector:

# On PE1
/routing bgp connection
add name=to-pe2 \
remote.address=10.0.0.2 remote.as=65000 \
local.address=10.0.0.1 local.role=ibgp \
templates=ibgp-vpn
# On PE2
/routing bgp connection
add name=to-pe1 \
remote.address=10.0.0.1 remote.as=65000 \
local.address=10.0.0.2 local.role=ibgp \
templates=ibgp-vpn

When CE-A (192.168.1.0/24) sends a packet to CE-B (192.168.2.0/24):

  1. CE-A sends the packet to PE1 as the next hop.
  2. PE1 (ingress PE) looks up 192.168.2.0/24 in VRF CUST-A. Finds a VPNv4 route learned from PE2 with VPN label L_vpn and next-hop 10.0.0.2. LDP provides transport label L_ldp to reach 10.0.0.2. PE1 imposes both labels: outer L_ldp, inner L_vpn.
  3. P routers swap only the outer label L_ldp as the packet traverses the core.
  4. Penultimate P router pops the outer label (PHP — Penultimate Hop Popping), forwarding the packet to PE2 with only the VPN label L_vpn.
  5. PE2 (egress PE) receives the packet, looks up VPN label L_vpn, identifies VRF CUST-A, pops the VPN label, and forwards the plain IP packet out ether10 to CE-B.
  6. CE-B receives the original IP packet.

P routers have no knowledge of customer VRFs or VPN labels — they perform only label-switching operations on the transport label.
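
The walkthrough above can be summarized as the label stack seen at each hop:

```
CE-A → PE1 :                  IP   (plain packet)
PE1  → P   :  [L_ldp][L_vpn]  IP   (ingress PE pushes both labels)
P    → PE2 :          [L_vpn] IP   (PHP: outer label popped)
PE2  → CE-B:                  IP   (egress PE pops the VPN label)
```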

# Check VRF configuration and interface assignment
/ip vrf print detail
# Verify routes in customer VRF
/ip route print where routing-table=CUST-A
# Check VPN routes in the routing table
/routing bgp vpn print detail
# Check BGP sessions
/routing bgp session print
# Inspect MPLS forwarding table
/mpls forwarding-table print
# LDP-learned label bindings in the MPLS forwarding table
/mpls forwarding-table print where type=ldp
# Verify LDP neighbor sessions
/mpls ldp neighbor print

VPNv4 routes are not exchanged between PEs

Check that the BGP session is established and the VPNv4 address family is negotiated:

/routing bgp session print detail where name=to-rr

If the session is up but no VPNv4 routes are present, confirm the VRF route targets match on both PEs:

/ip vrf print detail

The export RT attached on PE1 must match an RT accepted by the import policy on PE2 for routes to flow.
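
One way to confirm which RTs are actually attached to a received route, assuming the v7 /routing route detail output (which lists BGP attributes such as bgp-ext-communities):

```
# Inspect BGP attributes of routes in the customer VRF;
# look for bgp-ext-communities=rt:65000:100 on remote prefixes
/routing route print detail where routing-table=CUST-A
```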

Traffic reaches the egress PE but is not forwarded to CE

Verify the VPN label is present in the forwarding table and the VRF routing table has the destination:

/mpls forwarding-table print
/ip route print where routing-table=CUST-A

Also confirm the CE-facing interface is assigned to the correct VRF:

/ip vrf print detail

LDP sessions do not establish

LDP sessions require IP reachability between transport addresses. Verify the loopback is reachable and the IGP is advertising it:

/ping 10.0.0.2 routing-table=main
/mpls ldp neighbor print

Confirm MPLS is enabled on the core interfaces and that a forwarding entry exists for the PE loopback:

/mpls interface print
/mpls forwarding-table print where dst-address=10.0.0.2/32

Route leaking

Route leaking lets routes from one VRF be visible in another VRF on the same or a remote PE. This is controlled entirely by route targets in BGP policy — there is no separate mechanism. A VRF imports any VPN route whose RT matches the values accepted by its import filter.

A common use case is a shared services VRF (DNS, NTP, management) that every customer VRF can reach, but customers cannot reach each other.

RT plan:

| VRF | Export RTs | Import RTs |
| --- | --- | --- |
| CUST-A | 65000:100 | 65000:100, 65000:999 |
| CUST-B | 65000:200 | 65000:200, 65000:999 |
| SHARED | 65000:999 | 65000:100, 65000:200 (needed so replies can be routed back to customer sites) |

PE1 — create the VRFs and assign unique RDs:

/ip vrf
add name=CUST-A \
interfaces=ether10
add name=CUST-B \
interfaces=ether11
add name=SHARED \
interfaces=ether12
/routing bgp vpn
add vrf=CUST-A route-distinguisher=65000:1 label-allocation-policy=per-vrf
add vrf=CUST-B route-distinguisher=65000:3 label-allocation-policy=per-vrf
add vrf=SHARED route-distinguisher=65000:5 label-allocation-policy=per-vrf

Then apply route filters so CUST-A exports 65000:100 and imports 65000:100 and 65000:999; CUST-B exports 65000:200 and imports 65000:200 and 65000:999; and SHARED exports 65000:999 and imports the customer RTs so it can route return traffic. With that policy, CUST-A and CUST-B each reach the SHARED VRF’s services, but neither imports the other customer’s routes.

In a hub-and-spoke L3VPN all inter-spoke traffic is forced through a hub site. Spokes do not import each other’s routes directly.

RT plan:

| VRF | Export RTs | Import RTs |
| --- | --- | --- |
| HUB | 65000:100 | 65000:101, 65000:102 |
| SPOKE1 | 65000:101 | 65000:100 |
| SPOKE2 | 65000:102 | 65000:100 |

/ip vrf
add name=HUB \
interfaces=ether10
add name=SPOKE1 \
interfaces=ether11
add name=SPOKE2 \
interfaces=ether12
/routing bgp vpn
add vrf=HUB route-distinguisher=65000:10 label-allocation-policy=per-vrf
add vrf=SPOKE1 route-distinguisher=65000:11 label-allocation-policy=per-vrf
add vrf=SPOKE2 route-distinguisher=65000:12 label-allocation-policy=per-vrf

Spoke1 and Spoke2 each import only the Hub’s routes (RT 65000:100). The Hub imports both spoke RTs. Apply that policy with route filters, then traffic between Spoke1 and Spoke2 is routed through the Hub site.

After configuring asymmetric RTs, confirm the expected routes appear in each VRF’s routing table:

# Routes visible in CUST-A (should include SHARED prefixes)
/ip route print where routing-table=CUST-A
# Routes visible in SHARED (should not include customer prefixes)
/ip route print where routing-table=SHARED
# VPN definitions and labels
/routing bgp vpn print detail