Multi-Protocol Label Switching (MPLS)
MPLS is a high-performance packet-forwarding architecture used in ISP backbones and enterprise WANs. Instead of performing a full IP routing lookup at every hop, MPLS routers attach short fixed-length labels to packets at the network edge. Core routers forward packets by swapping labels — a faster, simpler operation than a longest-prefix-match lookup. The IP header is only examined once, at the point where the packet enters the MPLS domain.
RouterOS 7 implements a complete MPLS stack: LDP for automatic LSP creation, RSVP-TE for traffic-engineered tunnels with explicit paths and bandwidth reservations, VPLS for layer-2 VPN services, and L3VPN (VPRN) for isolated customer routing over a shared core.
Label Switching Fundamentals
Forwarding Equivalence Classes (FEC)
A Forwarding Equivalence Class (FEC) is a set of packets that receive identical forwarding treatment — same next hop, same label, same path through the network. In the simplest case, one FEC corresponds to one destination IP prefix. All packets destined for that prefix are placed into the same FEC at the network edge and follow the same label-switched path through the core.
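Conceptually, FEC assignment at the ingress LER is a longest-prefix match: the most specific matching prefix decides which label and next hop the packet gets. A minimal Python sketch (the FEC table, labels, and next hops below are hypothetical, not RouterOS output):

```python
import ipaddress

# Hypothetical ingress FEC table: prefix -> (outgoing label, next hop)
fec_table = {
    ipaddress.ip_network("10.255.0.0/24"): (100, "10.0.12.2"),
    ipaddress.ip_network("10.255.0.4/32"): (104, "10.0.12.2"),
}

def classify(dst_ip):
    """Return (label, next_hop) of the longest matching FEC, or None."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in fec_table if dst in net]
    if not matches:
        return None                      # no FEC: forward unlabeled (or drop)
    best = max(matches, key=lambda net: net.prefixlen)
    return fec_table[best]

print(classify("10.255.0.4"))   # more-specific /32 FEC wins: (104, '10.0.12.2')
print(classify("10.255.0.9"))   # falls into the /24 FEC: (100, '10.0.12.2')
```

Packets that map to the same FEC carry the same label and therefore follow the same LSP, regardless of anything else in their headers.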
Label Switched Paths (LSP)
A Label Switched Path (LSP) is the sequence of routers a labeled packet traverses from the ingress edge to the egress edge. An LSP is unidirectional: traffic from A→B and from B→A follows separate LSPs. LSPs are created either automatically by LDP (following IGP shortest paths) or explicitly by RSVP-TE (following operator-specified paths with optional bandwidth reservations).
The Label Stack
MPLS supports label stacking: a packet can carry multiple labels, each processed by a different layer of the network. Labels are inserted between the Layer 2 header and the IP header (the “shim” position). Each label entry is 4 bytes:
| Bits | Field | Description |
|---|---|---|
| 20 | Label value | The forwarding label (values 0–15 are reserved) |
| 3 | TC (Traffic Class) | Formerly “EXP” — used for QoS/DiffServ marking |
| 1 | S (Bottom of Stack) | Set to 1 on the innermost label |
| 8 | TTL | Time to live, decremented at each hop |
A packet carrying both a transport label (outer, swapped by core LSRs) and a service label (inner, carrying VPN or pseudowire context) has a two-label stack. The outer label determines the path through the core; the inner label determines the service at the egress PE.
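The 32-bit field layout above can be packed and unpacked directly. A minimal sketch of a two-label stack (transport label 100 outer, service label 2004 inner; both values are hypothetical):

```python
import struct

def encode_label(label, tc=0, s=0, ttl=64):
    """Pack one 4-byte MPLS entry: 20-bit label, 3-bit TC, 1-bit S, 8-bit TTL."""
    assert 0 <= label < 2**20
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)          # network byte order

def decode_label(data):
    """Unpack a 4-byte MPLS entry back into its fields."""
    (word,) = struct.unpack("!I", data)
    return {"label": word >> 12, "tc": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1, "ttl": word & 0xFF}

# Outer transport label (S=0) followed by inner service label (S=1).
stack = encode_label(100, s=0) + encode_label(2004, s=1)
print(len(stack))                     # 8 — two labels add 8 bytes of overhead
print(decode_label(stack[4:])["s"])   # 1 — inner label marks bottom of stack
```

This also makes the MTU arithmetic in the next sections concrete: every additional label costs exactly 4 bytes between the Ethernet header and the IP header.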
Label Operations
Every MPLS router performs one of three operations per packet:
| Operation | Where | Action |
|---|---|---|
| Push (Impose) | Ingress LER | Add one or more labels to an unlabeled IP packet |
| Swap | Transit LSR | Replace the top label with a new label and forward |
| Pop (Dispose) | Egress LER or PHP hop | Remove the top label; expose the next label or IP header |
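The three operations compose into a complete LSP. A minimal sketch of a packet crossing a hypothetical three-router path (ingress pushes, transit swaps via its label table, egress pops and routes by IP):

```python
def ingress_push(ip_packet, label):
    """LER: impose a label on an unlabeled IP packet."""
    return {"labels": [label], "ip": ip_packet}

def transit_swap(packet, lfib):
    """LSR: replace the top label using the local label table."""
    packet["labels"][0] = lfib[packet["labels"][0]]
    return packet

def egress_pop(packet):
    """LER: remove the top label, exposing the IP header."""
    packet["labels"].pop(0)
    return packet

pkt = ingress_push({"dst": "10.255.0.4"}, 100)   # router A: push 100
pkt = transit_swap(pkt, {100: 200})              # router B: swap 100 -> 200
pkt = egress_pop(pkt)                            # router C: pop, route by IP
print(pkt)  # {'labels': [], 'ip': {'dst': '10.255.0.4'}}
```

Note that labels are locally significant: B's incoming label 100 and outgoing label 200 are independent values chosen by each downstream router.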
Router Roles
| Role | Name | Function |
|---|---|---|
| LER | Label Edge Router | Network boundary; pushes labels on ingress, pops on egress |
| LSR | Label Switch Router | Core router; swaps labels only, never inspects the IP header |
| PHP | Penultimate Hop Popper | The hop before the egress LER pops the outer label, saving the egress one lookup |
Penultimate Hop Popping (PHP) is the default behavior in RouterOS. The penultimate router receives label value 3 (implicit null) as its binding for a given prefix, signaling it to pop rather than swap. The egress LER then receives a plain IP packet (or inner-labeled packet) and applies the final forwarding decision without an extra LFIB lookup.
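The implicit-null convention is easy to express in code. A minimal sketch of the penultimate hop's decision, assuming a hypothetical label table where the egress advertised the reserved value 3:

```python
IMPLICIT_NULL = 3   # reserved label value advertised by the egress LER

def penultimate_forward(in_label, lfib):
    """Pop when the binding is implicit null (PHP); otherwise swap."""
    out = lfib[in_label]
    if out == IMPLICIT_NULL:
        return ("pop", None)        # deliver unlabeled to the egress LER
    return ("swap", out)

lfib = {200: IMPLICIT_NULL,         # prefix reachable via the egress: PHP
        201: 300}                   # ordinary transit prefix: swap
print(penultimate_forward(200, lfib))  # ('pop', None)
print(penultimate_forward(201, lfib))  # ('swap', 300)
```

The implicit-null label never appears on the wire; it is purely a signal in the label binding telling the upstream router to pop.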
MTU Considerations
Each label in a stack adds 4 bytes of overhead. A single-label packet on an Ethernet link with MTU 1500 leaves 1496 bytes for the IP payload. A two-label stack (common for L3VPN and VPLS) reduces IP payload space to 1492 bytes. Configure MPLS MTU on all core interfaces to avoid silent fragmentation:
```
/mpls interface set [find interface=ether1] mpls-mtu=1508
```

Set `mpls-mtu` to match the underlying link MTU plus the label overhead your service stack requires.
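The arithmetic behind the value is simply link overhead per label. A sketch of the calculation (4 bytes per label, as defined above):

```python
LABEL_BYTES = 4

def required_mpls_mtu(ip_mtu, label_count):
    """MPLS MTU needed to carry a full IP packet plus the label stack."""
    return ip_mtu + label_count * LABEL_BYTES

print(required_mpls_mtu(1500, 1))  # 1504 — plain LDP transport
print(required_mpls_mtu(1500, 2))  # 1508 — L3VPN/VPLS two-label stack
```

This is why 1508 is the common choice for cores carrying two-label services over standard 1500-byte Ethernet.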
RouterOS MPLS Services
RouterOS supports four primary MPLS service types. Each builds on a common forwarding plane (MPLS interfaces + LFIB) but uses different control-plane protocols and serves different deployment scenarios.
| Service | Protocol | Layer | Use Case |
|---|---|---|---|
| LDP / Basic MPLS | LDP | Transport | Core LSP establishment, foundation for all other services |
| Traffic Engineering | RSVP-TE | Transport | Explicit paths, bandwidth reservation, fast reroute |
| VPLS | LDP or MP-BGP | Layer 2 | Ethernet VPN service between sites |
| L3VPN (VPRN) | MP-BGP + LDP | Layer 3 | Isolated customer routing over shared core |
Enabling MPLS
MPLS forwarding is activated per-interface. No global enable flag is required in RouterOS 7 — adding an interface to `/mpls interface` is sufficient to start label switching on that link.
```
# Enable MPLS forwarding on core-facing interfaces
/mpls interface
add interface=ether1
add interface=ether2
```

Verify the interface is up and forwarding:
```
/mpls interface print
```

ISP Backbone Design
Network Roles
An MPLS ISP backbone separates routers into three roles:
| Role | Name | Responsibilities |
|---|---|---|
| P | Provider (core) | Label switching only — swaps outer transport label, no VPN awareness |
| PE | Provider Edge | Customer-facing — pushes/pops labels, hosts VRFs (L3VPN) or pseudowires (VPLS) |
| CE | Customer Edge | Customer device — connects to PE via IP routing (static, BGP, OSPF) |
P routers require no VPN configuration. They only participate in the IGP and LDP/RSVP-TE. This keeps the core simple and scalable — adding a new VPN customer requires changes only on the PE routers.
Design Layers
A well-designed MPLS ISP backbone is built in layers:
```
┌─────────────────────────────────────────────────────┐
│ Service Layer │ L3VPN (VRF + MP-BGP)  │ VPLS        │
├─────────────────────────────────────────────────────┤
│ Label Layer   │ LDP or RSVP-TE LSPs   │ Label stack │
├─────────────────────────────────────────────────────┤
│ IGP Layer     │ OSPF or IS-IS         │ Loopbacks   │
├─────────────────────────────────────────────────────┤
│ IP Layer      │ Core links + loopbacks│ Transport   │
└─────────────────────────────────────────────────────┘
```

Build bottom-up:
- IP layer — provision core links and stable loopback `/32` addresses on every P and PE router.
- IGP layer — run OSPF or IS-IS across all core links. Advertise loopback `/32`s. Verify full reachability before proceeding.
- Label layer — enable MPLS on core interfaces, then start LDP. Verify LSPs form for all loopbacks. Add RSVP-TE for paths requiring bandwidth guarantees or protection.
- Service layer — configure VRFs + MP-BGP for L3VPN, or pseudowires + LDP/BGP signaling for VPLS.
IGP Underlay
LDP uses the IGP to determine which prefixes to label and which next hop to use. Use a single OSPF area (or IS-IS level) for the MPLS core. Advertise only loopback `/32`s and core transit subnets — do not leak customer prefixes into the core IGP.
```
# Core router — OSPF with stable loopback
/interface bridge
add name=loopback
/ip address
add address=10.255.0.1/32 interface=loopback

/routing ospf instance
add name=core router-id=10.255.0.1
/routing ospf area
add name=backbone area-id=0.0.0.0 instance=core
/routing ospf interface-template
add interfaces=ether1,ether2 area=backbone
add interfaces=loopback area=backbone passive=yes
```

LDP Transport
Run LDP on core links only — never on CE-facing interfaces. Use loopback addresses as LDP LSR-ID and transport endpoints for stability.
```
/mpls ldp
add afi=ip lsr-id=10.255.0.1 transport-addresses=10.255.0.1

/mpls ldp interface
add interface=ether1
add interface=ether2
```

Traffic Engineering Integration
Use RSVP-TE for paths requiring:
- Explicit routing — bypass congested links or steer traffic for policy reasons
- Bandwidth guarantees — reserve capacity before traffic flows
- Fast reroute — pre-computed backup paths for sub-50ms failover
LDP and RSVP-TE can coexist on the same core interfaces. LDP provides baseline LSPs; TE tunnels provide engineered paths for specific traffic flows.
Firewall Considerations
If RouterOS firewall rules are active on core routers, allow MPLS control protocols:
```
/ip firewall filter
# Allow OSPF
add chain=input protocol=89 action=accept comment="OSPF"
# Allow LDP (UDP hello + TCP session)
add chain=input protocol=udp dst-port=646 action=accept comment="LDP hello"
add chain=input protocol=tcp dst-port=646 action=accept comment="LDP session"
# Allow RSVP (if using traffic engineering)
add chain=input protocol=46 action=accept comment="RSVP"
```

Place these rules before any default-drop rule on the input chain.
Scaling Guidelines
Section titled “Scaling Guidelines”| Routers in Core | Recommendation |
|---|---|
| < 20 | Single OSPF area; LDP with default settings |
| 20–100 | Consider OSPF areas or IS-IS levels; enable LDP label binding filtering to reduce state |
| > 100 | IS-IS preferred; use BGP route reflectors for L3VPN; consider RSVP-TE only for specific TE paths |
Enable label binding filtering on PE routers to suppress unnecessary core-prefix labels — PEs only need labels for other PE loopbacks:
```
/mpls ldp accept-filter
# Only accept PE/P loopback labels
add prefix=10.255.0.0/24 action=accept
```

Troubleshooting
MPLS Interface State
```
/mpls interface print
```

Verify each core interface shows `running=yes`.
LDP Neighbors and Sessions
```
/mpls ldp neighbor print
```

Each neighbor should show `state=operational`. If sessions are missing, check IGP reachability between loopback addresses and firewall rules on port 646.
Label Forwarding Information Base (LFIB)
```
/mpls forwarding-table print
```

Each IGP-learned prefix should have an entry with an incoming label (for transit) or an outgoing label (for egress/PHP). An empty LFIB with LDP running indicates IGP reachability issues.
Label Bindings
```
/mpls local-label print
/mpls remote-bindings print
```

Compare local and remote bindings to confirm labels are being exchanged for the expected prefixes.
End-to-End LSP Verification
```
/tool traceroute 10.255.0.4 use-dns=no
```

MPLS hops appear with their label values in the traceroute output, allowing you to trace the exact LSP a packet follows.
RSVP-TE Tunnel State
```
/interface traffic-eng print
/interface traffic-eng monitor [find]
```

A tunnel in `state=established` with bandwidth shown confirms the RESV handshake completed and the LSP is forwarding.
Next Steps
| Topic | Guide |
|---|---|
| Configure LDP and basic label switching | LDP: Label Distribution and LSPs |
| Set up traffic-engineered tunnels with explicit paths | MPLS Traffic Engineering and RSVP-TE |
| Provide layer-2 Ethernet VPN services | VPLS: Virtual Private LAN Service |
| Build isolated customer routing over shared core | MPLS L3VPN |