L2VPN on Junos using CCC/Martini/Kompella

There are three main ways to provide a point-to-point L2 link over MPLS on Junos. Below I’ll give a brief description of each and show how to configure all three.

For the below descriptions I’ll be using this simple topology. All these devices are running as logical-systems on a single MX5.

CE Configs

My two CE devices will be configured the same for all three types below:
CE T1:

root@MX5:T1> show configuration interfaces ge-1/1/0
unit 0 {
    family inet {
        address 2.2.2.1/24;
    }
}

CE BB:

root@MX5:BB> show configuration interfaces ge-1/1/3
unit 0 {
    family inet {
        address 2.2.2.2/24;
    }
}

CCC

Circuit Cross-Connect is a legacy type of L2 point-to-point link found in Junos. CCC over MPLS requires a unique RSVP LSP per circuit, per direction. It needs a dedicated LSP because there is no VC (inner) label: every frame arriving over a given LSP belongs to one specific circuit.

Customer frames in either direction are encapsulated into another L2 frame with an RSVP label. On the far side, this label determines which circuit the frame belongs to, based on the configuration.
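
On the wire that means just a single label in front of the customer’s frame, roughly:

    [ RSVP transport label ][ customer L2 frame ]

With nothing else to demultiplex on, the receiving PE maps the entire LSP to one circuit.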

The core network has already been set up with OSPF/RSVP/MPLS, so I won’t go over that. I’ll concentrate on the PE boxes themselves. I need to create a regular RSVP LSP in each direction. The CCC config sits under the protocols connections stanza.
R3

interfaces {
    ge-1/1/1 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    rsvp {
        interface all;
    }
    mpls {
        label-switched-path TO-R6 {
            to 6.6.6.6;
            no-cspf;
        }
        interface all;
    }
    ospf {
        area 0.0.0.0 {
            interface all;
        }
    }
    connections {
        remote-interface-switch R6 {
            interface ge-1/1/1.0;
            transmit-lsp TO-R6;
            receive-lsp TO-R3;
        }
    }
}
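
For reference, R6’s side is the mirror image: a CCC-encapsulated interface towards CE BB, an LSP back to R3, and a remote-interface-switch referencing both LSPs. A rough sketch (the interface name and LSP names here are my assumptions, based on the naming used elsewhere in this post; the RSVP/OSPF config matches R3’s and is omitted):

interfaces {
    ge-1/1/2 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    mpls {
        label-switched-path TO-R3 {
            to 3.3.3.3;
            no-cspf;
        }
        interface all;
    }
    connections {
        remote-interface-switch R3 {
            interface ge-1/1/2.0;
            transmit-lsp TO-R3;
            receive-lsp TO-R6;
        }
    }
}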

With the mirrored config in place on R6, we can confirm on the PE via show connections:

root@MX5:R3> show connections | find R6
R6                                rmt-if      Up      Aug 26 16:41:00           2
  ge-1/1/1.0                        intf  Up
  TO-R6                             tlsp  Up
  TO-R3                             rlsp  Up

Finally we can test from the CE device:

root@MX5:T1> ping 2.2.2.2 rapid count 10
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!!!!!!
--- 2.2.2.2 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.464/0.490/1.041/0.081 ms

Martini

Martini allows you to use the same LSP for multiple circuits. The circuits can be multiplexed because there is a second MPLS label in the stack. The bottom label is the VC label, and it tells the PE which L2VPN the frame belongs to. Martini uses LDP to signal the VC label. Note that the transport label can come from either LDP or RSVP; if using RSVP, you still need LDP enabled on the loopback interfaces of both PE routers so the targeted LDP session that signals the VC label can come up. Of course you still need at least two LSPs, as LSPs are unidirectional.

The frame header will now contain an RSVP/LDP transport label as well as an LDP-signalled VC label. This allows the PEs to multiplex multiple circuits over the same LSP.
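
The label stack on the wire therefore looks roughly like this (a simplified view, ignoring the optional control word):

    [ transport label (RSVP or LDP) ][ VC label (LDP-signalled) ][ customer L2 frame ]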

Martini is also configured under the protocols stanza.
This is R6’s config:

interfaces {
    ge-1/1/2 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    rsvp {
        interface all;
    }
    mpls {
        label-switched-path TO-R3 {
            to 3.3.3.3;
            no-cspf;
        }
        interface all;
    }
    ospf {
        area 0.0.0.0 {
            interface all;
        }
    }
    ldp {
        interface lo0.6;
    }
    l2circuit {
        neighbor 3.3.3.3 {
            interface ge-1/1/2.0 {
                virtual-circuit-id 1;
            }
        }
    }
}
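
R3’s side is the mirror image: the same virtual-circuit-id, but with the l2circuit neighbor pointing at R6’s loopback instead. A sketch of just the differing stanzas (the CE-facing interface ge-1/1/1 with ccc encapsulation is assumed to match the CCC example, and the lo0 unit number is an assumption by analogy with R6’s lo0.6):

protocols {
    ldp {
        interface lo0.3;
    }
    l2circuit {
        neighbor 6.6.6.6 {
            interface ge-1/1/1.0 {
                virtual-circuit-id 1;
            }
        }
    }
}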

R3 and R6 should have a targeted LDP session and the circuit should be up:

root@MX5:R6> show ldp neighbor
Address            Interface          Label space ID         Hold time
3.3.3.3            lo0.6              3.3.3.3:0                39

root@MX5:R6> show l2circuit connections | find Neigh
Neighbor: 3.3.3.3
    Interface                 Type  St     Time last up          # Up trans
    ge-1/1/2.0(vc 1)          rmt   Up     Aug 26 17:28:31 2013           1
      Remote PE: 3.3.3.3, Negotiated control-word: Yes (Null)
      Incoming label: 299840, Outgoing label: 299904
      Negotiated PW status TLV: No
      Local interface: ge-1/1/2.0, Status: Up, Encapsulation: ETHERNET

All looks good. Check from CE to CE again:

root@MX5:T1> ping 2.2.2.2 rapid count 10
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!!!!!!
--- 2.2.2.2 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.468/0.498/0.720/0.074 ms

Kompella

Kompella also uses a two-label stack, like Martini, but the VC label is signalled via BGP. Once again the transport underneath can be either RSVP or LDP. Kompella needs a lot more configuration than the other two; however, because it uses BGP, you can simply add the l2vpn address family to your existing BGP deployment.

I’ll show R3’s config here:

interfaces {
    ge-1/1/1 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    rsvp {
        interface all;
    }
    mpls {
        label-switched-path TO-R6 {
            to 6.6.6.6;
            no-cspf;
        }
        interface all;
    }
    bgp {
        group iBGP {
            local-address 3.3.3.3;
            family l2vpn {
                signaling;
            }
            peer-as 100;
            neighbor 6.6.6.6;
        }
    }
    ospf {
        area 0.0.0.0 {
            interface all;
        }
    }
}
routing-instances {
    CUS1 {
        instance-type l2vpn;
        interface ge-1/1/1.0;
        route-distinguisher 100:100;
        vrf-target target:100:1;
        protocols {
            l2vpn {
                encapsulation-type ethernet;
                interface ge-1/1/1.0;
                site T1 {
                    site-identifier 1;
                    interface ge-1/1/1.0;
                }
            }
        }
    }
}
routing-options {
    autonomous-system 100;
}
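
R6’s side is nearly identical; the key points are that the vrf-target matches and that each site gets a unique site-identifier. A sketch of R6’s routing-instance (the interface, site name and site-identifier here are my assumptions):

routing-instances {
    CUS1 {
        instance-type l2vpn;
        interface ge-1/1/2.0;
        route-distinguisher 100:100;
        vrf-target target:100:1;
        protocols {
            l2vpn {
                encapsulation-type ethernet;
                interface ge-1/1/2.0;
                site BB {
                    site-identifier 2;
                    interface ge-1/1/2.0;
                }
            }
        }
    }
}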

Kompella uses a routing-instance exactly like a regular L3VPN in Junos. This includes the RD and RT. We can check the BGP session:

root@MX5:R3> show bgp summary
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l2vpn.0
                       1          1          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
6.6.6.6                 100         35         35       0       0       13:52 Establ
  bgp.l2vpn.0: 1/1/1/0
  CUS1.l2vpn.0: 1/1/1/0
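
Beyond the BGP session itself, the circuit can also be checked with show l2vpn connections, which reports per-site status in much the same way show l2circuit connections did for Martini (output not shown here).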

Final confirmation is to ping from CE to CE:

root@MX5:T1> ping 2.2.2.2 rapid count 10
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!!!!!!
--- 2.2.2.2 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.469/0.497/0.714/0.072 ms

Conclusion

  • CCC cannot multiplex multiple circuits over the same LSP, so once you start ramping up the number of circuits it doesn’t scale at all.
  • CCC is, however, supported on Juniper’s EX3200 range, which gives you a cheap option for offering L2VPN over MPLS.
  • Martini and Kompella both multiplex over the same LSP by signalling VC labels
  • Martini is simple to configure and makes sense if you’re a small ISP offering a couple of links here and there, especially if you’re already running LDP
  • Kompella is a lot more complicated to configure, however it runs off the back of BGP. This means you can use your existing BGP topology to run L3VPN/L2VPN/MVPN/etc with a single signalling protocol.
  • BGP also scales incredibly well, so it’s the best option for larger deployments.

Junos and IOS QoS – Part 4 of 4 – Hierarchical QoS

Brad Fleming from Kanren gave me remote access to a lab MX5 router in order to do the Junos section of this post, for which I am very grateful!

There are many different needs for H-QoS and many different ways to configure it. I’m going to go over one particular use case for H-QoS, one which I use on a daily basis. More so than any other type of QoS, H-QoS is very hardware specific, even line-card specific. In this post I’ll be using a Juniper MX5 and a Cisco ME3600X, both of which allow me to do H-QoS on their gig ports.

My use case is as follows. Core gig ports are not cheap. ‘Revenue ports’ as ISPs like to call them. Most core kit has a load of gig ports, some 10Gb ports and maybe 40Gb/100Gb ports.

Not all customers want a full 1Gb link. Some want 10Mb, others 50Mb, some 300Mb. Heck, some only want 4Mb. In order not to waste precious revenue ports, these circuits are aggregated onto a single physical gig port, i.e. we can put 10 x 100Mb circuits onto a single gig link.

The biggest problem with doing this is that it gets difficult to give QoS outbound towards the customer unless your hardware can do H-QoS. Let’s take the following port diagram as an example:

The physical port is 1Gb. Here I have two customer circuits attached. Customer A is paying for 20Mb while Customer B is paying for 70Mb. Not only do I want to shape their respective queues, I also want to give 30% priority bandwidth to each customer, inside each queue. So I need to shape vlan 2000 to 20Mb, and inside that 20Mb ensure 30% is given to EF packets.
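
Put as plain arithmetic: 30% of Customer A’s 20Mb shape is 6Mb of priority bandwidth, and 30% of Customer B’s 70Mb shape is 21Mb. Those are exactly the policer rates (6000000 bps and 21000000 bps) that show up in the IOS verification output further down.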

IOS

In IOS I create the child and parent policies.

policy-map 30_70
 class EF
  priority
  police cir percent 30 conform-action transmit  exceed-action drop
 class class-default
  queue-limit percent 100
!
policy-map 20Mb
 class class-default
  shape average 20000000
   service-policy 30_70
!
policy-map 70Mb
 class class-default
  shape average 70000000
   service-policy 30_70

Each policy can then attach to an EVC outbound on a physical port:

ME3600X#sh run int gi0/1
Building configuration...

Current configuration : 674 bytes
!
interface GigabitEthernet0/1
 switchport trunk allowed vlan none
 switchport mode trunk
 mtu 9800
 service instance 1 ethernet
  description CUSTOMER1
  encapsulation dot1q 2000
  rewrite ingress tag pop 1 symmetric
  service-policy output 20Mb
  bridge-domain 150
 !
 service instance 2 ethernet
  description CUSTOMER2
  encapsulation dot1q 2001
  rewrite ingress tag pop 1 symmetric
  service-policy output 70Mb
  bridge-domain 150
 !
end

Junos

H-QoS on Junos is done using a traffic-control profile. This allows you to shape to a specific rate, attach a scheduler inside that profile, and attach that profile to an interface.
First let’s create our schedulers and scheduler-map:

darreno> show configuration class-of-service schedulers
EF {
    transmit-rate {
        percent 30;
        exact;
    }
    priority high;
}
BE {
    transmit-rate {
        remainder;
    }
}

darreno> show configuration class-of-service scheduler-maps
OUTBOUND {
    forwarding-class expedited-forwarding scheduler EF;
    forwarding-class best-effort scheduler BE;
}

Now we create our traffic profiles and attach the above scheduler-map to it;

darreno> show configuration class-of-service traffic-control-profiles
20Mb {
    scheduler-map OUTBOUND;
    shaping-rate 20m;
}
70Mb {
    scheduler-map OUTBOUND;
    shaping-rate 70m;
}

Attach the profile to the interface under class-of-service:

darreno> show configuration class-of-service interfaces
ge-1/0/0 {
    unit 2000 {
        output-traffic-control-profile 20Mb;
    }
    unit 2001 {
        output-traffic-control-profile 70Mb;
    }
}

Note that you need to configure hierarchical-scheduler under the interface itself:

darreno> show configuration interfaces ge-1/0/0
hierarchical-scheduler;
vlan-tagging;

unit 2000 {
    description "Customer 1";
    vlan-id 2000;
}
unit 2001 {
    description "Customer 2";
    vlan-id 2001;
}

Verification

IOS still has much better verification than Junos. I don’t know why Junos makes it so difficult to view this kind of information. When using service instances in IOS as above, the verification command has changed a bit, somewhat annoyingly.

ME3600X#sh ethernet service instance policy-map
  GigabitEthernet0/1: EFP 1

  Service-policy output: 20Mb

    Class-map: class-default (match-any)
      578 packets, 45186 bytes
      5 minute offered rate 1000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 20000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : 30_70

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 30 % bc 250 ms
            cir 6000000 bps, bc 187500 bytes
            conform-action transmit
            exceed-action drop
          conform: 0 (packets) 0 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          578 packets, 45186 bytes
          5 minute offered rate 1000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0
  GigabitEthernet0/1: EFP 2

  Service-policy output: 70Mb

    Class-map: class-default (match-any)
      501 packets, 39092 bytes
      5 minute offered rate 2000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 70000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : 30_70

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 30 % bc 250 ms
            cir 21000000 bps, bc 656250 bytes
            conform-action transmit
            exceed-action drop
          conform: 0 (packets) 0 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          501 packets, 39092 bytes
          5 minute offered rate 2000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0
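
On the Junos side the closest equivalents I know of are show class-of-service interface ge-1/0/0, which confirms which traffic-control-profile and scheduler-map are bound to each unit, and show interfaces queue ge-1/0/0, which shows per-queue packet and drop counters. The exact output varies by platform and release, so treat these as a starting point rather than a like-for-like match for the IOS output above.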

Creating and connecting logical systems on the Juniper MX router

The old M10 I have in my lab cannot support the tunnel services PIC due to the ancient FEB it has. With an MX router and the correct line cards you can reserve some bandwidth to create a built-in tunnel PIC, used for things like GRE, multicast and logical tunnels.

This is especially handy for when you have a single box and want to create a big topology with routers connected to each other. As a quick guide I’ll show an MX5 divided and connected into two logical-systems.

There are no physical interfaces plugged into anything. The box is simply on.

First we need to configure tunnel services:

darreno> show configuration chassis
fpc 1 {
    pic 0 {
        tunnel-services {
            bandwidth 1g;
        }
    }
}

This creates the lt interface in a specific place. Check this as you’ll need to know which numbers to refer to later:

darreno> show interfaces terse | match lt
lt-1/0/10               up    up

Let’s configure two systems. I’ll attach an lt interface to each, bind those two interfaces together and give each an IP address. I’ll also create a loopback interface in each and run OSPF:

darreno> show configuration logical-systems
J1 {
    interfaces {
        lt-1/0/10 {
            unit 0 {
                encapsulation ethernet;
                peer-unit 1;
                family inet {
                    address 10.0.0.1/24;
                }
            }
        }
        lo0 {
            unit 1 {
                family inet {
                    address 1.1.1.1/32;
                }
            }
        }
    }
    protocols {
        ospf {
            area 0.0.0.0 {
                interface all;
            }
        }
    }
}
J2 {
    interfaces {
        lt-1/0/10 {
            unit 1 {
                encapsulation ethernet;
                peer-unit 0;
                family inet {
                    address 10.0.0.2/24;
                }
            }
        }
        lo0 {
            unit 2 {
                family inet {
                    address 2.2.2.2/32;
                }
            }
        }
    }
    protocols {
        ospf {
            area 0.0.0.0 {
                interface all;
            }
        }
    }
}

To confirm I can log into one of them and check connectivity:

darreno> set cli logical-system J1
Logical system: J1

darreno:J1> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
10.0.0.2         lt-1/0/10.0            Full      2.2.2.2          128    37

darreno:J1> ping 2.2.2.2 rapid
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!
--- 2.2.2.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.432/0.492/0.675/0.092 ms

There are a few things to note. Logical tunnels are point-to-point; even though the encapsulation is ethernet, you cannot connect more than two units to the same segment. You can also configure a unit in the main routing instance that connects to a logical system, as sketched below. This is not only good for certification testing, but can open up all kinds of possibilities in a real-world design.
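
As a sketch of that last point (the unit numbers and addresses here are made up purely for illustration), a unit in the main instance can peer with a unit inside a logical system in exactly the same way:

interfaces {
    lt-1/0/10 {
        unit 10 {
            encapsulation ethernet;
            peer-unit 11;
            family inet {
                address 10.0.1.1/24;
            }
        }
    }
}
logical-systems {
    J1 {
        interfaces {
            lt-1/0/10 {
                unit 11 {
                    encapsulation ethernet;
                    peer-unit 10;
                    family inet {
                        address 10.0.1.2/24;
                    }
                }
            }
        }
    }
}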

As long as you have a tunnel-services PIC on your M/T-series, or a Trio MPC/MIC on your MX router, you are good to go with the above.

Junos and IOS QoS – Part 3 of 4 – Shaping to EVC speed with priority

An Ethernet physical port can only run at certain speeds, i.e. 10/100/1Gb/etc. Often a customer will purchase a sub-rate amount of bandwidth on that bearer speed. For example, Customer A wants to buy 30Mb of bandwidth. You can’t run the physical port at 30Mb, so the ISP will run the interface at 100Mb and police inbound at 30Mb.

This makes QoS just a little more complicated. All the ratios we’ve used in the past will be worked out against the WAN port’s physical speed. The router also won’t know that if a 40Mb burst comes in from the LAN, the actual available bandwidth is only 30Mb.

In this case, you need to first shape all traffic to 30Mb, and then inside that shaped queue give priority bandwidth to voice etc.

IOS

IOS uses the concept of parent/child policy maps. The parent will shape the queue, while the child policy attached will give each queue their respective bandwidths and priority.

policy-map PARENT
 class class-default
  shape average 30000000
   service-policy CHILD
!
policy-map CHILD
 class EF
  priority percent 10
  police cir percent 10 conform-action transmit  exceed-action drop
 class class-default
  bandwidth remaining percent 100
!
interface FastEthernet0/0
 ip address 10.0.0.1 255.255.255.0
 service-policy output PARENT

Here the parent policy creates a queue with a bandwidth limit of 30Mb. Inside that sits the child policy, which gives EF packets 10 percent of that initial 30Mb queue as priority bandwidth. I’m also policing that class, as I don’t want the priority queue to starve other traffic. All other traffic gets 90-100% of the bandwidth, depending on how much priority traffic is in the queue at any one time.
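
In numbers: 10% of the 30Mb parent shape works out to 3Mb, which is why the verification output below reports the priority queue at 3000 kbps and the policer CIR at 3000000 bps.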

Junos

As with most QoS topics, the following configuration is quite hardware specific. I’ve done the following on an SRX210H. Your configuration might change when doing the same sort of thing on a M/MX/DC SRX/etc so YMMV.

Create the schedulers:

darreno@JR2> show configuration class-of-service schedulers
EF10 {
    transmit-rate {
        percent 10;
        exact;
    }
}
BE_REST {
    transmit-rate {
        remainder {
            100;
        }
    }
}

Put the above schedulers into a schedule-map:

darreno@JR2> show configuration class-of-service scheduler-maps
SCHEDULE {
    forwarding-class expedited-forwarding scheduler EF10;
    forwarding-class best-effort scheduler BE_REST;
}

Finally apply that map to the interface under class-of-service and configure the interface shape rate:

darreno@JR2> show configuration class-of-service interfaces ge-0/0/1
unit 2001 {
    scheduler-map SCHEDULE;
    shaping-rate 30m;
}

In order for the above to work I need to configure per-unit-scheduler on the physical interface:

darreno@JR2> show configuration interfaces ge-0/0/1
per-unit-scheduler;

Verification

Simple again in IOS:

R1#sh policy-map int fa0/0
 FastEthernet0/0

  Service-policy output: PARENT

    Class-map: class-default (match-any)
      106 packets, 6360 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 106/6360
      shape (average) cir 30000000, bc 120000, be 120000
      target shape rate 30000000

      Service-policy : CHILD

        queue stats for all priority classes:
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Priority: 10% (3000 kbps), burst bytes 75000, b/w exceed drops: 0

          police:
              cir 10 %
              cir 3000000 bps, bc 93750 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps

        Class-map: class-default (match-any)
          106 packets, 6360 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 106/6360
          bandwidth remaining 100%

We can see the entire queue is 30Mb. Inside that queue, EF traffic has priority bandwidth of 3000kbps (10% of 30Mb), and all other traffic can use whatever is left, up to 30Mb.

On Junos it’s a bit cryptic again:

darreno@JR2> show class-of-service interface ge-0/0/1
Physical interface: ge-0/0/1, Index: 135
Queues supported: 8, Queues in use: 4
  Scheduler map: , Index: 2
  Congestion-notification: Disabled

  Logical interface: ge-0/0/1.2001, Index: 71
    Shaping rate: 30000000
    Object                  Name                   Type                    Index
    Scheduler-map           SCHEDULE               Output                   2878

I wanted to do a more in-depth post on H-QoS but this SRX doesn’t support it. I don’t currently have an MX in the lab (only in the field) so hopefully soon…