Always check the forwarding table – IOS, Junos, Netiron

Most bigger routers these days use a distributed architecture, and one of the bigger differences is the separation of the control and forwarding planes. When troubleshooting or verifying, it’s essential to view both. Too many engineers only check the control plane output. While the two should match, they don’t always. Note that the forwarding table doesn’t have to be distributed to separate hardware.
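The comparison itself is mechanical: parse the next-hop set out of each table and diff them. A minimal sketch of the idea (the addresses here are illustrative, not tied to any vendor’s output format):

```python
def planes_agree(rib_next_hops, fib_next_hops):
    """Compare next-hops from the routing table (control plane) against
    the forwarding table (data plane). Empty sets mean the planes agree."""
    rib, fib = set(rib_next_hops), set(fib_next_hops)
    return {"missing_from_fib": rib - fib, "extra_in_fib": fib - rib}

# Example: RIB has two ECMP next-hops but only one made it into the FIB
diff = planes_agree({"10.0.12.2", "10.0.13.3"}, {"10.0.12.2"})
```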

For the examples below I’ll simply be viewing a default route learned through OSPF. The router in question always has two equal-cost paths out of the network, so you would expect to see two routes.

IOS

First we check the routing table:

R1#sh ip route 0.0.0.0
Routing entry for 0.0.0.0/0, supernet
  Known via "ospf 1", distance 110, metric 1, candidate default path
  Tag 1, type extern 2, forward metric 2
  Last update from 10.0.12.2 on GigabitEthernet2/0, 00:00:33 ago
  Routing Descriptor Blocks:
  * 10.0.13.3, from 10.0.24.4, 00:00:33 ago, via GigabitEthernet1/0
      Route metric is 1, traffic share count is 1
      Route tag 1
    10.0.12.2, from 10.0.24.4, 00:00:33 ago, via GigabitEthernet2/0
      Route metric is 1, traffic share count is 1
      Route tag 1

Two ways to get to 0.0.0.0 – What does the forwarding table show? For this I’ll choose an IP that would follow the default route:

R1#sh ip cef 4.2.2.1
0.0.0.0/0
  nexthop 10.0.12.2 GigabitEthernet2/0
  nexthop 10.0.13.3 GigabitEthernet1/0

Both control plane and data plane agree.
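With two next-hops installed, the forwarding hardware typically picks one per flow by hashing the packet header, so a given flow stays on one path and packets arrive in order. A rough sketch of the idea (the actual hash and field selection vary by platform):

```python
import hashlib

# The two equal-cost next-hops from the CEF output above
NEXT_HOPS = ["10.0.12.2", "10.0.13.3"]

def pick_next_hop(src, dst, proto=6, sport=0, dport=0):
    """Hash the flow key and index into the ECMP next-hop list.
    Same flow -> same hash -> same path."""
    key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return NEXT_HOPS[digest % len(NEXT_HOPS)]
```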

Netiron

Routing table:

[email protected]#sh ip route 0.0.0.0
Type Codes - B:BGP D:Connected I:ISIS O:OSPF R:RIP S:Static; Cost - Dist/Metric
BGP  Codes - i:iBGP e:eBGP
ISIS Codes - L1:Level-1 L2:Level-2
OSPF Codes - i:Inter Area 1:External Type 1 2:External Type 2 s:Sham Link
STATIC Codes - d:DHCPv6
        Destination        Gateway         Port          Cost          Type Uptime src-vrf
1       0.0.0.0/0          10.0.0.1        eth 15/1      110/110       O1   1h22m  -
        0.0.0.0/0          10.0.0.2        eth 16/1      110/110       O1   1h22m  -

To show the forwarding table you use show route x.x.x.x detail. Note that I’m executing this command on an XMR16, so I get the forwarding entry for every single module. I’m only going to show the output for the first module:

[email protected]#sh ip route 4.2.2.1 detail
Type Codes - B:BGP D:Connected I:ISIS O:OSPF R:RIP S:Static; Cost - Dist/Metric
BGP  Codes - i:iBGP e:eBGP
ISIS Codes - L1:Level-1 L2:Level-2
OSPF Codes - i:Inter Area 1:External Type 1 2:External Type 2 s:Sham Link
STATIC Codes - d:DHCPv6
        Destination        Gateway         Port          Cost          Type Uptime src-vrf
1       0.0.0.0/0          10.0.0.1        eth 15/1      110/110       O1   1h24m  -
        0.0.0.0/0          10.0.0.1        eth 16/1      110/110       O1   1h24m  -
        Nexthop Entry ID:65540, Paths: 2, Ref_Count:707/712

D:Dynamic  P:Permanent  F:Forward  U:Us  C:Connected Network E: ESI VLAN
W:Wait ARP  I:ICMP Deny  K:Drop  R:Fragment  S:Snap Encap N:CamInvalid

Module S1:
      IP Address         Next Hop        MAC              Type  Port  Vlan  Pri
      0.0.0.0/0          10.0.0.1       0012.f293.a802   PF    16/1   1     0

      OutgoingIf  ArpIndex PPCR_ID   CamLevel   Parent  DontAge Index Is_trunk
      eth 16/1    5        1:1       31              0               0 0

      U_flags   Entry_flags  Age   Cam:Index               HW_Path_count
      0000e000               0     0x0005ffff (L3, right)  2

        CAM Entry Flag: 00000001H
        PPCR : 1:1 CIDX: 0x0005ffff (L3, right) (IP_NETWORK: 0x56000)

        pram_index_programmed: ppcr[0] 0x0000014c

The output is a little cryptic so I’ll highlight the important bits. First the paths show as two:

Nexthop Entry ID:65540, Paths: 2, Ref_Count:707/712

But only a single next-hop is actually shown:

     0.0.0.0/0          10.0.0.1       0012.f293.a802   PF    16/1   1     0

This is a cosmetic error. The most important bit is here:

      U_flags   Entry_flags  Age   Cam:Index               HW_Path_count
      0000e000               0     0x0005ffff (L3, right)  2

The hardware path count is two, which is what we expect.

Junos

Finally Junos. First up we look at the route table:

[email protected]_SRX6> show route 0.0.0.0

inet.0: 32 destinations, 32 routes (29 active, 3 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0          *[OSPF/150] 00:00:12, metric 0, tag 0
                      to 172.30.0.17 via ge-0/0/4.126
                    > to 172.30.0.89 via ge-0/0/4.146

Two routes, so our forwarding table should match, right?

[email protected]_SRX6> show route forwarding-table destination 4.2.2.1
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
default            user     1 0:c:29:86:21:55    ucst   584    13 ge-0/0/4.146
default            perm     0                    rjct    36     5

Routing table: __master.anon__.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
default            perm     0                    rjct   534     1

Well no, it doesn’t. While the route table shows two routes, only one is being used by the forwarding table. Junos will not install multiple next-hops into the forwarding table unless you tell it to:

[email protected]_SRX6> show configuration policy-options policy-statement BALANCE
then {
    load-balance per-packet;
}
[email protected]_SRX6> show configuration routing-options forwarding-table
export BALANCE;

Let’s check again:

[email protected]_SRX6> show route forwarding-table destination 4.2.2.1
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
default            user     1                    ulst 262142     7
                              0:c:29:25:21:57    ucst   612    11 ge-0/0/4.126
                              0:c:29:86:21:55    ucst   584     9 ge-0/0/4.146
default            perm     0                    rjct    36     5

Routing table: __master.anon__.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
default            perm     0                    rjct   534     1

This time we have both in the forwarding table. Note that while the policy states load-balance per-packet, it’s actually doing per-flow load-sharing.

Conclusion

I have seen routers disagree about what they think they are doing versus what they are actually doing. You need to check both tables to see what each plane is doing. This can help immensely when a router is dropping packets it’s supposed to be forwarding because the FIB has no entry. I might write a bit more on this as I’ve seen it happen more than once.

EDIT – 04/11/13

I’ve since found another way to verify this on the Brocades. If you rconsole onto the line card itself you can see a bit more:

[email protected]#rconsole 1
Remote connection to LP slot 1 established
Press CTRL-X or type 'exit' to disconnect it
LP-1>en
LP-1#sh ip network 0.0.0.0
D:Dynamic  P:Permanent  F:Forward  U:Us  C:Connected Network
W:Wait ARP  I:ICMP Deny  K:Drop  R:Fragment  S:Snap Encap N:CamInvalid
      IP Address         Next Hop        MAC              Type  Port  Vlan  Pri
      0.0.0.0/0          10.0.0.1*    0012.f293.ad02   PF    15/1*  1     0

      OutgoingIf  ArpIndex PPCR_ID   CamLevel   Parent  DontAge Index Is_trunk
      eth 15/1    4        1:1       31              0               0 0

      U_flags   Entry_flags  Age   Cam:Index               HW_Path_count
      0000e000  0x00000001   0     0x0005ffff (L3, right)  2

        CAM Entry Flag: 00000001H
        PPCR : 1:1 CIDX: 0x0005ffff (L3, right) (IP_NETWORK: 0x56000)

        pram_index_programmed: ppcr[0] 0x0000014c
use_index: 0
IP-nh-Pram 0: 0x2ebeec10, ref_count 1
n_paths = 2, type = ECMP_PHY_VE, is_default  = 1, vrf_index = 0
  path[0]: FORWARD, out_intf eth 15/1, nh 10.0.0.1, out_port 15/1, is_trunk 0
  path[1]: FORWARD, out_intf eth 16/1, nh 10.0.0.5, out_port 16/1, is_trunk 0
Pram info: alloc_count 2 use_count 2
  pram[0]: idx 0, pram_idx[0] 0x0000014c
  pram[1]: idx 1, pram_idx[0] 0x0000014d

The top half still shows a single port, but further down it shows this:

n_paths = 2, type = ECMP_PHY_VE, is_default  = 1, vrf_index = 0
  path[0]: FORWARD, out_intf eth 15/1, nh 10.0.0.1, out_port 15/1, is_trunk 0
  path[1]: FORWARD, out_intf eth 16/1, nh 10.0.0.5, out_port 16/1, is_trunk 0

n_paths is the number of paths, and the type confirms the router is doing ECMP. It then shows the outbound interface and next-hop for each path.

On a route with only a single path, the same section looks like this:

n_paths = 1, type = NON_ECMP, is_default  = 0, vrf_index = 0
  path[0]: FORWARD, out_intf eth 1/20, nh 10.0.0.8, out_port 1/20, is_trunk 0

ME3600X Bridge Group into a Brocade VPLS for H-QoS

The Cisco ME3600X and ME3800X series support both VPLS and H-VPLS, but MPLS features require an expensive license. If you are currently just joining a few leased lines together into a VPLS, you don’t always need the more expensive MPLS license.

Any carrier circuit terminating on the Cisco ME can be placed into a bridge-group based on the dot1q tag. This can then be passed along, or rewritten to a new vlan tag, into a Brocade XMR port which sticks that frame into a VPLS. Why not just terminate the carrier circuit directly onto the XMR? I’ve recently had to do this because the XMR line cards we have do not support H-QoS. More and more carriers are aggregating many B-end circuits into single high-bandwidth A-end gig links. I need to be able to shape traffic outbound on a per-vlan basis, and within that shaper give priority to certain queues. The ME3600X can do H-QoS, and that’s exactly why I’m using it.

Let’s show a quick diagram so we know what we are talking about:

On the right I have two SRX210s and an 1841 sending tagged traffic into the carrier network. That carrier network is multiplexing all three circuits into a single link on the A end. That goes into the ME3600X. From the ME3600X it goes off to a Brocade XMR. That Brocade is connected to another Brocade over the MPLS core and finally to another 1841 on the other side.

On the ME3600X I could easily span each vlan over to the XMR and have the XMR VPLS them back. The issue with that is that if a host behind SRX1 is sending a ton of traffic to a host behind SRX2, why waste bandwidth hairpinning traffic over to the XMR when it could be bridged locally on the ME3600X?

In other words, instead of doing this:

We do something like this:

Now you may be asking: why not just get the carrier to stick them in the VPLS? That could work if all these links came from the same carrier, but often they don’t.

ME3600X EVC config

ethernet evc TESTLAB
!
interface GigabitEthernet0/1
 description Link to Carrier
 switchport trunk allowed vlan none
 switchport mode trunk
 service instance 1 ethernet TESTLAB
  description SRX1
  encapsulation dot1q 2000
  rewrite ingress tag pop 1 symmetric
  bridge-domain 150
 !
 service instance 2 ethernet TESTLAB
  description SRX2
  encapsulation dot1q 2001
  rewrite ingress tag pop 1 symmetric
  bridge-domain 150
 !
 service instance 3 ethernet TESTLAB
  description 1841
  encapsulation dot1q 2002 second-dot1q 100
  rewrite ingress tag pop 2 symmetric
  bridge-domain 150
 !
interface GigabitEthernet0/24
 description Link to XMR
 switchport trunk allowed vlan none
 switchport mode trunk
 service instance 150 ethernet
  description VPLS Core
  encapsulation dot1q 150
  rewrite ingress tag pop 1 symmetric
  bridge-domain 150
 !
end

I have three service instances configured on gi0/1, each matching the vlan id that the carrier is using for transport. Notice that I can match on both an outer and an inner tag originated from the 1841. Gi0/24 is the port connected to the XMR, and that’s in the same bridge-domain with a vlan tag of 150.
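The symmetric rewrite means whatever tags are popped on ingress are pushed back on egress. Modelling a frame’s tag stack as a list (outermost first), a toy sketch of what the rewrite does:

```python
def ingress_pop(tags, n):
    """'rewrite ingress tag pop n symmetric' on ingress: strip n outer tags.
    Returns (popped_tags, remaining_tags)."""
    return tags[:n], tags[n:]

def egress_push(tags, popped):
    """The symmetric half: push the same tags back on egress."""
    return popped + tags

# Service instance 3: QinQ frame from the 1841, outer tag 2002, inner tag 100
popped, remaining = ingress_pop([2002, 100], 2)
```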

On the XMR side I’m simply matching on vlan 150 and placing those frames in the VPLS.

Brocade Config

vpls DARREN-TESTING 3200
  vpls-peer 172.10.10.1
  vlan 150
   tagged ethe 2/20

Connectivity verification

All my CPEs are running OSPF on their WAN links. I’ve also hard-coded their MAC addresses so it’ll be easy to see in this post.

[email protected]> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
10.0.0.4         ge-0/0/1.2000          Full      4.4.4.4            1    39
10.0.0.3         ge-0/0/1.2000          2Way      3.3.3.3            1    34
10.0.0.2         ge-0/0/1.2000          Full      2.2.2.2          128    32

SRX1 has three neighbours and is fully adjacent with the DR and BDR. This means I can get to the remote VPLS 1841. Let’s take a look at the MAC address table for the VPLS on the PE connected to the ME3600X:

[email protected]#sh mac vpls 3200

Total MAC entries for VPLS 3200: 5 (Local: 3, Remote: 2)

VPLS       MAC Address    L/R Port  Vlan(In-Tag)/Peer ISID      Age
====       ===========    === ====  ================= ====      ===
3200       0000.1111.0000 L   2/20  150               NA        0
3200       0000.2222.0000 L   2/20  150               NA        0
3200       0000.3333.0000 L   2/20  150               NA        0
3200       0000.4444.0000 R   1/10  172.10.10.1       NA        0

The MACs for SRX1, SRX2, and the first 1841 are all learned via tag 150 out interface 2/20. The remote 1841’s MAC is learned via the remote PE router.

We should see all four MACs on the ME3600X out their respective tagged ports:

ME3600X#show mac address-table bridge-domain 150 | begin DYNAMIC
 150    0000.1111.0000    DYNAMIC     Gi0/1+Efp1
 150    0000.2222.0000    DYNAMIC     Gi0/1+Efp2
 150    0000.3333.0000    DYNAMIC     Gi0/1+Efp3
 150    0000.4444.0000    DYNAMIC     Gi0/24+Efp150

The ME3600X is also telling us which service instance under the physical port it’s learning the MAC address from.

QoS Config

Let’s assume the circuit tagged with vlan 2000 has only got 5Mb, vlan 2001 has 10Mb, and vlan 2002 has 15Mb. I want to shape each EVC to its respective speed, and then give priority to DSCP EF packets. I also want to police that queue to 50% to ensure the priority queue cannot hog the link.
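The percent-based policer is derived from the parent shape rate. A quick sketch of the arithmetic, assuming the default 250 ms burst interval (the router’s own output rounds the burst down slightly, showing 78000 bytes):

```python
def police_params(shape_bps, cir_percent, bc_ms=250):
    """Derive the policer CIR and burst (bc) from the parent shape rate."""
    cir_bps = shape_bps * cir_percent // 100
    bc_bytes = cir_bps * bc_ms // 1000 // 8   # bits sent in bc_ms, as bytes
    return cir_bps, bc_bytes

# The 5 Mb EVC with 'police cir percent 50'
cir, bc = police_params(5_000_000, 50)
```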

class-map match-all EF
 match dscp ef
!
policy-map QoS
 class EF
  priority
  police cir percent 50
   conform-action transmit
   exceed-action drop
 class class-default
  queue-limit percent 100
!
policy-map VLAN2000
 class class-default
  shape average 5000000
   service-policy QoS
policy-map VLAN2001
 class class-default
  shape average 10000000
   service-policy QoS
policy-map VLAN2002
 class class-default
  shape average 15000000
   service-policy QoS

Each policy is then attached to the service instance itself. I’ll use service instance 1 as an example here:

interface GigabitEthernet0/1
 service instance 1 ethernet TESTLAB
  description SRX1
  encapsulation dot1q 2000
  rewrite ingress tag pop 1 symmetric
  service-policy output VLAN2000
  bridge-domain 150

Each service instance can have an individual policy, so we have broken the physical port up into many virtual circuits, each with its own shaper and priority queue.
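Conceptually each shaper is a token bucket refilled at the EVC rate; a frame that doesn’t find enough tokens is queued rather than sent. A simplified sketch of that behaviour, not the actual hardware scheduler:

```python
class TokenBucketShaper:
    """Shape to rate_bps with a burst allowance of burst_bytes."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes       # bucket starts full
        self.last = 0.0

    def conforms(self, frame_bytes, now):
        """Refill tokens for elapsed time, then try to send the frame.
        True = send immediately, False = hold in the output queue."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes
            return True
        return False
```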

QoS Verification

If you apply the policy as above, you can’t use show policy-map interface anymore. Instead you need to use show ethernet service instance policy-map:

ME3600X#show ethernet service instance policy-map
  GigabitEthernet0/1: EFP 1

  Service-policy output: VLAN2000

    Class-map: class-default (match-any)
      8 packets, 644 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 5000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : QoS

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 50 % bc 250 ms
            cir 2500000 bps, bc 78000 bytes
            conform-action transmit
            exceed-action drop
          conform: 0 (packets) 0 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          8 packets, 644 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

OSPF packets are going through and are being matched by class-default. We can force some EF traffic by pinging with a TOS value:

[email protected]> ping 10.0.0.1 rapid tos 184
PING 10.0.0.1 (10.0.0.1): 56 data bytes
!!!!!
--- 10.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.766/4.635/7.131/1.269 ms
ME3600X#show ethernet service instance policy-map
  GigabitEthernet0/1: EFP 1

  Service-policy output: VLAN2000

    Class-map: class-default (match-any)
      328 packets, 26028 bytes
      5 minute offered rate 2000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 5000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : QoS

        Class-map: EF (match-all)
          5 packets, 530 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 50 % bc 250 ms
            cir 2500000 bps, bc 78000 bytes
            conform-action transmit
            exceed-action drop
          conform: 5 (packets) 510 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          323 packets, 25498 bytes
          5 minute offered rate 2000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

There we see the five EF packets.
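The tos 184 lands in the EF class because DSCP is the top six bits of the old TOS byte: 184 shifted right by two is 46, which is EF. As a quick check:

```python
def tos_to_dscp(tos):
    """DSCP occupies the upper six bits of the TOS byte."""
    return tos >> 2

def dscp_to_tos(dscp):
    """Going the other way: shift DSCP up, leaving the two ECN bits zero."""
    return dscp << 2
```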

So there you have it. Not too difficult at all to get the basics working.

VPLS Interop – Junos and Netiron – Part 1 of 2

VPLS is a LAN emulation service that runs over an MPLS backbone. I don’t have a ton of free devices on hand, so my actual lab will only consist of two CE devices even though VPLS supports more. VPLS runs over an MPLS core constructed with RSVP-TE or LDP LSP tunnels. VPLS itself requires either LDP (RFC 4762) or BGP (RFC 4761) as the VC signalling protocol. Note that when LDP is used, this is a targeted LDP session and has nothing to do with the protocol you use for the LSPs themselves. To prove this I will be using RSVP-TE LSP tunnels with LDP and BGP on top.

Part one of this series will use LDP as the VC signalling protocol. Part two will use BGP.

I was originally going to include Cisco’s IOS as well, but you need a 6500 or 7600 and I don’t have a spare. The 7200 platform does not support VPLS.

This is the topology I’m going to use:

R2 and R3 are P routers. R1 is a Junos PE and R8 is a Netiron PE. R6 and R10 are both CPEs

The actual P router config is standard RSVP-TE, which I have covered extensively on this site already. The CPEs have both been configured in the same subnet (10.0.0.0/24).

PE Config

Junos

darreno> show configuration interfaces fe-0/0/2
encapsulation ethernet-vpls;
unit 0 {
    family vpls;
}

darreno> show configuration protocols ldp
interface lo0.1;

darreno> show configuration routing-instances
MELLOWD-VPLS {
    instance-type vpls;
    interface fe-0/0/2.0;
    protocols {
        vpls {
            vpls-id 150;
            neighbor 8.8.8.8;
        }
    }
}

On Junos you need to enable LDP on the loopback interface, even when running RSVP. You need to ensure VPLS encapsulation is set on the physical interface. Finally you create the VPLS instance to tie this all together. You specify neighbours under the vpls protocol stanza, which actually creates the targeted LDP sessions (i.e. there is no need to configure a T-LDP session separately).

PE Config

Netiron

Brocade’s Netiron config is actually very simple compared to the above

router mpls
 mpls-interface ve2

  vpls MELLOWD-VPLS 150
  vpls-peer 1.1.1.1
  vpls-mtu 1500
  vlan 550
   tagged ethe 3/19

That’s it. When you enable RSVP on an interface and then set up a VPLS with a neighbour, it automatically sets up a T-LDP session with its peer. Under the VLAN I’ve used tagged ethe 3/19 as it’ll be receiving tagged frames from the CPE router.

Verification

Control Plane

Let’s check to see if the session is actually up:

darreno> show vpls connections
Layer-2 VPN connections:

Legend for connection status (St)
[deleted for brevity]

Legend for interface status
Up -- operational
Dn -- down

Instance: MELLOWD-VPLS
  VPLS-id: 150
    Neighbor                  Type  St     Time last up          # Up trans
    8.8.8.8(vpls-id 150)      rmt   Up     Mar  8 13:52:45 2013           1
      Remote PE: 8.8.8.8, Negotiated control-word: No
      Incoming label: 800000, Outgoing label: 983040
      Negotiated PW status TLV: No
      Local interface: vt-0/2/0.1048579, Status: Up, Encapsulation: ETHERNET
        Description: Intf - vpls MELLOWD-VPLS neighbor 8.8.8.8 vpls-id 150
[email protected]_R8# show mpls vpls id 150
VPLS MELLOWD-VPLS, Id 150, Max mac entries: 8192
 Routing Interface Id 150
 Total vlans: 1, Tagged ports: 1 (1 Up), Untagged ports 0 (0 Up)
 IFL-ID: n/a
  Vlan 550
   L2 Protocol: NONE
   Tagged: ethe 3/19
 VC-Mode: Raw
 Total VPLS peers: 1 (1 Operational)
 Peer address: 1.1.1.1, State: Operational, Uptime: 55 min
  Tnnl in use: tnl2(299984)[RSVP]    Peer Index:0
  Local VC lbl: 983040, Remote VC lbl: 800000
  Local VC MTU: 1500, Remote VC MTU: 1500
  Local VC-Type: Ethernet(0x05), Remote VC-Type: Ethernet(0x05)
 CPU-Protection: OFF
 Local Switching: Enabled
 Extended Counter: ON
 Multicast Snooping: Disabled

Both sessions are up, and each side sees the other’s VC label. Can we show that LDP is actually used?

darreno> show ldp neighbor
Address            Interface          Label space ID         Hold time
8.8.8.8            lo0.1              8.8.8.8:0                36
[email protected]_R8#sh mpls ldp neighbor
 Number of link neighbors: 0
 Number of targeted neighbors: 1

Nbr Transport       Interface         Nbr LDP ID          Max Hold  Time Left
1.1.1.1             (targeted)        1.1.1.1:0           45        42

Verification

Data Plane

So can our CPEs ping each other?

R10#ping 10.0.0.6

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.0.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/3/12 ms
USER6:R6> ping 10.0.0.10 rapid count 5
PING 10.0.0.10 (10.0.0.10): 56 data bytes
!!!!!
--- 10.0.0.10 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.368/1.460/1.755/0.148 ms

No problems there. It’s always handy to check MAC addresses learned over the VPLS. We can check like so:

darreno> show route forwarding-table family vpls
Routing table: MELLOWD-VPLS.vpls
VPLS:
Destination        Type RtRef Next hop           Type Index NhRef Netif
default            perm     0                    rjct  1007     1
fe-0/0/2.0         user     0                    comp  1150     2
vt-0/2/0.1048579   user     0                    comp  1258     2
00:12:f2:93:a9:00/48 dynm     0                  indr 262142     5
                              10.0.4.14         Push 983040, Push 299920(top)  1233     2 ae1.13
00:13:19:22:8f:91/48 dynm     0                  ucst  1146     4 fe-0/0/2.0
00:90:69:a5:13:f1/48 dynm     0                  ucst  1146     4 fe-0/0/2.0
8c:b6:4f:63:46:b8/48 dynm     0                  indr 262142     5
                              10.0.4.14         Push 983040, Push 299920(top)  1233     2 ae1.13
[email protected]_R8#sh mac vpls 150

Total MAC entries for VPLS 150: 4 (Local: 1, Remote: 3)

VPLS       MAC Address    L/R/IB Port  Vlan(In-Tag)/Peer ISID      Age  Type
====       ===========    ====== ====  ================= ====      ===  ====
150        8cb6.4f63.46b8 L      3/19  550               NA        70   NA
150        0a0a.0009.0004 R      3/7   1.1.1.1           NA        250  NA
150        0013.1922.8f91 R      3/7   1.1.1.1           NA        0    NA
150        0090.69a5.13f1 R      3/7   1.1.1.1           NA        70   NA

Both outputs show the local and remote locations of MAC addresses, and both show which neighbour the remote MACs are learned from.
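The learning logic behind both tables is ordinary transparent bridging, with the “port” being either a local interface or a remote VPLS peer. A toy sketch:

```python
class VplsMacTable:
    """MAC -> where it was learned: ('local', port) or ('remote', peer_ip)."""
    def __init__(self):
        self.entries = {}

    def learn(self, src_mac, where):
        # Learned/refreshed from the source MAC of every ingress frame
        self.entries[src_mac] = where

    def forward(self, dst_mac):
        """Known MAC -> one destination; unknown -> flood to all ports/peers."""
        return self.entries.get(dst_mac, "flood")

t = VplsMacTable()
t.learn("8cb6.4f63.46b8", ("local", "ethe 3/19"))
t.learn("0013.1922.8f91", ("remote", "1.1.1.1"))
```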

Management over the VPLS

A handy new feature on the Netiron is the ability to have a Layer 3 interface over the VPLS. This is useful when you need to manage the CPE devices. While in the past you may have needed a separate ‘break-in’ interface attached to the VPLS, you can now do it directly on the Netiron.

interface ve 150
 ip address 10.0.0.8/24
!
router mpls

 vpls MELLOWD-VPLS 150
  router-interface ve 150

This essentially works like an SVI interface on a vlan. Let’s check if we have communication:

[email protected]_R8#ping 10.0.0.10
Sending 1, 16-byte ICMP Echo to 10.0.0.10, timeout 5000 msec, TTL 64
Type Control-c to abort
Reply from 10.0.0.10       : bytes=16 time=1ms TTL=255
Success rate is 100 percent (1/1), round-trip min/avg/max=1/1/1 ms.
[email protected]_R8#ping 10.0.0.6
Sending 1, 16-byte ICMP Echo to 10.0.0.6, timeout 5000 msec, TTL 64
Type Control-c to abort
Reply from 10.0.0.6        : bytes=16 time=1ms TTL=64
Success rate is 100 percent (1/1), round-trip min/avg/max=1/1/1 ms.

IPv6 over IPv4 MPLS Core Interop – IOS, Junos, Netiron – Part 2 of 2 – 6VPE

This is part two of my blog started here: http://mellowd.co.uk/ccie/?p=3300

Same diagram as last time:

This time each CPE is going to be connected to a VRF on the PE router. I’m only using one customer for this post, but this is regular L3VPN so scale as you see fit.

There is one major issue with the Netiron: it doesn’t support the VPNv6 address family :( – I’m using the latest 5.4b code and nothing. So this is a Junos/IOS-only lab.

CPE config

All the CPEs are running BGP with their directly connected PE routers. All are advertising reachability to their IPv6 loopback addresses to their PE router. I’m only showing R6’s config as the others are the same with different addresses:

interfaces {
    ae1 {
        unit 36 {
            vlan-id 36;
            family inet6 {
                address 2001:db8:36::6/64;
            }
        }
    lo0 {
        unit 6 {
            family inet6 {
                address 2001:db8:6666::6666/128;
            }
        }
    }
}
protocols {
    bgp {
        group PROVIDER {
            family inet6 {
                unicast;
            }
            export LOOPBACK;
            neighbor 2001:db8:36::3 {
                peer-as 100;
            }
        }
    }
}
policy-options {
    policy-statement LOOPBACK {
        from {
            protocol direct;
            route-filter 2001:db8:6666::6666/128 exact;
        }
        then accept;
    }
}
routing-options {
    router-id 6.6.6.6;
    autonomous-system 65123 loops 2;
}

You’ll need to statically define your router-id on all sites. If a router is running ONLY IPv6, or your VRF has ONLY an IPv6 address, then the router has no IPv4 address to choose its router-id from. This will be a common theme throughout, as you’ll also need to set router-ids in IPv6-only VRF instances.
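The underlying problem can be sketched as a selection routine: with no IPv4 address anywhere, there is nothing to derive a 32-bit router-id from. The “highest loopback, else highest interface address” order below is a rough approximation of typical behaviour, not any one vendor’s exact rule:

```python
def _key(addr):
    # Compare dotted-quad addresses numerically, not lexically
    return tuple(int(octet) for octet in addr.split("."))

def choose_router_id(configured=None, ipv4_loopbacks=(), ipv4_interfaces=()):
    """Pick a 32-bit router-id roughly the way most BGP/OSPF stacks do."""
    if configured:
        return configured
    if ipv4_loopbacks:
        return max(ipv4_loopbacks, key=_key)
    if ipv4_interfaces:
        return max(ipv4_interfaces, key=_key)
    raise ValueError("IPv6-only: no IPv4 address to derive a router-id from")
```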

PE config

Junos

First we need to set up the VRF to the customer and run BGP. We then need to enable the VPNV6 family in BGP. I’m going to remove the old IPv6 unicast config used in part one of this series.

USER3:R3> show configuration protocols
mpls {
    ipv6-tunneling;
    interface ae1.13;
}
bgp {
    group 6VPE {
        family inet6-vpn {
            unicast;
        }
        peer-as 100;
        neighbor 4.4.4.4;
    }
}

USER3:R3> show configuration routing-instances
CUSTOMER1 {
    instance-type vrf;
    interface fe-0/0/3.36;
    route-distinguisher 3.3.3.3:1;
    vrf-target target:100:1;
    routing-options {
        router-id 3.3.3.3;
    }
    protocols {
        bgp {
            group EXTERNAL {
                advertise-peer-as;
                family inet6 {
                    unicast;
                }
                neighbor 2001:db8:36::6 {
                    peer-as 65123;
                }
            }
        }
    }
}

The IPv6 address family is running with the customer, and the VPNv6 address family with the IOS PE R4. Note that I have to use ‘advertise-peer-as’ on R3, as by default Junos will not advertise a route to an AS that already appears in the AS path.

IOS

The main issue with IOS is that I cannot statically define a BGP router-id if I’m ONLY running IPv6. BGP requires a router-id in the x.x.x.x format, and IOS does not give me the option to hard-code one under the BGP process for the VRF, or under the ipv6 unicast address family. So I had to enable the ipv4 address family under the VRF and define a loopback address in the VRF to use as the router-id. Very silly indeed.

vrf definition CUSTOMER1
 rd 4.4.4.4:100
 !
 address-family ipv4
 exit-address-family
 !
 address-family ipv6
 route-target export 100:1
 route-target import 100:1
 exit-address-family
!
interface Loopback4
 vrf forwarding CUSTOMER1
 ip address 4.4.4.4 255.255.255.255
!
router bgp 100
 bgp router-id vrf auto-assign
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 3.3.3.3 remote-as 100
 neighbor 3.3.3.3 update-source Loopback0
 !
 address-family vpnv6
  neighbor 3.3.3.3 activate
  neighbor 3.3.3.3 send-community extended
 exit-address-family
 !
 address-family ipv6 vrf CUSTOMER1
  no synchronization
  neighbor 2001:DB8:47::7 remote-as 65123
  neighbor 2001:DB8:47::7 activate
 exit-address-family

VRF assigned to the CE-PE link. IPv6 unicast running with the CPE, and VPNv6 running with the Junos PE router R3.

Verification

Let’s first check if our VPNv6 sessions are up:

7200_SRD_R4#show bgp vpnv6 unicast all   neighbors 3.3.3.3 | include state|fam$
  BGP state = Established, up for 03:09:47
    Address family VPNv6 Unicast: advertised and received
 For address family: VPNv6 Unicast
Connection state is ESTAB, I/O status: 1, unread input bytes: 0
USER3:R3> show bgp neighbor 4.4.4.4 | match "Estab|NLRI"
  Type: Internal    State: Established    Flags: 
  NLRI for restart configured on peer: inet6-vpn-unicast
  NLRI advertised by peer: inet6-vpn-unicast
  NLRI for this session: inet6-vpn-unicast

Sessions are up and running the VPNv6 family.

Can the CE’s ping each other from their IPv6 loopbacks?

USER7:R7> ping 2001:db8:6666::6666 source 2001:db8:7777::7777 rapid count 5
PING6(56=40+8+8 bytes) 2001:db8:7777::7777 --> 2001:db8:6666::6666
!!!!!
--- 2001:db8:6666::6666 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 1.520/1.726/1.997/0.195 ms
USER6:R6> ping 2001:db8:7777::7777 source 2001:db8:6666::6666 rapid count 5
PING6(56=40+8+8 bytes) 2001:db8:6666::6666 --> 2001:db8:7777::7777
!!!!!
--- 2001:db8:7777::7777 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 1.533/1.706/1.968/0.147 ms

No problems there :)

IPv6 over IPv4 MPLS Core Interop – IOS, Junos, Netiron – Part 1 of 2 – 6PE

I wanted to test 6PE and 6VPE interoperability between the three major vendors. As always I’m stuck with IOS only in the Cisco world for now, but what can I do. This test will run over a Junos MPLS core. All my MPLS labs thus far have been using RSVP, so let’s change to LDP for now just to mix things up a bit.

6PE allows you to run IPv6 transport over an IPv4 MPLS core. MPLS does not yet have native label support for IPv6 prefixes, so if you need to transport IPv6 traffic over your MPLS core, you need to tunnel it over IPv4. 6PE is one way of doing that. 6VPE is essentially MPLS layer 3 VPN for IPv6 over an IPv4 core, as opposed to 6PE, which is plain IPv6 over an IPv4 MPLS core.
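To make the tunnelling concrete: a 6PE ingress PE pushes two labels onto each IPv6 packet, an outer transport label learned via LDP for the IPv4 BGP next-hop, and an inner label advertised over the labeled-unicast BGP session (often IPv6 explicit-null, value 2). A rough sketch of the 4-byte MPLS label stack entry encoding from RFC 3032, using an arbitrary transport label for illustration:

```python
import struct

def mpls_lse(label: int, exp: int = 0, bottom: bool = False, ttl: int = 255) -> bytes:
    """Encode one 4-byte MPLS label stack entry (RFC 3032):
    20-bit label, 3-bit EXP, 1-bit bottom-of-stack flag, 8-bit TTL."""
    value = (label << 12) | (exp << 9) | (int(bottom) << 8) | ttl
    return struct.pack("!I", value)

# 6PE label stack: LDP transport label on top, IPv6 explicit-null (2) at the bottom
stack = mpls_lse(300016) + mpls_lse(2, bottom=True)
print(stack.hex())  # 493f00ff000021ff
```

The core P routers only ever switch on the outer IPv4-signalled label; the inner label just tells the egress PE that the payload is IPv6.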

6PE

There is no need to worry about CPE kit for now. I’ll simply have an IPv6 loopback address on R3, R4, and R8. These PE routers will peer via MP-BGP over the IPv4-only core.

R3 – Junos

interfaces {
    ae1 {
        unit 13 {
            vlan-id 13;
            family inet {
                address 10.0.4.13/30;
            }
            family inet6;
            family mpls;
        }
    }
    lo0 {
        unit 3 {
            family inet {
                address 3.3.3.3/32;
            }
            family inet6 {
                address 2001:db8:3333::3333/128;
            }
        }
    }
}
protocols {
    mpls {
        ipv6-tunneling;
        interface ae1.13;
    }
    bgp {
        group 6PE {
            family inet6 {
                labeled-unicast {
                    explicit-null;
                }
            }
            export LOOPBACK;
            peer-as 100;
            neighbor 4.4.4.4;
            neighbor 8.8.8.8;
        }
    }
    ldp {
        interface ae1.13;
    }
}
policy-options {
    policy-statement LOOPBACK {
        from {
            protocol direct;
            route-filter 2001:db8:3333::3333/128 exact;
        }
        then accept;
    }
}
routing-options {
    autonomous-system 100;
}

Junos requires you to activate the family inet6 address family on the core-facing interface, even if no address is applied. LDP is configured. BGP has been configured with the family inet6 address family only. You also need to send labelled unicast as well as explicit-null; Junos will not commit if you leave this out.

I’ve then redistributed my IPv6 loopback address into BGP.

R4 – IOS

interface Loopback6
 no ip address
 ipv6 address 2001:DB8:4444::4444/128
!
interface Loopback0
 ip address 4.4.4.4 255.255.255.255
 ip ospf 1 area 0
!
interface FastEthernet1/0.24
 encapsulation dot1Q 24
 ip address 10.0.4.9 255.255.255.252
 ip ospf network point-to-point
 mpls ip
!
router bgp 100
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 3.3.3.3 remote-as 100
 neighbor 3.3.3.3 update-source Loopback0
 neighbor 8.8.8.8 remote-as 100
 neighbor 8.8.8.8 update-source Loopback0
 !
 address-family ipv6
  no synchronization
  network 2001:DB8:4444::4444/128
  neighbor 3.3.3.3 activate
  neighbor 3.3.3.3 send-label
  neighbor 8.8.8.8 activate
  neighbor 8.8.8.8 send-label
 exit-address-family

IOS is a bit easier. Create the loopback, bring up IPv6 unicast BGP sessions with send-label configured, and advertise the IPv6 loopback.
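For reference, send-label changes what the BGP speakers negotiate in the multiprotocol capability exchange: AFI 2 (IPv6) with SAFI 4 (labeled unicast) for 6PE, versus SAFI 1 for plain IPv6 unicast and SAFI 128 for the VPNv6 sessions used in 6VPE. A tiny sketch of the capability value encoding from RFC 4760:

```python
import struct

def mp_capability(afi: int, safi: int) -> bytes:
    """BGP multiprotocol capability value: AFI (2 bytes), reserved (1 byte), SAFI (1 byte)."""
    return struct.pack("!HBB", afi, 0, safi)

AFI_IPV6 = 2
SAFI_LABELED_UNICAST = 4   # what "send-label" negotiates (6PE)
SAFI_MPLS_VPN = 128        # VPNv6 (6VPE)

print(mp_capability(AFI_IPV6, SAFI_LABELED_UNICAST).hex())  # 00020004
print(mp_capability(AFI_IPV6, SAFI_MPLS_VPN).hex())         # 00020080
```

If one side forgets send-label, the session still comes up, but only the plain unicast family is negotiated and no labels are exchanged.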

R8 – Netiron

interface loopback 1
 ip ospf area 0
 ip address 8.8.8.8/32
 ipv6 address 2001:db8:8888::8888/128
!
router bgp
 local-as 100
 next-hop-mpls
 neighbor 3.3.3.3 remote-as 100
 neighbor 3.3.3.3 update-source 8.8.8.8
 neighbor 4.4.4.4 remote-as 100
 neighbor 4.4.4.4 update-source 8.8.8.8

 address-family ipv6 unicast
 network 2001:db8:8888::8888/128
 neighbor 3.3.3.3 activate
 neighbor 3.3.3.3 send-label
 neighbor 4.4.4.4 activate
 neighbor 4.4.4.4 send-label
 exit-address-family
!
router mpls

 mpls-interface ve2
  ldp-enable

Very similar to IOS here.

Verification

First let’s see if each of our boxes has the IPv6 routes to the others’ loopbacks:

USER3:R3> show route 2001:db8:4444::4444/128

inet6.0: 9 destinations, 10 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:4444::4444/128
                   *[BGP/170] 00:19:31, MED 0, localpref 100, from 4.4.4.4
                      AS path: I
                    > to 10.0.4.14 via ae1.13, Push 16, Push 300016(top)

USER3:R3> show route 2001:db8:8888::8888/128

inet6.0: 9 destinations, 10 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:8888::8888/128
                   *[BGP/170] 21:40:12, MED 0, localpref 100, from 8.8.8.8
                      AS path: I
                    > to 10.0.4.14 via ae1.13, Push 794624, Push 300048(top)
7200_SRD_R4#show ipv6 route 2001:DB8:3333::3333/128
Routing entry for 2001:DB8:3333::3333/128
  Known via "bgp 100", distance 200, metric 0, type internal
  Route count is 1/1, share count 0
  Routing paths:
    3.3.3.3%default indirectly connected
      MPLS Required
      Last updated 00:20:47 ago

7200_SRD_R4#show ipv6 route 2001:DB8:8888::8888/128
Routing entry for 2001:DB8:8888::8888/128
  Known via "bgp 100", distance 200, metric 0, type internal
  Route count is 1/1, share count 0
  Routing paths:
    8.8.8.8%default indirectly connected
      MPLS Required
      Last updated 00:21:00 ago
[email protected]_R8#show ipv6 route 2001:db8:3333::3333/128
Type Codes - B:BGP C:Connected I:ISIS L:Local O:OSPF R:RIP S:Static
BGP  Codes - i:iBGP e:eBGP
ISIS Codes - L1:Level-1 L2:Level-2
OSPF Codes - i:Inter Area 1:External Type 1 2:External Type 2
STATIC Codes - d:DHCPv6
Type IPv6 Prefix           Next Hop Router    Interface     Dis/Metric     Uptime src-vrf
Bi   2001:db8:3333::3333/128
                           ::                 LDP (5)       200/0          8m3s   -
label information: 2(OUT)
[email protected]_R8#show ipv6 route 2001:db8:4444::4444/128
Type Codes - B:BGP C:Connected I:ISIS L:Local O:OSPF R:RIP S:Static
BGP  Codes - i:iBGP e:eBGP
ISIS Codes - L1:Level-1 L2:Level-2
OSPF Codes - i:Inter Area 1:External Type 1 2:External Type 2
STATIC Codes - d:DHCPv6
Type IPv6 Prefix           Next Hop Router    Interface     Dis/Metric     Uptime src-vrf
Bi   2001:db8:4444::4444/128
                           ::                 LDP (3)       200/0          7m25s  -
label information: 16(OUT)

Control plane looks fine. Routes are installed with next-hops and their associated labels. Let’s see if data actually flows:
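The Junos output above shows the two-label stack directly: "Push 16, Push 300016(top)" means the LDP transport label 300016 goes on top of the 6PE label 16 that R4 advertised. A small illustrative parser (not vendor code) for pulling the stack out of that line in wire order:

```python
import re

def parse_label_stack(line: str) -> list:
    """Extract pushed labels from a Junos next-hop line, outermost label first."""
    labels = [int(m.group(1)) for m in re.finditer(r"Push (\d+)", line)]
    # Junos prints the inner label first and marks the outer one "(top)",
    # so reverse to get wire order (top of stack first).
    return list(reversed(labels))

line = "> to 10.0.4.14 via ae1.13, Push 16, Push 300016(top)"
print(parse_label_stack(line))  # [300016, 16]
```

Reading the stacks this way makes it easy to spot which label is the LDP transport label (it changes per path) and which is the BGP-signalled 6PE label.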

USER3:R3> ping 2001:db8:4444::4444 source 2001:db8:3333::3333 rapid count 5
PING6(56=40+8+8 bytes) 2001:db8:3333::3333 --> 2001:db8:4444::4444
!!!!!
--- 2001:db8:4444::4444 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 1.262/1.399/1.789/0.196 ms
7200_SRD_R4#ping 2001:DB8:8888::8888 source lo6

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2001:DB8:8888::8888, timeout is 2 seconds:
Packet sent with a source address of 2001:DB8:4444::4444
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 0/0/0 ms
[email protected]_R8#ping ipv6 2001:db8:3333::3333 source 2001:db8:8888::8888 count 5
Sending 5, 16-byte ICMPv6 Echo to 2001:db8:3333::3333
timeout 5000 msec, Hop Limit 64
Type Control-c to abort
Reply from 2001:db8:3333::3333: bytes=16 time=1ms Hop Limit=64
Reply from 2001:db8:3333::3333: bytes=16 time<1ms Hop Limit=64
Reply from 2001:db8:3333::3333: bytes=16 time<1ms Hop Limit=64
Reply from 2001:db8:3333::3333: bytes=16 time<1ms Hop Limit=64
Reply from 2001:db8:3333::3333: bytes=16 time<1ms Hop Limit=64
Success rate is 100 percent (5/5), round-trip min/avg/max=0/0/1 ms.

All looks good to me.

You can find part 2 here: http://mellowd.co.uk/ccie/?p=3546