The Cisco ME3600X and ME3800X series support both VPLS and H-VPLS, but the MPLS features require an expensive license. If all you’re doing is joining a few leased lines together into a VPLS, you don’t always need that more expensive MPLS license.

Any carrier circuit terminating on the Cisco ME can be placed into a bridge-domain based on its dot1q tag. That traffic can then be passed along as-is, or re-written to a new vlan tag, into a Brocade XMR port which sticks the frame into a VPLS. Why not just terminate the carrier circuit directly onto the XMR? I’ve recently had to do this because the XMR line cards we have do not support H-QoS. More and more carriers are aggregating many B-end circuits into a single high-bandwidth A-end gig link. I need to be able to shape traffic outbound on a per-vlan basis, and within each shaper give priority to certain queues. The ME3600X can do H-QoS, and that’s exactly why I’m using it.

Let’s show a quick diagram so we know what we are talking about:
[Diagram: carrier ports topology]

On the right I have two SRX210s and an 1841 sending tagged traffic into the carrier network. That carrier network is multiplexing all three circuits into a single link on the A end. That goes into the ME3600X. From the ME3600X it goes off to a Brocade XMR. That Brocade is connected to another Brocade over the MPLS core and finally to another 1841 on the other side.

On the ME3600X I could easily span each vlan over to the XMR and have the XMR VPLS them together. The issue is that if a host behind SRX1 is sending a ton of traffic to a host behind SRX2, why waste bandwidth hairpinning that traffic over to the XMR when it could be switched locally on the ME3600X?

In other words, instead of doing this:
[Diagram: each vlan hairpinned through the XMR VPLS]

We do something like this:
[Diagram: vlans bridged locally on the ME3600X]

Now you may be asking, why not just get the carrier to stick these circuits into a VPLS themselves? That could work if all the links were coming from the same carrier, but often they aren’t.

ME3600X EVC config

ethernet evc TESTLAB
!
interface GigabitEthernet0/1
 description Link to Carrier
 switchport trunk allowed vlan none
 switchport mode trunk
 service instance 1 ethernet TESTLAB
  description SRX1
  encapsulation dot1q 2000
  rewrite ingress tag pop 1 symmetric
  bridge-domain 150
 !
 service instance 2 ethernet TESTLAB
  description SRX2
  encapsulation dot1q 2001
  rewrite ingress tag pop 1 symmetric
  bridge-domain 150
 !
 service instance 3 ethernet TESTLAB
  description 1841
  encapsulation dot1q 2002 second-dot1q 100
  rewrite ingress tag pop 2 symmetric
  bridge-domain 150
 !
interface GigabitEthernet0/24
 description Link to XMR
 switchport trunk allowed vlan none
 switchport mode trunk
 service instance 150 ethernet
  description VPLS Core
  encapsulation dot1q 150
  rewrite ingress tag pop 1 symmetric
  bridge-domain 150
 !
end

I have three service instances configured on gi0/1, each matching the vlan id that the carrier is using for transport. Notice that I can match on both the outer and inner tags that I’m originating from the 1841. Gi0/24 is the port connected to the XMR, and that’s in the same bridge-domain with a vlan tag of 150.

On the XMR side I’m simply matching on vlan 150 and placing those frames in the VPLS.

Brocade Config

vpls DARREN-TESTING 3200
  vpls-peer 172.10.10.1
  vlan 150
   tagged ethe 2/20

Connectivity verification

All my CPEs are running OSPF on their WAN links. I’ve also hard-coded their MAC addresses so they’ll be easy to spot later in this post.
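
For reference, the WAN side of SRX1 (JR1) would look something like this in Junos. It’s a minimal sketch: the ge-0/0/1.2000 interface comes from the OSPF output below, but the 10.0.0.1/24 address and mask are assumptions on my part:

interfaces {
    ge-0/0/1 {
        /* hard-coded MAC so it stands out in the MAC tables later */
        mac 00:00:11:11:00:00;
        vlan-tagging;
        unit 2000 {
            vlan-id 2000;
            family inet {
                address 10.0.0.1/24;
            }
        }
    }
}
protocols {
    ospf {
        area 0.0.0.0 {
            interface ge-0/0/1.2000;
        }
    }
}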

darreno@JR1> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
10.0.0.4         ge-0/0/1.2000          Full      4.4.4.4            1    39
10.0.0.3         ge-0/0/1.2000          2Way      3.3.3.3            1    34
10.0.0.2         ge-0/0/1.2000          Full      2.2.2.2          128    32

SRX1 has three neighbours and is fully adjacent with the DR and BDR. This means I can get to the remote 1841 over the VPLS. Let’s take a look at the MAC address table for the VPLS on the PE connected to the ME3600X:

SSH@pe2#sh mac vpls 3200

Total MAC entries for VPLS 3200: 5 (Local: 3, Remote: 2)

VPLS       MAC Address    L/R Port  Vlan(In-Tag)/Peer ISID      Age
====       ===========    === ====  ================= ====      ===
3200       0000.1111.0000 L   2/20  150               NA        0
3200       0000.2222.0000 L   2/20  150               NA        0
3200       0000.3333.0000 L   2/20  150               NA        0
3200       0000.4444.0000 R   1/10  172.10.10.1       NA        0

The MACs for SRX1, SRX2, and the first 1841 are all learned via tag 150 on interface 2/20. The remote 1841’s MAC is learned via the remote PE router.

We should see all four MACs on the ME3600X, learned on their respective EFPs:

ME3600X#show mac address-table bridge-domain 150 | begin DYNAMIC
 150    0000.1111.0000    DYNAMIC     Gi0/1+Efp1
 150    0000.2222.0000    DYNAMIC     Gi0/1+Efp2
 150    0000.3333.0000    DYNAMIC     Gi0/1+Efp3
 150    0000.4444.0000    DYNAMIC     Gi0/24+Efp150

The ME3600X is also telling us which service instance under the physical port it’s learning the MAC address from.

QoS Config

Let’s assume the circuit tagged with vlan 2000 has only got 5Mb, vlan 2001 has 10Mb, and vlan 2002 has 15Mb. I want to shape each EVC to its respective speed, and then give priority to DSCP EF packets. I also want to police the priority queue to 50% to ensure it cannot hog the link.

class-map match-all EF
 match dscp ef
!
policy-map QoS
 class EF
  priority
  police cir percent 50
   conform-action transmit
   exceed-action drop
 class class-default
  queue-limit percent 100
!
policy-map VLAN2000
 class class-default
  shape average 5000000
   service-policy QoS
policy-map VLAN2001
 class class-default
  shape average 10000000
   service-policy QoS
policy-map VLAN2002
 class class-default
  shape average 15000000
   service-policy QoS

Each policy is then attached to the service instance itself. I’ll use service instance 1 as an example here:

interface GigabitEthernet0/1
 service instance 1 ethernet TESTLAB
  description SRX1
  encapsulation dot1q 2000
  rewrite ingress tag pop 1 symmetric
  service-policy output VLAN2000
  bridge-domain 150

Each service instance can have an individual policy, so we have broken up the physical port into many virtual circuits, each with its own shaper and priority queue.
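
The other two EVCs get their policies attached in exactly the same way, reusing the service instance numbers and policy names from earlier:

interface GigabitEthernet0/1
 service instance 2 ethernet TESTLAB
  service-policy output VLAN2001
 !
 service instance 3 ethernet TESTLAB
  service-policy output VLAN2002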

QoS Verification

If you apply the policy as above, you can’t use show policy-map interface anymore. Instead you need to use show ethernet service instance policy-map:

ME3600X#show ethernet service instance policy-map
  GigabitEthernet0/1: EFP 1

  Service-policy output: VLAN2000

    Class-map: class-default (match-any)
      8 packets, 644 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 5000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : QoS

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 50 % bc 250 ms
            cir 2500000 bps, bc 78000 bytes
            conform-action transmit
            exceed-action drop
          conform: 0 (packets) 0 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          8 packets, 644 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

OSPF packets are going through and that’s being matched by class-default. We can force some EF traffic by pinging with a TOS value:

darreno@JR2> ping 10.0.0.1 rapid tos 184
PING 10.0.0.1 (10.0.0.1): 56 data bytes
!!!!!
--- 10.0.0.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 3.766/4.635/7.131/1.269 ms
ME3600X#show ethernet service instance policy-map
  GigabitEthernet0/1: EFP 1

  Service-policy output: VLAN2000

    Class-map: class-default (match-any)
      328 packets, 26028 bytes
      5 minute offered rate 2000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 5000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : QoS

        Class-map: EF (match-all)
          5 packets, 530 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 50 % bc 250 ms
            cir 2500000 bps, bc 78000 bytes
            conform-action transmit
            exceed-action drop
          conform: 5 (packets) 510 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          323 packets, 25498 bytes
          5 minute offered rate 2000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

There we see the five EF packets.

So there you have it. Not too difficult at all to get the basics working.


Welcome to part three of many. In this post we’ll go over a Junos-IOS draft rosen interop. This entire time we have been doing draft rosen 6, otherwise known as ASM draft rosen. We could also do draft rosen 7, which is SSM. For now we’ll stick with draft rosen 6.

Same topology as last time. I’ve now added a Junos PE and CE router into the mix.
[Diagram: lab topology with the Junos PE and CE added]

Junos PE core multicast config

I’m not going to show the basic IGP and MPLS config as that’s already been covered before.

darreno@J1> show configuration protocols pim
rp {
    static {
        address 3.3.3.3;
    }
}
interface ge-0/0/1.0;
interface lo0.0;

Junos VRF PIM config

darreno@J1> show configuration routing-instances
A {
    instance-type vrf;
    interface ge-0/0/2.0;
    interface lo0.1;
    route-distinguisher 100:1;
    vrf-target target:100:1;
    protocols {
        ospf {
            export EXPORT;
            area 0.0.0.0 {
                interface ge-0/0/2.0;
                interface lo0.1;
            }
        }
        pim {
            vpn-group-address 239.10.10.10;
            interface ge-0/0/2.0;
            interface lo0.1;
        }
    }
}

I’ve configured the default MDT group within the VRF config, much like IOS. For Junos and IOS to interoperate, I need a loopback unit in the VRF with the same address as the local BGP peering loopback. I’ve added lo0.1 to the above config, and this is the address configured:

darreno@J1> show configuration interfaces lo0
unit 0 {
    family inet {
        address 77.77.77.77/32;
    }
}
unit 1 {
    family inet {
        address 77.77.77.77/32;
    }
}

I also need to add the inet-mdt family to my MP-BGP config:

darreno@J1> show configuration protocols bgp
group L3MVPN {
    local-address 77.77.77.77;
    family inet-vpn {
        unicast;
        multicast;
    }
    family inet-mdt {
        signaling;
    }
    peer-as 100;
    neighbor 2.2.2.2;
    neighbor 4.4.4.4;
    neighbor 5.5.5.5;
}

The CE router is configured as normal enterprise multicast. R9 is still advertising itself as a BSR and RP candidate.
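
For completeness, the PIM config on the new Junos CE (JR2) is nothing special, something along these lines. This is a sketch: ge-0/0/2.0 is JR2’s LAN interface from the IGMP config later on, but the WAN interface name is an assumption, and no RP config is needed since it learns the RP from R9 via bootstrap:

protocols {
    pim {
        /* WAN interface towards the PE (name assumed) */
        interface ge-0/0/1.0;
        /* LAN interface where the receiver sits */
        interface ge-0/0/2.0;
        interface lo0.0;
    }
}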

Verification

JR1 should see the three other PE routers plus the locally attached CE router. Note that Junos uses an mt (multicast tunnel) interface for the remote PE neighbours:

darreno@J1> show pim neighbors instance A
B = Bidirectional Capable, G = Generation Identifier
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority, T = Tracking Bit

Instance: PIM.A
Interface           IP V Mode        Option       Uptime Neighbor addr
ge-0/0/2.0           4 2             HPLGT      00:30:30 10.0.78.8
mt-0/0/0.32768       4 2             HPG        00:15:40 2.2.2.2
mt-0/0/0.32768       4 2             HPG        00:00:27 4.4.4.4
mt-0/0/0.32768       4 2             HPG        00:15:40 5.5.5.5

JR2, our new CE, should see R9 as the RP:

darreno@J2> show pim rps
Instance: PIM.master

Address family INET
RP address      Type        Mode   Holdtime Timeout Groups Group prefixes
9.9.9.9         bootstrap   sparse       25      19      3 224.0.0.0/4

Final confirmation

Let’s join the group 225.5.5.5 on JR2:

darreno@J2> show configuration protocols igmp
interface ge-0/0/2.0 {
    static {
        group 225.5.5.5;
    }
}

NOTE: Junos will not respond to a ping sent to a multicast group by default. In order for your router to respond, you can add the group under protocols sap.
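
On JR2 that’s a one-liner, something like this for the group we’re testing with:

protocols {
    sap {
        listen 225.5.5.5;
    }
}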

R9#ping 225.5.5.5 repeat 1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 225.5.5.5, timeout is 2 seconds:

Reply to request 0 from 10.0.78.8, 28 ms
Reply to request 0 from 10.0.128.12, 56 ms
Reply to request 0 from 10.0.128.12, 52 ms
Reply to request 0 from 10.0.78.8, 40 ms

Both JR2 and R12 are responding as expected.

End of draft rosen

Draft rosen certainly works. However, as noted in part 1, multicast is required in the ISP core, and this is not a very scalable technology even when using the data MDT. One of the biggest issues is that for every multicast customer, your PE routers maintain tunnels and PIM adjacencies with each other. If you have 10 PE routers and 4 multicast customers, that’s 4 separate full meshes of PE-to-PE PIM adjacencies, which is an awful lot of state running in your core. From part four onwards I’ll cover some of the newer ways that we can do MPLS L3VPN multicast.


In part 1, we left off having verified that multicast traffic was going from R9 to R12. Let’s take a deeper look at how those packets move through the network.

Let’s remind ourselves of the network we are working on:
[Diagram: draft rosen lab topology]
R9 is sending a multicast packet off to group 225.5.5.5 which R12 has joined.

default MDT traffic flow

In the CE network, this traffic would be standard multicast. Let’s verify by taking a packet capture. I won’t show the very first and very last hop as that is standard CE multicast.

R1 to R2 link. This is a standard multicast packet going from the CE edge to the PE router.
[Packet capture: R1 to R2 link]

R2 to R3 link. R2 encapsulates the entire CE multicast packet into a GRE tunnel. This GRE tunnel has a source address of 2.2.2.2, R2’s loopback, while the destination address is 239.10.10.10. This is the default MDT address we’ve chosen for this customer.
[Packet capture: R2 to R3 link]

R3 to R5 link. Continuing through the ISP network, we have that same GRE packet going through as a multicast frame destined to 239.10.10.10.
[Packet capture: R3 to R5 link]

R5 to R8 link. R5 will remove the GRE header and forward the original CE multicast packet per standard multicast behaviour.
[Packet capture: R5 to R8 link]

Problems we can see from the captures above

There is one scalability issue staring us in the face. When R2 encapsulates this packet and sends it to the group address of 239.10.10.10, this packet will make its way to R4. R4 currently has no interested receivers, but the PE router itself is still part of that particular multicast group. We can see R3 is sending GRE packets off to R4 right now.
R3 to R4 link:
[Packet capture: R3 to R4 link]
In our test network this isn’t creating much of an issue, but imagine that a source connected to R9 is sending 1080p video over multicast and this customer is connected to 100 different PE routers. Even if the stream is only needed at three offices, it will still be replicated to ALL PE routers involved in the same multicast VRF. To make this a bit more scalable we can instruct the routers to use the data MDT, which I’ll expand on in the next section.

Another issue we can see from the above is that across the ISP core these packets are forwarded via multicast. We already went over the fact that the ISP needs to run multicast, but it’s important to note that these packets are NOT getting label switched. The time spent on your wonderful RSVP-TE with FRR MPLS network has no effect on these draft rosen packets.

Data MDT

To get around the scalability issue of the previous MDT (called the default MDT), we can tell the routers to switch to a new group if the amount of traffic goes over a certain threshold. This second MDT group has a new address, and only the PE routers that actually have live receivers will join it; those with no receivers will not. The encapsulated CE traffic will then only flow through the new group. Let’s configure this and check the traffic flow again.

On R2 I’ll add the following to the existing config:

vrf definition A
 address-family ipv4
  mdt data 239.11.11.11 0.0.0.0 threshold 1
  mdt data threshold 1

If the stream goes above 1Kb/s, the encapsulation of the customer’s CE multicast traffic switches to the GRE group address 239.11.11.11. Note that the data MDT config only needs to be configured on the PE router attached to the multicast source site. If all sites have potential sources, all PE routers would need to be configured this way.

I’ll now ensure R9 is sending larger ICMP multicast packets to force a switchover.
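
Something as simple as a bigger, repeated ping from R9 does the job; the size and repeat values below are just illustrative, as anything that pushes the stream over the 1Kb/s threshold will trigger the switchover:

R9#ping 225.5.5.5 size 1400 repeat 100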

data MDT traffic flow

R2 has switched over to the new data MDT and is sending encapsulated frames to 239.11.11.11:
[Packet capture: R2 sending GRE frames to 239.11.11.11]

R4 has no receivers, and so has not joined this data MDT. We can check the mroute table on R3 to confirm that it’s only sending traffic for 239.11.11.11 out of its interface towards R5:

R3#sh ip mroute 239.11.11.11 | beg \(
(*, 239.11.11.11), 00:02:40/00:02:46, RP 3.3.3.3, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:40/00:02:46

(2.2.2.2, 239.11.11.11), 00:02:36/00:02:57, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:36/00:02:50

A packet capture on the R3-R4 link shows no traffic destined to 239.11.11.11 on that link.

How does R5 know what address to join? How did it know it had to join 239.11.11.11? This is signalled through a UDP control message. Once the multicast stream hits the threshold, R2 sends a UDP control message to the default MDT group:
[Packet capture: MDT join TLV sent to the default MDT group]
Inside this frame is a TLV which specifies the S,G entry plus the data MDT group to be used. We can infer from this that a switchover to the data MDT is only possible with S,G entries. *,G and bidir PIM groups will always use the default MDT, regardless of bandwidth.
R2 sends this update in a UDP frame destined to 224.0.0.13, the ALL-PIM-ROUTERS multicast address. As this is still inside the customer’s multicast network, it is encapsulated in the default MDT itself. You can see this in the packet capture above.
Let’s verify this on the three PE routers:

R2#show ip pim vrf A mdt send
MDT-data send list for VRF: A
  (source, group)                     MDT-data group/num   ref_count
  (9.9.9.9, 225.5.5.5)                239.11.11.11         1

R2 is saying that it has a source for the 225.5.5.5 group: any PE that has receivers, please join 239.11.11.11.

R5#show ip pim vrf A mdt receive detail | beg \[
Joined MDT-data [group/mdt number : source]  uptime/expires for VRF: A
 [239.11.11.11 : 0.0.0.0]  00:09:22/00:01:36
  (9.9.9.9, 225.5.5.5), 00:51:36/00:01:36/00:01:36, OIF count: 1, flags: TY

R5 has an interested listener and has joined the new data MDT.

R4 has cached this information, but as it has no interested listeners it has not joined the new data MDT. If we added a new listener connected to R4, it would join the new group. Likewise R5 will remove itself from the group if it has no more interested listeners. R2 will send an MDT UDP update message once per minute to refresh all the other PE routers.

Note that when using SSM in the ISP core, BGP will be used to signal the above information.
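
I won’t cover that here, but roughly speaking you’d source the MDT groups from the SSM range and activate the MDT address-family towards the other PEs, something like the sketch below on an IOS PE. The 232/8 group addresses are made up, and treat this as an outline rather than a tested config:

ip pim ssm default
!
vrf definition A
 address-family ipv4
  mdt default 232.10.10.10
  mdt data 232.11.11.11 0.0.0.0 threshold 1
 exit-address-family
!
router bgp 100
 address-family ipv4 mdt
  neighbor 4.4.4.4 activate
  neighbor 5.5.5.5 activate
 exit-address-family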

Join me for part three, where we will go over draft rosen interop between IOS and Junos.


mVPN, or Multicast VPN, is a pretty big subject. I’d like to go over a lot of details and so this will become a series of posts. How many I don’t know yet, which is why I’m using X in the title.

I’ll try and start with the more basic types of mVPN and then move onto the more complicated stuff. I’m most certainly not going to go over the basics of multicast or L3VPN themselves, otherwise this series would stretch to 20 posts.

Let’s take the following topology into consideration:
[Diagram: ISP core and customer topology]
R2, R3, R4, and R5 are the ISP routers providing an L3VPN service to Mr Customer. R2, R4, and R5 are the PE routers, while R3 is a P router. The core network is currently running OSPF and LDP. The PE routers are exchanging VPNv4 routes via MP-BGP.

Now Mr. Customer would like to start running multicast within their network. These multicast packets cannot simply run natively over the ISP core, as the source address could be a private address inside the customer’s VPN that the P routers have no route to. Also, if we did try to run multicast natively, no two customers would be able to use the same group.

One solution is for the customer to run GRE tunnels between his WAN routers. That is, he would manually set up a full mesh of tunnels from R1 to R6, R1 to R7, and R1 to R8, and ensure all his other WAN edge routers have tunnels to all the others. This works, but it’s a lot of work and upkeep for the customer.
[Diagram: full mesh of customer GRE tunnels]
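
To picture one leg of that mesh, R1’s tunnel towards R6 would look something like this. This is purely illustrative: the tunnel and WAN addressing are made up, the interface name is an assumption, and R1 would need two more tunnels for R7 and R8:

interface Tunnel0
 description Manual GRE to R6
 ip address 192.168.100.1 255.255.255.252
 ip pim sparse-mode
 tunnel source FastEthernet0/0
 ! R6's WAN address (made up for illustration)
 tunnel destination 203.0.113.6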

Why can’t the ISP sell this as an added service?

The first ISP method of doing this is called draft rosen. The draft has since become RFC 6037, but it is still called draft rosen by many.

Draft rosen, in a nutshell, makes the PE routers do the hard work. Instead of manually configuring GRE tunnels from CE to CE, the PE routers automatically set up GRE tunnels. In order to set up a GRE tunnel, you need a source and destination address. In draft rosen the source address is the PE’s loopback address, while the destination address is a multicast address dedicated to the customer. Hang on, a multicast destination? Yes, that’s correct. If we look at customer 1 for example, we can configure the multicast group address 239.10.10.10 and dedicate it to them. Each PE router joins the group 239.10.10.10 in the ISP core and attempts to form GRE tunnels with the other PE routers. This way each PE sends GRE-tunnelled traffic that ends up at all the other PE routers, thanks to the destination address being a multicast address. These tunnels create what’s called the MDT, or multicast distribution tree.
[Diagram: PE-to-PE GRE tunnels forming the MDT]

This does mean a few things though. For one, we need to enable multicast in the ISP core network so the PE routers can build the MDT for their GRE tunnels. Customer multicast traffic is then encapsulated in those GRE packets and sent off to the other PE routers in the same mVPN.

Configuration

Let’s take a look at configuring this.
R3:

ip multicast-routing
!
interface Loopback0
 ip address 3.3.3.3 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
interface GigabitEthernet1/0
 ip address 10.0.23.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface GigabitEthernet2/0
 ip address 10.0.34.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface GigabitEthernet3/0
 ip address 10.0.35.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
ip pim rp-address 3.3.3.3

To keep things simple, I’m using a static RP address for the core. In the real world this could be static, auto-RP, or BSR. Any of the three could also use anycast RP with MSDP if you wished.
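
As an aside, anycast RP with MSDP in the core would look roughly like this on R3, with R5 mirroring it towards 3.3.3.3. The shared 10.255.255.1 loopback is made up for illustration and isn’t part of this lab:

interface Loopback1
 ip address 10.255.255.1 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
ip pim rp-address 10.255.255.1
!
ip msdp peer 5.5.5.5 connect-source Loopback0
ip msdp originator-id Loopback0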

The MDT group will be configured on the PE routers. Each VRF will need to have multicast enabled, and then an MDT address defined for that VRF. PIM will be enabled on the CE-facing interface, and it also needs to be enabled on the loopback interface which it’s using for its VPNv4 peering. Finally it will need PIM enabled on the core-facing interface itself. This is R2’s relevant config:

vrf definition A
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 !
 address-family ipv4
  mdt default 239.10.10.10
 exit-address-family
!
ip multicast-routing
ip multicast-routing vrf A
!
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
interface GigabitEthernet1/0
 ip address 10.0.23.2 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface FastEthernet2/0
 vrf forwarding A
 ip address 10.0.12.2 255.255.255.0
 ip pim sparse-mode
 ip ospf 2 area 0
!
ip pim rp-address 3.3.3.3

R4 and R5 have a similar config.

At this point, multicast has not yet been enabled in the customer network. What we should see in the core is that all three PE routers are sending traffic to 239.10.10.10; this is the MDT being set up. All three will join the *,239.10.10.10 group. Once the PE routers receive traffic from the other PEs over this group, they will each join the S,G groups directly. We can verify this on R3:

R3#show ip mroute | beg \(
(*, 239.10.10.10), 00:12:46/00:02:45, RP 3.3.3.3, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:30
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

(5.5.5.5, 239.10.10.10), 00:12:46/00:03:16, flags: T
  Incoming interface: GigabitEthernet3/0, RPF nbr 10.0.35.5
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:37

(4.4.4.4, 239.10.10.10), 00:12:46/00:03:14, flags: T
  Incoming interface: GigabitEthernet2/0, RPF nbr 10.0.34.4
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

(2.2.2.2, 239.10.10.10), 00:12:46/00:03:01, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:34
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

The OIL for *,239.10.10.10 is out to all three PE routers. All three are also sources, so you see three S,G groups, each with their loopback as the source. This part is all automatic, so if I add another PE router, I just need to give it the MDT group and enable PIM, and all the other PE routers will set up new GRE tunnels automatically.

Once the GRE tunnels are set up, the PE routers form a PIM adjacency automatically over the multipoint tunnel inside the customer’s VRF. Let’s take a look at R2’s PIM neighbours:

R2#sh ip pim vrf A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.0.12.1         FastEthernet2/0          00:03:17/00:01:23 v2    1 / S P G
4.4.4.4           Tunnel2                  00:01:20/00:01:23 v2    1 / S P G
5.5.5.5           Tunnel2                  00:01:49/00:00:53 v2    1 / DR S P G

fa2/0 is the CE-facing interface. Tunnel2 is the multipoint GRE interface going to R4 and R5, and over that tunnel interface R2 has two adjacencies. R5 is elected as the DR (thanks to it having the highest IP address).

As far as the customer is concerned, they can just run a standard multicast setup. It doesn’t matter what they are running either. I’ll make R9 announce itself as the BSR and RP candidate, and that should filter through to all the other customer sites.

R9#sh run | inc candidate
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0 interval 10

Let’s go to R12 in another site to see if we get the RP mapping information:

R12#sh ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 9.9.9.9 (?), v2
    Info source: 9.9.9.9 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:08:15, expires: 00:02:17

At this point I should be able to join a group on R12 and source traffic from R9. Let’s test this:

R12:
interface FastEthernet1/0
 ip igmp join-group 225.5.5.5

R9#ping 225.5.5.5 repeat 2
Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 225.5.5.5, timeout is 2 seconds:

Reply to request 0 from 10.0.128.12, 88 ms
Reply to request 1 from 10.0.128.12, 132 ms

The CE network shows regular multicast working. Let’s check the PE routers’ mroute tables:

R5#sh ip mroute vrf A 225.5.5.5 | beg \(
(*, 225.5.5.5), 00:11:51/00:03:24, RP 9.9.9.9, flags: S
  Incoming interface: Tunnel1, RPF nbr 2.2.2.2
  Outgoing interface list:
    FastEthernet2/0, Forward/Sparse, 00:11:51/00:03:24

(9.9.9.9, 225.5.5.5), 00:02:00/00:01:29, flags: T
  Incoming interface: Tunnel1, RPF nbr 2.2.2.2
  Outgoing interface list:
    FastEthernet2/0, Forward/Sparse, 00:02:00/00:03:28

R5 shows that the incoming RPF interface is Tunnel1, which is the MDT GRE tunnel.

R2#sh ip mroute vrf A 225.5.5.5 | beg \(
(*, 225.5.5.5), 00:11:16/00:03:03, RP 9.9.9.9, flags: S
  Incoming interface: FastEthernet2/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:11:16/00:03:03

(9.9.9.9, 225.5.5.5), 00:02:23/00:01:06, flags: T
  Incoming interface: FastEthernet2/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:02:23/00:03:03

R2 shows Tunnel2 is in the OIL which again is the MDT GRE tunnel.

So our initial draft rosen configuration is working as expected. Join me for part two, where I’ll go a lot deeper into how those multicast packets get from A to B.


Cisco ME3400 notes

On May 28, 2013, in CCIE, by Darren

The current CCIE SP exam focuses on the metro line of ME3400 switches. For the most part it’s just another switch, but there are a couple of differences which I wanted to put here for my own notes. I’ve spent a lot of time on my L3, so I really need more of these L2 notes.

For references I’m doing this all on a ME-3400G-2CS-A running.

Switch#sh ver | include IOS
Cisco IOS Software, ME340x Software (ME340x-METROIPACCESS-M), Version 12.2(52)SE, RELEASE SOFTWARE

The switch starts off with a blank config.

  • There are three port types: UNI, NNI, and ENI. By default this particular model comes configured like so:
Switch#sh port-type
Port      Name               Vlan       Port Type
--------- ------------------ ---------- ----------------------------
Gi0/1                        1          User Network Interface           (uni)
Gi0/2                        1          User Network Interface           (uni)
Gi0/3                        1          Network Node Interface           (nni)
Gi0/4                        1          Network Node Interface           (nni)

Out of interest, the two default UNI ports are administratively shut, while the NNIs are not:

Switch#sh int status

Port      Name               Status       Vlan       Duplex  Speed Type
Gi0/1                        disabled     1            auto   auto Not Present
Gi0/2                        disabled     1            auto   auto Not Present
Gi0/3                        notconnect   1            auto   auto Not Present
Gi0/4                        notconnect   1            auto   auto Not Present

Let’s no shut interface gi0/1 and stick it in vlan 3:

Switch(config)#int gi0/1
Switch(config-if)#no shut
Switch(config-if)#switch access vlan 3
% Access VLAN does not exist. Creating vlan 3
Switch(config-if)#end
  • Notice that STP and CDP do not run on this UNI port:
Switch#sh span interface gi0/1 detail
no spanning tree info available for GigabitEthernet0/1

Switch#show cdp int
GigabitEthernet0/3 is down, line protocol is down
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds
GigabitEthernet0/4 is down, line protocol is down
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds
  • UNI ports only support EtherChannel mode on; no LACP or PAgP:
Switch(config-if)#channel-group 1 mode ?
  on  Enable Etherchannel only

Let’s change this to an NNI port to see what options we get:

Switch(config-if)#int gi0/1
Switch(config-if)#port-type nni
Switch(config-if)#channel-group 2 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected
Switch#show cdp interface
GigabitEthernet0/1 is up, line protocol is up
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds
GigabitEthernet0/3 is down, line protocol is down
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds
GigabitEthernet0/4 is down, line protocol is down
  Encapsulation ARPA
  Sending CDP packets every 60 seconds
  Holdtime is 180 seconds

 Switch#sh span int gi0/1 detail
 Port 56 (Port-channel1) of VLAN0003 is designated forwarding
   Port path cost 19, Port priority 128, Port Identifier 128.56.
   Designated root has priority 32771, address 10bd.1804.7900
   Designated bridge has priority 32771, address 10bd.1804.7900
   Designated port id is 128.56, designated path cost 0
   Timers: message age 0, forward delay 0, hold 0
   Number of transitions to forwarding state: 1
   Link type is point-to-point by default
   BPDU: sent 30, received 0
  • ENI acts like a UNI port, but gives you STP, CDP, and LACP/PAgP. However, all of this is disabled by default:
Switch(config)#int gi0/1
Switch(config-if)#port-type eni
Switch(config-if)#channel-group 3 mode ?
  active     Enable LACP unconditionally
  auto       Enable PAgP only if a PAgP device is detected
  desirable  Enable PAgP unconditionally
  on         Enable Etherchannel only
  passive    Enable LACP only if a LACP device is detected

Switch(config-if)#cdp ?
  enable  Enable CDP on interface

Switch(config-if)#spanning-tree
  • The spanning-tree mode is rapid-PVST by default, but this can be changed. As noted before, I haven’t changed anything yet:
Switch#sh span | include protocol
  Spanning tree enabled protocol rstp

This can be changed:

Switch(config)#spanning-tree mode ?
  mst         Multiple spanning tree mode
  pvst        Per-Vlan spanning tree mode
  rapid-pvst  Per-Vlan rapid spanning tree mode
  • VTP is not supported:
Switch#sh vtp ?
% Unrecognized command
  • DTP is not supported. You either run a static trunk or a static access port, with no DTP negotiation (which I usually disable anyway):
Switch#sh int gi0/1 switchport  | include Nego
Negotiation of Trunking: Off
  • UNI and ENI ports cannot speak to each other by default, only to an NNI port. This is similar to private-vlans (in particular, isolated private-vlans).
  • Note that private-vlans are still supported as a separate technology.

To view the type, check show vlan uni-vlan. When it’s empty it’s the default ‘isolated’ type (very annoying that it doesn’t show):

Switch#sh vlan uni-vlan

VLAN Type              Ports
---- ----------------- -------------------------------------------------------

You can change this to act like a community private vlan. This is so ENI/UNI ports in the same vlan can speak to each other, as well as the NNI port in the same vlan:

Switch#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
Switch(config)#vlan 3
Switch(config-vlan)#uni-vlan community
Switch(config-vlan)#end
Switch#
*Mar  1 00:33:02.706: %SYS-5-CONFIG_I: Configured from console by console
Switch#
Switch#sh vlan uni-vlan

VLAN Type              Ports
---- ----------------- -------------------------------------------------------
3    UNI community     Gi0/1
  • ISL is not supported, i.e. when configuring a trunk you just need switchport mode trunk. There’s no need to specify an encapsulation type when there is only one.
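
A trunk therefore ends up as simple as this (shown on an NNI port here; the allowed vlan is just an example):

Switch(config)#interface gi0/3
Switch(config-if)#switchport mode trunk
Switch(config-if)#switchport trunk allowed vlan 3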
  • These are the SDM types with this particular model:
Switch(config)#sdm prefer ?
  default             Default bias
  dual-ipv4-and-ipv6  Support both IPv4 and IPv6
  layer-2             No routing
Switch#sh sdm prefer
 The current template is "default" template.
 The selected template optimizes the resources in
 the switch to support this level of features for
 8 routed interfaces and 1024 VLANs.

  number of unicast mac addresses:                  5K
  number of IPv4 IGMP groups + multicast routes:    1K
  number of IPv4 unicast routes:                    9K
    number of directly-connected IPv4 hosts:        5K
    number of indirect IPv4 routes:                 4K
  number of IPv4 policy based routing aces:         0.5K
  number of IPv4/MAC qos aces:                      0.5K
  number of IPv4/MAC security aces:                 1K
  • MPLS is not supported:
Switch(config)#mpls ?
% Unrecognized command
  • MLS QoS is not supported, but MQC QoS is supported.
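
As a quick illustration of the MQC side, policing EF inbound on a port looks like standard MQC. The names and rates below are just examples, and the exact match/police options supported vary by image, so treat this as a sketch:

class-map match-all CUSTOMER-EF
 match ip dscp ef
!
policy-map CUSTOMER-IN
 class CUSTOMER-EF
  police 1000000 8000 exceed-action drop
!
interface GigabitEthernet0/1
 service-policy input CUSTOMER-IN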
  • Pretty much everything else is like a regular 3560/3750 switch.
