
Multicast over L3VPN – Part 3 of X – Junos and IOS interop

Welcome to part three of many. In this post we’ll go over a Junos-IOS draft rosen interop. This entire time we have been doing draft rosen 6, otherwise known as ASM draft rosen. We could also do draft rosen 7, which is SSM. For now we’ll stick with draft rosen 6.

Same topology as last time. I’ve now added a Junos PE and CE router into the mix.
[Topology diagram: the previous IOS topology with the new Junos PE and CE added]

Junos PE core multicast config

I’m not going to show the basic IGP and MPLS config as that’s already been covered before.

darreno@J1> show configuration protocols pim
rp {
    static {
        address 3.3.3.3;
    }
}
interface ge-0/0/1.0;
interface lo0.0;

Junos VRF PIM config

darreno@J1> show configuration routing-instances
A {
    instance-type vrf;
    interface ge-0/0/2.0;
    interface lo0.1;
    route-distinguisher 100:1;
    vrf-target target:100:1;
    protocols {
        ospf {
            export EXPORT;
            area 0.0.0.0 {
                interface ge-0/0/2.0;
                interface lo0.1;
            }
        }
        pim {
            vpn-group-address 239.10.10.10;
            interface ge-0/0/2.0;
            interface lo0.1;
        }
    }
}

I've configured the default MDT group within the VRF config, much like IOS. For Junos and IOS to interoperate, the VRF needs a loopback unit with the same address as the local BGP peering loopback. I've added lo0.1 to the above config, and this is the address configured:

darreno@J1> show configuration interfaces lo0
unit 0 {
    family inet {
        address 77.77.77.77/32;
    }
}
unit 1 {
    family inet {
        address 77.77.77.77/32;
    }
}

I need to add the MDT group to my MP-BGP config:

darreno@J1> show configuration protocols bgp
group L3MVPN {
    local-address 77.77.77.77;
    family inet-vpn {
        unicast;
        multicast;
    }
    family inet-mdt {
        signaling;
    }
    peer-as 100;
    neighbor 2.2.2.2;
    neighbor 4.4.4.4;
    neighbor 5.5.5.5;
}

The CE router is configured as normal enterprise multicast. R9 is still advertising itself as a BSR and RP candidate.
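For reference, the Junos CE's PIM config can be as simple as the sketch below. JR2's actual config isn't shown in this post, so treat this as an illustration; note there's no RP configured since it's learned via bootstrap from R9.

darreno@J2> show configuration protocols pim
interface all;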

Verification

JR1 should see the three other PE routers plus the locally attached CE router. Note that Junos uses an mt (multicast tunnel) interface for the MDT.

darreno@J1> show pim neighbors instance A
B = Bidirectional Capable, G = Generation Identifier
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority, T = Tracking Bit

Instance: PIM.A
Interface           IP V Mode        Option       Uptime Neighbor addr
ge-0/0/2.0           4 2             HPLGT      00:30:30 10.0.78.8
mt-0/0/0.32768       4 2             HPG        00:15:40 2.2.2.2
mt-0/0/0.32768       4 2             HPG        00:00:27 4.4.4.4
mt-0/0/0.32768       4 2             HPG        00:15:40 5.5.5.5

JR2, our new CE, should see R9 as the RP:

darreno@J2> show pim rps
Instance: PIM.master

Address family INET
RP address      Type        Mode   Holdtime Timeout Groups Group prefixes
9.9.9.9         bootstrap   sparse       25      19      3 224.0.0.0/4

Final confirmation

Let's join the group 225.5.5.5 on JR2:

darreno@J2> show configuration protocols igmp
interface ge-0/0/2.0 {
    static {
        group 225.5.5.5;
    }
}

NOTE: Junos will not respond to a ping sent to a multicast group by default. To get the router to respond, you can add the group under protocols sap.
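On JR2 that could look something like the sketch below, listening on the same group we've just joined:

darreno@J2> show configuration protocols sap
listen 225.5.5.5;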

R9#ping 225.5.5.5 repeat 1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 225.5.5.5, timeout is 2 seconds:

Reply to request 0 from 10.0.78.8, 28 ms
Reply to request 0 from 10.0.128.12, 56 ms
Reply to request 0 from 10.0.128.12, 52 ms
Reply to request 0 from 10.0.78.8, 40 ms

Both JR2 and R12 are responding as expected.

End of draft rosen

Draft rosen certainly works. However, as noted in part 1, multicast is required in the ISP core, and the technology doesn't scale very well, even when using the data MDT. One of the biggest issues is that for every multicast customer, your PE routers maintain GRE tunnels and PIM adjacencies with each other. If you have 10 PE routers with 4 multicast customers, each PE holds 9 PIM adjacencies per customer over the MDT tunnels (36 per PE), and the core carries a *,G plus up to 10 S,G entries for each of the 4 default MDTs. That's an awful lot of state running in your core. In parts four and onwards I'll cover some of the newer ways that we can do MPLS L3VPN multicast.

Multicast over L3VPN – Part 2 of X – draft rosen traffic flow and the data MDT

In part 1, we left off with verifying that multicast traffic was going from R9 to R12. Let's dig into how those packets move through the network.

Let's remind ourselves of the network we are working on:
[Topology diagram]
R9 is sending a multicast packet off to group 225.5.5.5 which R12 has joined.

default MDT traffic flow

In the CE network, this traffic would be standard multicast. Let’s verify by taking a packet capture. I won’t show the very first and very last hop as that is standard CE multicast.

R1 to R2 link. This is a standard multicast packet going from the CE edge to the PE router.
[Packet capture: R1 to R2 link]

R2 to R3 link. R2 encapsulates the entire CE multicast packet into a GRE tunnel. This GRE tunnel has a source address of 2.2.2.2, R2's loopback, while the destination address is 239.10.10.10. This is the default MDT address we've chosen for this customer.
[Packet capture: R2 to R3 link]

R3 to R5 link. Continuing through the ISP network, we have that same GRE packet going through as a multicast frame destined to 239.10.10.10.
[Packet capture: R3 to R5 link]

R5 to R8 link. R5 will remove the GRE header and forward the original CE multicast packet per standard multicast behaviour.
[Packet capture: R5 to R8 link]

Problems we can see from the captures above

There is one scalability issue staring us in the face. When R2 encapsulates this packet and sends it to the group address of 239.10.10.10, this packet will make its way to R4. R4 currently has no interested receivers, but the PE router itself is still part of that particular multicast group. We can see R3 is sending GRE packets off to R4 right now.
R3 to R4 link:
[Packet capture: R3 to R4 link]
In our test network this isn't creating much of an issue, but imagine that a source connected to R9 is sending 1080p video over multicast and this customer is connected to 100 different PE routers. Even if the stream is only needed by three offices, it will still be replicated to ALL PE routers involved in the same multicast VRF. To make this a bit more scalable we can instruct the routers to use the data MDT, which I'll expand on in the next section.

Another thing we can see from the captures is that across the ISP core these packets are being forwarded via multicast. We already went over the fact that the ISP needs to run multicast, but it's important to note that these packets are NOT getting label switched. The time spent on your wonderful RSVP-TE with FRR MPLS network has no effect on these draft rosen packets.

Data MDT

To get around the scalability issue of the previous MDT (called the default MDT) we can tell the routers to switch to a new group if the traffic rate goes over a certain threshold. This second MDT group will have a new address, and only the PE routers that actually have live receivers will join it. Those that have no receivers will not join the new group. The encapsulated CE traffic will then flow only over the new group. Let's configure this and check traffic flow again.

On R2 I'll add the following to the existing VRF config:

vrf definition A
 address-family ipv4
  mdt data 239.11.11.11 0.0.0.0 threshold 1
  mdt data threshold 1

If the stream goes above 1 kbps, the encapsulation of the customer's multicast traffic switches to the GRE group address 239.11.11.11. Note that the data MDT config only needs to be on the PE router attached to the multicast source site. If all sites have potential sources, all PE routers would need to be configured that way.

I’ll now ensure R9 is sending larger ICMP multicast packets to force a switchover.
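Something like this on R9 will comfortably exceed the 1 kbps threshold (the size and repeat values here are just for illustration):

R9#ping 225.5.5.5 size 1400 repeat 100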

data MDT traffic flow

R2 has switched over to the new data MDT and is sending encapsulated frames to 239.11.11.11:
[Packet capture: R2 sending GRE-encapsulated traffic to 239.11.11.11]

R4 has no receivers, and so has not joined this data MDT. We can check the mroute table on R3 to confirm that traffic for 239.11.11.11 is only being sent out of its interface towards R5:

R3#sh ip mroute 239.11.11.11 | beg \(
(*, 239.11.11.11), 00:02:40/00:02:46, RP 3.3.3.3, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:40/00:02:46

(2.2.2.2, 239.11.11.11), 00:02:36/00:02:57, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:36/00:02:50

A packet capture on the R3-R4 link shows no traffic destined to 239.11.11.11 on that link.

How does R5 know what address to join? How did it know to join 239.11.11.11? This is signaled through a UDP control message. Once the multicast stream hits the threshold, R2 sends a UDP control message to the default MDT group:
[Packet capture: MDT join message sent to the default MDT group]
Inside this frame is a TLV which specifies the S,G entry plus the data MDT group to be used. We can infer from this that a switchover to the data MDT is only possible with S,G entries. *,G and bidir PIM groups will always use the default MDT, regardless of bandwidth.
R2 will send this update via a UDP frame destined to 224.0.0.13, the ALL-PIM-ROUTERS multicast address. As this is still inside the customer's multicast network, it is encapsulated in the default MDT itself. You can see this from the packet capture above.
Let’s verify this on the three PE routers:

R2#show ip pim vrf A mdt send
MDT-data send list for VRF: A
  (source, group)                     MDT-data group/num   ref_count
  (9.9.9.9, 225.5.5.5)                239.11.11.11         1

R2 is saying that it has a source for the 225.5.5.5 group: any PE that has receivers, please join 239.11.11.11.

R5#show ip pim vrf A mdt receive detail | beg \[
Joined MDT-data [group/mdt number : source]  uptime/expires for VRF: A
 [239.11.11.11 : 0.0.0.0]  00:09:22/00:01:36
  (9.9.9.9, 225.5.5.5), 00:51:36/00:01:36/00:01:36, OIF count: 1, flags: TY

R5 has an interested listener and has joined the new data MDT.

R4 has cached this information, but as it has no interested listeners it has not joined the new data MDT. If we added a new listener connected to R4, it would join the new group. Likewise R5 will remove itself from the group if it has no more interested listeners. R2 will send an MDT UDP update message once per minute to refresh all the other PE routers.

Note that when using SSM in the ISP core, BGP will be used to signal the above information.

Join me for part three where we will go over draft rosen interop between IOS and Junos.

Multicast over L3VPN – Part 1 of X – draft rosen concept and configuration

mVPN, or Multicast VPN, is a pretty big subject. I’d like to go over a lot of details and so this will become a series of posts. How many I don’t know yet, which is why I’m using X in the title.

I'll try and start with the more basic types of mVPN and then move on to the more complicated stuff. I'm most certainly not going to go over the basics of multicast or L3VPN themselves, otherwise this series would stretch to 20 posts.

Let’s take the following topology into consideration:
[Topology diagram]
R2, R3, R4, and R5 are the ISP routers providing a L3VPN service to Mr Customer. R2, R4, and R5 are the PE routers while R3 is a P router. The core network is currently running OSPF and LDP. The PE routers are exchanging VPNv4 routes via MP-BGP.

Now Mr Customer would like to start running multicast within their network. These multicast packets cannot be forwarded natively over the ISP core, as the source address could be a private address inside the customer's VPN, which the P routers know nothing about. And even if we did try to run multicast natively, no two customers would be able to use the same group.

One solution is for the customer to run GRE tunnels between his WAN routers, i.e. manually set up a full mesh of tunnels from R1 to R6, R1 to R7, and R1 to R8, and make sure all his other WAN edge routers have tunnels to all the others. This works, but it's a lot of work and upkeep for the customer.
[Diagram: customer-managed full mesh of GRE tunnels]
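Just to illustrate the kind of per-tunnel config the customer would have to maintain on every WAN router, here's a rough sketch of a single tunnel on R1 towards R6 (the tunnel number and addresses are made up, and it assumes R6 has a loopback reachable over the VPN):

interface Tunnel16
 ip address 172.16.16.1 255.255.255.252
 ip pim sparse-mode
 tunnel source Loopback0
 tunnel destination 6.6.6.6

Now repeat that for every pair of WAN routers, and keep it all up to date as sites come and go.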

Why can't the ISP sell this as an added service?

The first ISP method of doing this is called draft rosen. The draft has since been published as RFC 6037, but is still called draft rosen by many.

Draft rosen, in a nutshell, makes the PE routers do the hard work. Instead of manually configuring GRE tunnels from CE to CE, the PE routers will automatically set up GRE tunnels. In order to set up a GRE tunnel, you need a source and destination address. In draft rosen the source address will be the PE's loopback address, while the destination address will be a multicast address dedicated to the customer. Hang on, a multicast destination? Yes, that's correct. If we look at customer 1 for example, we can configure the multicast group address 239.10.10.10 and dedicate it to them. Each PE router will attempt to form GRE tunnels with the other PE routers, and each PE router also joins the group 239.10.10.10 in the ISP core. This way each PE sends GRE-tunneled traffic that ends up at all the other PE routers, thanks to the destination address being a multicast address. These tunnels create what's called the MDT, or multicast distribution tree.
[Diagram: PE-to-PE GRE tunnels forming the MDT]

This does mean a few things though. For one, we need to enable multicast in the ISP core network so that the PE routers can create the MDT over which they build their GRE tunnels. Customer multicast traffic is then encapsulated in those GRE packets and sent off to the other PE routers in the same mVPN.

Configuration

Let’s take a look at configuring this.
R3:

ip multicast-routing
!
interface Loopback0
 ip address 3.3.3.3 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
interface GigabitEthernet1/0
 ip address 10.0.23.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface GigabitEthernet2/0
 ip address 10.0.34.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface GigabitEthernet3/0
 ip address 10.0.35.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
ip pim rp-address 3.3.3.3

To keep things simple, I'm using a static RP address for the core. In the real world this could be static, Auto-RP, or BSR. Any of the three could also use anycast RP with MSDP if you so wished.
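As a rough idea of what the anycast RP option might look like on one of two core RPs (purely a sketch: 10.255.255.1 is a made-up shared RP address, 13.13.13.13 stands in for the other RP's loopback, and the mirror-image config would go on that router):

interface Loopback100
 ip address 10.255.255.1 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
ip pim rp-address 10.255.255.1
!
ip msdp peer 13.13.13.13 connect-source Loopback0
ip msdp originator-id Loopback0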

The MDT group will be configured on the PE routers. Each VRF will need to have multicast enabled, and then an MDT address defined for that VRF. PIM will be enabled on the CE-facing interface, and it also needs to be enabled on the loopback interface used for the VPNv4 peering. Finally it will need PIM enabled on the core-facing interface itself. This is R2's relevant config:

vrf definition A
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 !
 address-family ipv4
  mdt default 239.10.10.10
 exit-address-family
!
ip multicast-routing
ip multicast-routing vrf A
!
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
interface GigabitEthernet1/0
 ip address 10.0.23.2 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface FastEthernet2/0
 vrf forwarding A
 ip address 10.0.12.2 255.255.255.0
 ip pim sparse-mode
 ip ospf 2 area 0
!
ip pim rp-address 3.3.3.3

R4 and R5 have a similar config.

At this point, multicast has not yet been enabled in the customer network. What we should see in the core is that all three PE routers are sending traffic to 239.10.10.10, which is the MDT being set up. All three join the *,239.10.10.10 group, and once the PE routers receive traffic from the other PEs over this group, they each join the S,G groups directly. We can verify this on R3:

R3#show ip mroute | beg \(
(*, 239.10.10.10), 00:12:46/00:02:45, RP 3.3.3.3, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:30
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

(5.5.5.5, 239.10.10.10), 00:12:46/00:03:16, flags: T
  Incoming interface: GigabitEthernet3/0, RPF nbr 10.0.35.5
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:37

(4.4.4.4, 239.10.10.10), 00:12:46/00:03:14, flags: T
  Incoming interface: GigabitEthernet2/0, RPF nbr 10.0.34.4
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

(2.2.2.2, 239.10.10.10), 00:12:46/00:03:01, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:34
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

The OIL for *,239.10.10.10 points out to all three PE routers. All three are also sources, so you see three S,G entries, each sourced from a PE loopback. This part is all automatic, so if I add another PE router I just need to add it to the MDT group and enable PIM, and all the other PE routers will set up new GRE tunnels automatically.

Once the GRE tunnels are set up, the PE routers will form a PIM adjacency automatically over the multipoint tunnel inside the customer's VRF. Let's take a look at R2's PIM neighbours:

R2#sh ip pim vrf A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.0.12.1         FastEthernet2/0          00:03:17/00:01:23 v2    1 / S P G
4.4.4.4           Tunnel2                  00:01:20/00:01:23 v2    1 / S P G
5.5.5.5           Tunnel2                  00:01:49/00:00:53 v2    1 / DR S P G

fa2/0 is the CE-facing interface. Tunnel2 is the multipoint GRE interface going to R4 and R5. Over the tunnel interface R2 has two adjacencies. R5 is elected as the DR, thanks to it having the highest IP address.

As far as the customer is concerned, they can just run a standard multicast setup. It doesn't matter what version they are running either. I'll make R9 announce itself as the BSR and RP, and that should filter through to all the other customer sites.

R9#sh run | inc candidate
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0 interval 10

Let’s go to R12 in another site to see if we get the RP mapping information:

R12#sh ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 9.9.9.9 (?), v2
    Info source: 9.9.9.9 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:08:15, expires: 00:02:17

At this point I should be able to join a group on R12 and source traffic from R9. Let’s test this:

R12
interface FastEthernet1/0
 ip igmp join-group 225.5.5.5
R9#ping 225.5.5.5 repeat 2
Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 225.5.5.5, timeout is 2 seconds:

Reply to request 0 from 10.0.128.12, 88 ms
Reply to request 1 from 10.0.128.12, 132 ms

The CE network shows regular multicast working. Let's check the PE routers' mroute tables:

R5#sh ip mroute vrf A 225.5.5.5 | beg \(
(*, 225.5.5.5), 00:11:51/00:03:24, RP 9.9.9.9, flags: S
  Incoming interface: Tunnel1, RPF nbr 2.2.2.2
  Outgoing interface list:
    FastEthernet2/0, Forward/Sparse, 00:11:51/00:03:24

(9.9.9.9, 225.5.5.5), 00:02:00/00:01:29, flags: T
  Incoming interface: Tunnel1, RPF nbr 2.2.2.2
  Outgoing interface list:
    FastEthernet2/0, Forward/Sparse, 00:02:00/00:03:28

R5 shows that the incoming RPF interface is Tunnel1, which is the MDT GRE tunnel.

R2#sh ip mroute vrf A 225.5.5.5 | beg \(
(*, 225.5.5.5), 00:11:16/00:03:03, RP 9.9.9.9, flags: S
  Incoming interface: FastEthernet2/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:11:16/00:03:03

(9.9.9.9, 225.5.5.5), 00:02:23/00:01:06, flags: T
  Incoming interface: FastEthernet2/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:02:23/00:03:03

R2 shows Tunnel2 is in the OIL which again is the MDT GRE tunnel.

So our initial draft rosen configuration is working as expected. Join me for part two where I'll go a lot deeper into how those multicast packets get from A to B.

Good defaults to use on your Cisco devices

There are a number of things that I put into my standard router/switch builds, and I thought I’d share them here. If you have any to add, please do!

service timestamps debug datetime msec localtime show-timezone year
service timestamps log datetime msec localtime show-timezone year

service password-encryption

clock timezone GMT 0
clock summer-time BST recurring last Sun Mar 1:00 last Sun Oct 2:00

no ip domain lookup

no ip ospf name-lookup

line con 0
 exec-timeout 30 0
 logging synchronous
line vty 0 4
 exec-timeout 5 0
 logging synchronous

So what exactly does the above do? Let's break it down one line at a time.

service timestamps debug datetime msec localtime show-timezone year
service timestamps log datetime msec localtime show-timezone year

This tells the router to include the timezone, date, and year in log and debug messages, down to the millisecond. Very handy when troubleshooting.

service password-encryption

A no-brainer really. Encrypt your passwords in the config.

clock timezone GMT 0
clock summer-time BST recurring last Sun Mar 1:00 last Sun Oct 2:00

You’ll need to change this to suit your timezone. This correctly tells my devices what timezone they are in, and when to change their clocks. You’ll never need to add or subtract an hour again!
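For example, if your devices were in US Eastern time instead, you'd use something along these lines (double-check your own DST rules):

clock timezone EST -5
clock summer-time EDT recurring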

no ip domain lookup

Ever mistyped a command only for the router to try and resolve it for what seems like 5 minutes? This command disables DNS lookups, so mistyped commands fail immediately instead of hanging.

no ip ospf name-lookup

If you run OSPF and do a show ip ospf neighbor, you'll notice it sometimes takes forever. Why? IOS is trying to resolve the neighbor IDs to hostnames through reverse DNS. I want the output to be quick, and I want to see my neighbors by their IDs anyway. This command disables that reverse DNS lookup.

line con 0
 exec-timeout 30 0
 logging synchronous
line vty 0 4
 exec-timeout 5 0
 logging synchronous

If I'm consoled into the device, I don't want to have to keep logging back in because of a timeout, so I set this to 30 minutes. You could set it to 0 0, but be careful: that means the session will NEVER time out (unless the device reboots or something). You could console in, make some changes, come back in 3 months, plug that console cable back in, and you'll still be connected!
Logging synchronous stops log messages from being printed in the middle of whatever you're typing; IOS re-displays your input on a new line after the message instead.