In part 1, we left off with verifying that multicast traffic was flowing from R9 to R12. Let's take a deeper look at how those packets move through the network.
default MDT traffic flow
In the CE network, this traffic would be standard multicast. Let's verify by taking a packet capture. I won't show the very first and very last hops, as those are standard CE multicast.
R2 to R3 link. R2 encapsulates the entire CE multicast packet into a GRE tunnel. This GRE tunnel has a source address of 184.108.40.206 (R2's loopback), while the destination address is 220.127.116.11, the default MDT address we've chosen for this customer.
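The on-the-wire layering is worth spelling out. A sketch of what the capture shows (addresses as above; the inner packet is the untouched CE multicast frame):

```
Outer IP : src 184.108.40.206 (R2 loopback) -> dst 220.127.116.11 (default MDT group), protocol 47
GRE      : protocol type 0x0800 (IPv4)
Inner IP : original CE multicast packet, C-source -> C-group, untouched
Payload  : CE data (e.g. the ICMP traffic from R9)
```

Because the outer destination is itself a multicast group, the P routers forward this packet using ordinary core multicast state; they never look inside the GRE payload.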
Problems we can see in the capture above
There is one scalability issue staring us in the face. When R2 encapsulates this packet and sends it to the group address 18.104.22.168, the packet will make its way to R4. R4 currently has no interested receivers, but the PE router itself is still part of that multicast group. We can see R3 sending GRE packets off to R4 right now.
R3 to R4 link:
In our test network this isn't creating much of an issue, but imagine a source connected to R9 sending 1080p video over multicast, with this customer connected to 100 different PE routers. Even if the stream is only needed between three offices, it will still be replicated to ALL PE routers in the same multicast VRF. To make this more scalable we can instruct the routers to use the data MDT, which I'll expand on in the next section.
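You can see this replication directly in the core: the provider-side mroute entry for the default MDT group has an outgoing interface toward every PE in the VRF, receivers or not. A quick way to check (group address follows the capture above):

```
R3#show ip mroute 18.104.22.168
```

The outgoing interface list on each P/PE router fans out toward all member PEs, which is exactly why an uninterested PE like R4 still receives the stream.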
Another issue we can see from the above is that across the ISP core, these packets are being forwarded via multicast. We already went over the fact that the ISP needs to run multicast, but it's important to note that these packets are NOT getting label switched. The time spent on your wonderful RSVP-TE with FRR MPLS network has no effect on these draft-rosen packets.
To get around the scalability issue of the previous MDT (called the default MDT), we can tell the routers to switch to a new group if the traffic rate exceeds a configured threshold. This second MDT group will have a new address, and only PE routers that actually have live receivers will join it. The encapsulated CE traffic will then flow only through the new group. Let's configure this and check traffic flow again.
On R2 I'll add the following to the existing config:
vrf definition A
 address-family ipv4
  mdt data 22.214.171.124 0.0.0.0 threshold 1
If the stream goes above 1 Kb/s, switch the encapsulation of the customer CE multicast traffic to GRE group address 126.96.36.199. Note that the data MDT config only needs to be configured on the PE router attached to the multicast source site. If all sites have potential sources, all PE routers would need to be configured that way.
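As a sketch of that all-sites-can-source case, the same stanza would be repeated on each PE, for example on R5 (VRF name and addresses as in this lab; with an ASM core you would typically give each PE its own data MDT group or range so their data MDTs don't overlap):

```
! On each PE that might sit in front of a source (R5 shown)
vrf definition A
 address-family ipv4
  mdt data 22.214.171.124 0.0.0.0 threshold 1
```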
I’ll now ensure R9 is sending larger ICMP multicast packets to force a switchover.
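One simple way to push the stream over the 1 Kb/s threshold is a flood ping from the CE with a larger payload. A sketch, assuming the customer group is 239.1.1.1 (substitute the actual CE group in use):

```
R9#ping 239.1.1.1 size 1400 repeat 1000 timeout 0
```

The 1400-byte repeated pings comfortably exceed the threshold and trigger the switchover.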
data MDT traffic flow
R4 has no receivers, and so has not joined this data MDT. We can check the mroute table on R3 to see that it's only sending traffic for group 188.8.131.52 out its interface toward R5:
R3#sh ip mroute 184.108.40.206 | beg \(
(*, 220.127.116.11), 00:02:40/00:02:46, RP 18.104.22.168, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:40/00:02:46

(22.214.171.124, 126.96.36.199), 00:02:36/00:02:57, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:36/00:02:50
A packet capture on the R3-R4 link confirms there is no traffic destined to 188.8.131.52 there.
How does R5 know what address to join? How did it know that it had to join 184.108.40.206? This is signaled through a UDP control message. Once the multicast stream exceeds the threshold, R2 sends a UDP control message to the default MDT group:
Inside this frame is a TLV that specifies the (S,G) entry plus the data MDT group to be used. We can infer from this that a switchover to the data MDT is only possible for (S,G) entries; (*,G) and bidir PIM groups will always use the default MDT, regardless of bandwidth.
R2 sends this update as a UDP frame (port 3232 on IOS) destined to 220.127.116.11, the ALL-PIM-ROUTERS multicast address. As this is still inside the customer's multicast network, it is encapsulated in the default MDT itself. You can see this in the packet capture above.
Let’s verify this on the three PE routers:
R2#show ip pim vrf A mdt send
MDT-data send list for VRF: A
  (source, group)                     MDT-data group/num   ref_count
  (18.104.22.168, 22.214.171.124)     126.96.36.199        1
R2 is saying that it has a source for the 188.8.131.52 group: any PE that has receivers, please join 184.108.40.206.
R5#show ip pim vrf A mdt receive detail | beg \[
Joined MDT-data [group/mdt number : source]  uptime/expires for VRF: A
 [220.127.116.11 : 0.0.0.0]  00:09:22/00:01:36
  (18.104.22.168, 22.214.171.124), 00:51:36/00:01:36/00:01:36, OIF count: 1, flags: TY
R5 has an interested listener and has joined the new data MDT.
R4 has cached this information, but as it has no interested listeners it has not joined the new data MDT. If we added a new listener connected to R4, it would join the new group. Likewise, R5 will remove itself from the group once it has no more interested listeners. R2 sends an MDT UDP update message once per minute to refresh the other PE routers.
Note that when using SSM in the ISP core, BGP (via the MDT SAFI) is used to signal the above information instead.
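In that SSM case, each PE would advertise its MDT membership over a BGP session carrying the IPv4 MDT address family. A minimal sketch (AS number and neighbor address are illustrative, not from this lab):

```
router bgp 100
 address-family ipv4 mdt
  neighbor 10.0.0.11 activate
 exit-address-family
```

With this in place, remote PEs learn the (PE-loopback, MDT-group) pairs from BGP and can send SSM (S,G) joins directly, so no ASM RP is needed in the core.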
Join me for part three, where we'll go over draft-rosen interop between IOS and Junos.