Multicast over L3VPN – Part 2 of X – draft rosen traffic flow and the data MDT

In part 1, we left off with verifying that multicast traffic was flowing from R9 to R12. Let's take a deep look at how those packets move through the network.

Let's remind ourselves of the network we're working on:

R9 is sending multicast packets to a group which R12 has joined.

default MDT traffic flow

In the CE network, this traffic would be standard multicast. Let’s verify by taking a packet capture. I won’t show the very first and very last hop as that is standard CE multicast.

R1 to R2 link. This is a standard multicast packet going from the CE edge to the PE router.

R2 to R3 link. R2 encapsulates the entire CE multicast packet into a GRE tunnel. This GRE tunnel has a source address of R2's loopback, while the destination address is the default MDT address we've chosen for this customer.
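For reference, the default MDT that triggers this GRE encapsulation is configured per-VRF on each PE. A minimal sketch of what that looks like in IOS, using a hypothetical group address of 232.0.0.1 standing in for whichever default MDT address was chosen in part 1:

vrf definition A
 address-family ipv4
  mdt default 232.0.0.1

Every PE carrying this VRF joins that group, which is why the GRE-encapsulated traffic reaches all of them.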

R3 to R5 link. Continuing through the ISP network, we see that same GRE packet going through as a multicast frame destined to the default MDT group.

R5 to R8 link. R5 will remove the GRE header and forward the original CE multicast packet per standard multicast behaviour.

Problems we can see from the capture above

There is one scalability issue staring us in the face. When R2 encapsulates this packet and sends it to the default MDT group address, the packet will make its way to R4. R4 currently has no interested receivers, but the PE router itself is still part of that particular multicast group. We can see R3 is sending GRE packets off to R4 right now.
R3 to R4 link:

In our test network this isn't causing much of an issue, but imagine that a source connected to R9 is sending 1080p video over multicast and this customer is connected to 100 different PE routers. Even if the stream is only between three offices, it will still be replicated to ALL PE routers involved in the same multicast VRF. To make this a bit more scalable we can instruct the routers to use the data MDT, which I'll expand on in the next section.

Another issue we can see from the captures above is that across the ISP core, these packets are forwarded via multicast. We already went over the fact that the ISP needs to run multicast, but it's important to note that these packets are NOT label switched. All the time spent on your wonderful RSVP-TE-with-FRR MPLS network has no effect on these draft rosen packets.

Data MDT

To get around the scalability issue of the previous MDT (called the default MDT), we can tell the routers to switch to a new group if the traffic rate goes over a certain threshold. This second MDT group has a new address, and only the PE routers that actually have live receivers will join it; those with no receivers will not. The encapsulated CE traffic will then flow only through the new group. Let's configure this and check the traffic flow again.

I'll add the following to the existing config on R2:

vrf definition A
 address-family ipv4
  mdt data threshold 1

If the stream goes above 1 Kb/s, switch the encapsulation of the customer's CE multicast traffic to the data MDT group address. Note that the data MDT config only needs to be configured on the PE router attached to the multicast source site. If all sites have potential sources, all PE routers would need to be configured this way.
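For completeness, the full form of the command takes a group range and wildcard mask ahead of the threshold, and the router then allocates one address from that pool per (S,G) stream that crosses the threshold. A sketch with a hypothetical range of 232.0.1.0/24:

vrf definition A
 address-family ipv4
  mdt data 232.0.1.0 0.0.0.255 threshold 1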

I’ll now ensure R9 is sending larger ICMP multicast packets to force a switchover.
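A quick way to force this is a large, repeated ping from R9 to the group; 239.1.1.1 here is a hypothetical stand-in for whichever customer group R12 joined:

R9#ping 239.1.1.1 size 1400 repeat 1000

A steady stream of 1400-byte ICMP packets comfortably exceeds the 1 Kb/s threshold we configured.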

data MDT traffic flow

R2 has switched over to the new data MDT and is sending encapsulated frames to the new group address.

R4 has no receivers, and so has not joined this data MDT. We can check the mroute table on R3 to confirm that it's only sending the data MDT group out its interface towards R5:

R3#sh ip mroute | beg \(
(*,, 00:02:40/00:02:46, RP, flags: S
  Incoming interface: Null, RPF nbr
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:40/00:02:46

(,, 00:02:36/00:02:57, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:36/00:02:50

A packet capture on the R3 to R4 link shows no data MDT traffic on that link.

How does R5 know what address to join? How did it know which group it had to join? This part is signaled through a UDP control message. Once the multicast stream hits the threshold, R2 sends a UDP control message to the default MDT group:

Inside this frame is a TLV which specifies the S,G entry plus the data MDT group to be used. We can infer from this that a switchover to the data MDT is only possible for S,G entries; *,G and bidir PIM groups will always use the default MDT, regardless of bandwidth.
R2 sends this update in a UDP frame destined to the ALL-PIM-ROUTERS multicast address. As this is still inside the customer's multicast network, it is encapsulated in the default MDT itself. You can see this in the packet capture above.
Let’s verify this on the three PE routers:

R2#show ip pim vrf A mdt send
MDT-data send list for VRF: A
  (source, group)                     MDT-data group/num   ref_count
  (,               1

R2 is saying that it has a source for the group, and any PE with receivers for it should join the data MDT group listed.

R5#show ip pim vrf A mdt receive detail | beg \[
Joined MDT-data [group/mdt number : source]  uptime/expires for VRF: A
 [ :]  00:09:22/00:01:36
  (,, 00:51:36/00:01:36/00:01:36, OIF count: 1, flags: TY

R5 has an interested listener and has joined the new data MDT.

R4 has cached this information, but as it has no interested listeners it has not joined the new data MDT. If we added a new listener connected to R4, it would join the new group. Likewise R5 will remove itself from the group if it has no more interested listeners. R2 will send an MDT UDP update message once per minute to refresh all the other PE routers.

Note that when using SSM in the ISP core, BGP will be used to signal the above information.
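In that case the PEs exchange the MDT group-to-source mappings over a dedicated BGP address-family rather than via joins to a shared tree in the core. A rough IOS sketch, with a hypothetical AS number and neighbor address:

router bgp 100
 address-family ipv4 mdt
  neighbor 10.0.0.5 activate

Combined with ip pim ssm default in the core, each PE can then send (S,G) joins directly towards the other PEs' loopbacks for the MDT groups.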

Join me for part three, where we'll go over draft rosen interop between IOS and Junos.

© 2009-2020 Darren O'Connor All Rights Reserved