
Multicast over L3VPN – Part 3 of X – Junos and IOS interop

Welcome to part three of many. In this post we’ll go over Junos-IOS draft rosen interop. So far we have been doing draft rosen 6, otherwise known as ASM draft rosen. We could also do draft rosen 7, which uses SSM. For now we’ll stick with draft rosen 6.

Same topology as last time. I’ve now added a Junos PE and CE router into the mix.
[Topology diagram: the IOS topology from the previous parts with a Junos PE and CE added]

Junos PE core multicast config

I’m not going to show the basic IGP and MPLS config as that’s already been covered before.

darreno@J1> show configuration protocols pim
rp {
    static {
        address 3.3.3.3;
    }
}
interface ge-0/0/1.0;
interface lo0.0;

Junos VRF PIM config

darreno@J1> show configuration routing-instances
A {
    instance-type vrf;
    interface ge-0/0/2.0;
    interface lo0.1;
    route-distinguisher 100:1;
    vrf-target target:100:1;
    protocols {
        ospf {
            export EXPORT;
            area 0.0.0.0 {
                interface ge-0/0/2.0;
                interface lo0.1;
            }
        }
        pim {
            vpn-group-address 239.10.10.10;
            interface ge-0/0/2.0;
            interface lo0.1;
        }
    }
}

I’ve configured the default MDT group within the VRF config, much like on IOS. For Junos and IOS to interoperate, the VRF needs a loopback unit with the same address as the loopback used for the global BGP peering. I’ve added lo0.1 to the above config, and this is the address configured:

darreno@J1> show configuration interfaces lo0
unit 0 {
    family inet {
        address 77.77.77.77/32;
    }
}
unit 1 {
    family inet {
        address 77.77.77.77/32;
    }
}

I also need to add the inet-mdt family to my MP-BGP config:

darreno@J1> show configuration protocols bgp
group L3MVPN {
    local-address 77.77.77.77;
    family inet-vpn {
        unicast;
        multicast;
    }
    family inet-mdt {
        signaling;
    }
    peer-as 100;
    neighbor 2.2.2.2;
    neighbor 4.4.4.4;
    neighbor 5.5.5.5;
}

The CE router is configured for normal enterprise multicast. R9 is still advertising itself as a BSR and RP candidate.

Verification

JR1 should see the three other PE routers plus the locally attached CE router. Note that Junos uses an mt (multicast tunnel) interface for the MDT:

darreno@J1> show pim neighbors instance A
B = Bidirectional Capable, G = Generation Identifier
H = Hello Option Holdtime, L = Hello Option LAN Prune Delay,
P = Hello Option DR Priority, T = Tracking Bit

Instance: PIM.A
Interface           IP V Mode        Option       Uptime Neighbor addr
ge-0/0/2.0           4 2             HPLGT      00:30:30 10.0.78.8
mt-0/0/0.32768       4 2             HPG        00:15:40 2.2.2.2
mt-0/0/0.32768       4 2             HPG        00:00:27 4.4.4.4
mt-0/0/0.32768       4 2             HPG        00:15:40 5.5.5.5

JR2, our new CE, should see R9 as the RP:

darreno@J2> show pim rps
Instance: PIM.master

Address family INET
RP address      Type        Mode   Holdtime Timeout Groups Group prefixes
9.9.9.9         bootstrap   sparse       25      19      3 224.0.0.0/4

Final confirmation

Let’s join the group 225.5.5.5 on JR2:

darreno@J2> show configuration protocols igmp
interface ge-0/0/2.0 {
    static {
        group 225.5.5.5;
    }
}

NOTE: Junos will not respond to a ping sent to a multicast group by default. In order for your router to respond, you can add the group under protocols sap listen.
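As a minimal sketch of what that could look like on JR2 (I’m assuming the sap listen statement here; the exact config used isn’t shown):

set protocols sap listen 225.5.5.5

With that in place, JR2 itself should answer pings sent to 225.5.5.5, which is what we see below.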

R9#ping 225.5.5.5 repeat 1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 225.5.5.5, timeout is 2 seconds:

Reply to request 0 from 10.0.78.8, 28 ms
Reply to request 0 from 10.0.128.12, 56 ms
Reply to request 0 from 10.0.128.12, 52 ms
Reply to request 0 from 10.0.78.8, 40 ms

Both JR2 and R12 are responding as expected.

End of draft rosen

Draft rosen certainly works. However, as noted in part 1, multicast is required in the ISP core, and this is not a very scalable technology, even when using the data MDT. One of the biggest issues is that for every multicast customer, your PE routers maintain tunnels and PIM adjacencies with each other. With 10 PE routers and 4 multicast customers, that’s 4 full meshes of PIM adjacencies over the default MDTs (45 per customer, 180 in total) running in your core. In parts four and onwards I’ll cover some of the newer ways that we can do MPLS L3VPN multicast.

Multicast over L3VPN – Part 2 of X – draft rosen traffic flow and the data MDT

In part 1, we left off with verifying that multicast traffic was going from R9 to R12. Let’s take a deep look at how those packets move through the network.

Let’s remind ourselves of the network we are working on
[Topology diagram: draft rosen MDT topology from part 1]
R9 is sending a multicast packet off to group 225.5.5.5 which R12 has joined.

default MDT traffic flow

In the CE network, this traffic would be standard multicast. Let’s verify by taking a packet capture. I won’t show the very first and very last hop as that is standard CE multicast.

R1 to R2 link. This is a standard multicast packet going from the CE edge to the PE router.
[Packet capture: R1 to R2]

R2 to R3 link. R2 encapsulates the entire CE multicast packet into a GRE tunnel. This GRE tunnel has a source address of 2.2.2.2, R2’s loopback, while the destination address is 239.10.10.10. This is the default MDT address we’ve chosen for this customer.
[Packet capture: R2 to R3]

R3 to R5 link. Continuing through the ISP network, we have that same GRE packet going through as a multicast frame destined to 239.10.10.10.
[Packet capture: R3 to R5]

R5 to R8 link. R5 will remove the GRE header and forward the original CE multicast packet per standard multicast behaviour.
[Packet capture: R5 to R8]

Problems we can see from the captures above

There is one scalability issue staring us in the face. When R2 encapsulates this packet and sends it to the group address 239.10.10.10, the packet also makes its way to R4. R4 currently has no interested receivers, but the PE router itself is still part of that multicast group. We can see R3 sending GRE packets off to R4 right now.
R3 to R4 link:
[Packet capture: R3 to R4]
In our test network this isn’t creating much of an issue, but imagine that a source connected to R9 is sending 1080p video over multicast and this customer is connected to 100 different PE routers. Even if the stream is only needed between three offices, it will still be replicated to ALL PE routers involved in the same multicast VRF. To make this a bit more scalable we can instruct the routers to use the data MDT, which I’ll expand on in the next section.

Another issue we can see from the captures is that across the ISP core, these packets are forwarded via multicast. We already went over the fact that the ISP needs to run multicast, but it’s important to note that these packets are NOT getting label switched. The time spent on your wonderful RSVP-TE with FRR MPLS network has no effect on these draft rosen packets.

Data MDT

To get around the scalability issue of the previous MDT (called the default MDT), we can tell the routers to switch to a new group if the traffic rate goes over a certain threshold. This second MDT group has a new address, and only the PE routers that actually have live receivers join it; those with no receivers do not. The encapsulated CE traffic then flows only over the new group. Let’s configure this and check traffic flow again.

I’ll add the following to the existing config on R2:

vrf definition A
 address-family ipv4
  mdt data 239.11.11.11 0.0.0.0 threshold 1
  mdt data threshold 1

If the stream goes above 1 kbps, the router switches the encapsulation of the customer CE multicast traffic to the GRE group address 239.11.11.11. Note that the data MDT only needs to be configured on the PE router attached to the multicast source site. If all sites have potential sources, all PE routers would need to be configured this way.

I’ll now ensure R9 is sending larger ICMP multicast packets to force a switchover.

data MDT traffic flow

R2 has switched over to the new data MDT and is sending encapsulated frames to 239.11.11.11:
[Packet capture: R2 sending GRE traffic to 239.11.11.11]

R4 has no receivers, and so has not joined this data MDT. We can check the mroute table on R3 to confirm that it’s only sending traffic for group 239.11.11.11 out of its interface towards R5:

R3#sh ip mroute 239.11.11.11 | beg \(
(*, 239.11.11.11), 00:02:40/00:02:46, RP 3.3.3.3, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:40/00:02:46

(2.2.2.2, 239.11.11.11), 00:02:36/00:02:57, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet3/0, Forward/Sparse, 00:02:36/00:02:50

A packet capture on the R3-R4 link shows no traffic destined to 239.11.11.11 on that link.

How does R5 know which address to join? How did it know it had to join 239.11.11.11? This is signaled through a UDP control message. Once the multicast stream hits the threshold, R2 sends a UDP control message to the default MDT group:
[Packet capture: MDT join message sent to the default MDT]
Inside this frame is a TLV which specifies the S,G entry plus the data MDT group to be used. We can infer from this that a switchover to the data MDT is only possible for S,G entries. *,G and bidir PIM groups will always use the default MDT, regardless of bandwidth.
R2 sends this update in a UDP frame destined to 224.0.0.13, the all-PIM-routers multicast address. As this is still inside the customer’s multicast network, it is encapsulated in the default MDT itself. You can see this in the packet capture above.
Let’s verify this on the three PE routers:

R2#show ip pim vrf A mdt send
MDT-data send list for VRF: A
  (source, group)                     MDT-data group/num   ref_count
  (9.9.9.9, 225.5.5.5)                239.11.11.11         1

R2 is saying that it has a source for the 225.5.5.5 group. Any PE that has receivers should join 239.11.11.11.

R5#show ip pim vrf A mdt receive detail | beg \[
Joined MDT-data [group/mdt number : source]  uptime/expires for VRF: A
 [239.11.11.11 : 0.0.0.0]  00:09:22/00:01:36
  (9.9.9.9, 225.5.5.5), 00:51:36/00:01:36/00:01:36, OIF count: 1, flags: TY

R5 has an interested listener and has joined the new data MDT.

R4 has cached this information, but as it has no interested listeners it has not joined the new data MDT. If we added a new listener connected to R4, it would join the new group. Likewise R5 will remove itself from the group if it has no more interested listeners. R2 will send an MDT UDP update message once per minute to refresh all the other PE routers.

Note that when using SSM in the ISP core, BGP will be used to signal the above information.

Join me for part three, where we will go over draft rosen interop between IOS and Junos.

Multicast over L3VPN – Part 1 of X – draft rosen concept and configuration

mVPN, or Multicast VPN, is a pretty big subject. I’d like to go over a lot of details and so this will become a series of posts. How many I don’t know yet, which is why I’m using X in the title.

I’ll try to start with the more basic types of mVPN and then move on to the more complicated stuff. I’m most certainly not going to go over the basics of multicast or L3VPN themselves, otherwise this series would stretch to 20 posts.

Let’s take the following topology into consideration:
[Topology diagram: ISP core with PE routers R2, R4, R5, P router R3, and customer CE routers]
R2, R3, R4, and R5 are the ISP routers providing a L3VPN service to Mr Customer. R2, R4, and R5 are the PE routers while R3 is a P router. The core network is currently running OSPF and LDP. The PE routers are exchanging VPNv4 routes via MP-BGP.

Now Mr. Customer would like to start running multicast within their network. These multicast packets cannot run natively over the ISP core, as the source address could be a private address in the customer’s VPN, so the P routers would never know the source. Also, if we did try to run multicast natively, no two customers would be able to use the same group.

One solution is for the customer to run GRE tunnels between his WAN routers. i.e. he would manually set up a full mesh of tunnels from R1 to R6, R1 to R7, and R1 to R8. He’ll also need to ensure all his other WAN edge routers have tunnels to all others. This works, but it’s a lot of work and upkeep for the customer to do.
[Diagram: full mesh of customer-managed GRE tunnels between CE routers]

Why can’t the ISP sell this as an added service?

The first ISP method of doing this is called draft rosen. This draft has now expanded into RFC 6037, but is still called draft rosen by many.

Draft rosen, in a nutshell, makes the PE routers do the hard work. Instead of manually configuring GRE tunnels from CE to CE, the PE routers automatically set up GRE tunnels. In order to set up a GRE tunnel, you need a source and a destination address. In draft rosen the source address is the PE’s loopback address, while the destination address is a multicast address dedicated to the customer. Hang on, a multicast destination? Yes, that’s correct. If we look at customer 1 for example, we can configure the multicast group address 239.10.10.10 and dedicate it to them. Each PE router will attempt to form GRE tunnels with the other PE routers, and each PE router also joins the group 239.10.10.10 in the ISP core. This way they all send GRE-tunnelled traffic sourced from themselves, which ends up at all the other PE routers thanks to the destination being a multicast address. These tunnels create what’s called the MDT, or multicast distribution tree.
[Diagram: PE routers building the multicast distribution tree]

This does mean a few things though. For one, we need to enable multicast in the ISP core network so that the PE routers can create the MDT for their GRE tunnels. Customer multicast traffic is then encapsulated in those GRE packets and sent off to the other PE routers in the same mVPN.

Configuration

Let’s take a look at configuring this.
R3:

ip multicast-routing
!
interface Loopback0
 ip address 3.3.3.3 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
interface GigabitEthernet1/0
 ip address 10.0.23.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface GigabitEthernet2/0
 ip address 10.0.34.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface GigabitEthernet3/0
 ip address 10.0.35.3 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
ip pim rp-address 3.3.3.3

To keep things simple, I’m using a static RP address for the core. In the real world this could be static, auto-RP, or BSR. Any of the three could also use anycast RP and MSDP if you so wished.

The MDT group will be configured on the PE routers. Each VRF will need to have multicast enabled, and then an MDT address defined for that VRF. PIM will be enabled on the CE-facing interface, and it also needs to be enabled on the loopback interface the PE is using for its VPNv4 peering. Finally, PIM needs to be enabled on the core-facing interface itself. This is R2’s relevant config:

vrf definition A
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 !
 address-family ipv4
  mdt default 239.10.10.10
 exit-address-family
!
ip multicast-routing
ip multicast-routing vrf A
!
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
 ip pim sparse-mode
 ip ospf 1 area 0
!
interface GigabitEthernet1/0
 ip address 10.0.23.2 255.255.255.0
 ip pim sparse-mode
 ip ospf 1 area 0
 mpls ip
!
interface FastEthernet2/0
 vrf forwarding A
 ip address 10.0.12.2 255.255.255.0
 ip pim sparse-mode
 ip ospf 2 area 0
!
ip pim rp-address 3.3.3.3

R4 and R5 have a similar config.

At this point, multicast has not yet been enabled in the customer network. What we should see in the core is that all three PE routers are sending traffic to 239.10.10.10. This is the MDT being set up. All three will join the *,239.10.10.10 group. Once the PE routers all receive traffic from the other PEs over this group, they will each join the S,G groups directly. We can verify this on R3:

R3#show ip mroute | beg \(
(*, 239.10.10.10), 00:12:46/00:02:45, RP 3.3.3.3, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:30
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

(5.5.5.5, 239.10.10.10), 00:12:46/00:03:16, flags: T
  Incoming interface: GigabitEthernet3/0, RPF nbr 10.0.35.5
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:37

(4.4.4.4, 239.10.10.10), 00:12:46/00:03:14, flags: T
  Incoming interface: GigabitEthernet2/0, RPF nbr 10.0.34.4
  Outgoing interface list:
    GigabitEthernet1/0, Forward/Sparse, 00:09:12/00:02:45
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

(2.2.2.2, 239.10.10.10), 00:12:46/00:03:01, flags: T
  Incoming interface: GigabitEthernet1/0, RPF nbr 10.0.23.2
  Outgoing interface list:
    GigabitEthernet2/0, Forward/Sparse, 00:12:46/00:02:34
    GigabitEthernet3/0, Forward/Sparse, 00:12:46/00:02:34

The OIL for *,239.10.10.10 points out to all three PE routers. All three are also sources, so you see three S,G entries, each with a source of the respective PE’s loopback. This part is all automatic, so if I add another PE router, I just need to add it to the MDT group and enable PIM, and all the other PE routers will set up new GRE tunnels automatically.

Once the GRE tunnels are set up, the PE routers form a PIM adjacency automatically over the multipoint tunnel inside the customer’s VRF. Let’s take a look at R2’s PIM neighbours:

R2#sh ip pim vrf A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.0.12.1         FastEthernet2/0          00:03:17/00:01:23 v2    1 / S P G
4.4.4.4           Tunnel2                  00:01:20/00:01:23 v2    1 / S P G
5.5.5.5           Tunnel2                  00:01:49/00:00:53 v2    1 / DR S P G

fa2/0 is the CE-facing interface. Tunnel2 is the multipoint GRE interface going to R4 and R5. Over the tunnel interface R2 has two adjacencies. R5 is elected as the DR, thanks to having the highest IP address.

As far as the customer is concerned, they can just run a standard multicast set-up. It doesn’t matter what version they are running either. I’ll make R9 announce itself as the BSR and RP, and that should filter through to all the other customer sites.

R9#sh run | inc candidate
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0 interval 10

Let’s go to R12 in another site to see if we get the RP mapping information:

R12#sh ip pim rp map
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 9.9.9.9 (?), v2
    Info source: 9.9.9.9 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:08:15, expires: 00:02:17

At this point I should be able to join a group on R12 and source traffic from R9. Let’s test this:

R12
interface FastEthernet1/0
 ip igmp join-group 225.5.5.5
R9#ping 225.5.5.5 repeat 2
Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 225.5.5.5, timeout is 2 seconds:

Reply to request 0 from 10.0.128.12, 88 ms
Reply to request 1 from 10.0.128.12, 132 ms

The CE network shows regular multicast working. Let’s check the PE routers’ mroute tables:

R5#sh ip mroute vrf A 225.5.5.5 | beg \(
(*, 225.5.5.5), 00:11:51/00:03:24, RP 9.9.9.9, flags: S
  Incoming interface: Tunnel1, RPF nbr 2.2.2.2
  Outgoing interface list:
    FastEthernet2/0, Forward/Sparse, 00:11:51/00:03:24

(9.9.9.9, 225.5.5.5), 00:02:00/00:01:29, flags: T
  Incoming interface: Tunnel1, RPF nbr 2.2.2.2
  Outgoing interface list:
    FastEthernet2/0, Forward/Sparse, 00:02:00/00:03:28

R5 shows that the incoming RPF interface is Tunnel1, which is the MDT GRE tunnel.

R2#sh ip mroute vrf A 225.5.5.5 | beg \(
(*, 225.5.5.5), 00:11:16/00:03:03, RP 9.9.9.9, flags: S
  Incoming interface: FastEthernet2/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:11:16/00:03:03

(9.9.9.9, 225.5.5.5), 00:02:23/00:01:06, flags: T
  Incoming interface: FastEthernet2/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:02:23/00:03:03

R2 shows Tunnel2 is in the OIL which again is the MDT GRE tunnel.

So our initial draft rosen configuration is working as expected. Join me for part two, where I’ll go a lot deeper into how those multicast packets get from A to B.

CCIE R&S Multicasting notes

Sparse mode:

  • When a user joins, the connected router will have a *,G entry, as it knows the group joined by the user, but not the source of that feed yet. This is the SHARED tree.
  • The RP WILL have an S,G entry. The last-hop router stays on the *,G shared tree at first, then switches over to the S,G source tree.
  • While on the shared tree, the RPF check will be towards the RP, NOT the source! This changes to the source once the router joins the source tree
  • ip pim spt-threshold controls when the router switches from the *,G to the S,G tree. A value of infinity means it’ll NEVER switch to the S,G tree (see the sketch after this list)
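
A minimal sketch of the two common forms (the ACL number and rate are just examples):

! Never switch to the source tree for any group
ip pim spt-threshold infinity
!
! Or: only switch to the S,G tree once groups matched by ACL 10 exceed 4 kbps
access-list 10 permit 239.1.1.0 0.0.0.255
ip pim spt-threshold 4 group-list 10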

Dense mode:

  • In dense mode you’ll always see S,G entries as the source has been flooded through the network. i.e. all PIM routers will already have the feed, and hence will know the source

Sparse-Dense mode:

  • Sparse-dense mode is mainly like sparse mode, but any group that cannot be registered with the RP becomes dense mode
  • This is good for Auto-RP, but it also means that mis-configurations on the RP could cause lots of groups to be dense mode instead of sparse mode

Rendezvous point:

All modes:

  • You need to run PIM on the interface that you are advertising as the RP, i.e. if you’re running it on a loopback interface, run PIM on that loopback!
  • Auto assignments OVERRIDE static assignments! You can use ip pim rp-address (rp_address) (acl) override to override this default behaviour

Static RP:

  • Easiest to configure, pretty much like a static route
  • ip pim rp-address (rp_address) (acl)
  • (acl) determines what groups the router will be the RP for
  • The RP router ALSO needs to have the above configured, i.e. the RP does not automatically know it is the RP; you need to tell the router that it is! (See the sketch after this list.)
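
A minimal sketch, using made-up addresses, configured on the RP itself and on every other PIM router:

! 150.1.10.10 is the RP, but only for groups in 239.0.0.0/8
access-list 5 permit 239.0.0.0 0.255.255.255
ip pim rp-address 150.1.10.10 5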

Auto-RP (Cisco proprietary):

  • Auto-RP is made up of 1 or more routers announcing themselves as candidate RPs using the command: ip pim send-rp-announce (acl) as well as a mapping agent using the command: ip pim send-rp-discovery
  • Only the mapping agent listens to the RP announcements
  • The MA then determines which RP to use for which groups and advertises that to all other PIM routers
  • The auto-rp process uses the 224.0.1.39 and 224.0.1.40 groups
  • In sparse-dense mode the 2 groups above are automatically in dense mode
  • If running sparse mode only, you need to configure the global command ip pim autorp listener, which will ensure that ONLY 224.0.1.39 and 224.0.1.40 are flooded in dense mode
  • If the MA receives 2 announcements from candidate RPs for the same groups, the MA will choose the one with the highest IP address
  • The RP and MA can be the same device if needs be
  • Auto-RP IS supported by a number of non-Cisco devices. Confirm with the proctor if the question is not clear
  • When specifying an ACL with an RP announcement, the deny statements will create negative entries for groups. However, a deny any at the end of the ACL will effectively make ALL groups negative, and hence dense mode, regardless of what’s configured. As an example:

ip pim send-rp-announce Loopback0 scope 15 group-list 12 interval 1
access-list 12 deny 224.110.110.110
access-list 12 permit 224.0.0.0 15.255.255.255
access-list 12 deny any

If you check the mapping agent you see this:

Group(s) (-)224.0.0.0/4
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.10.10 (?), elected via Auto-RP

Once the deny statement at the end of the ACL is removed, you see this:

Group(s) 224.0.0.0/4
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.10.10 (?), elected via Auto-RP

Bootstrap router – BSR (Open standard)

  • BSR uses the group 224.0.0.13, but it does NOT need to be in dense mode, unlike auto-rp
  • 224.0.0.13 is the link-local ALL PIM ROUTERS address as well
  • The bootstrap router is the equivalent of the auto-rp mapping agent, configured using: ip pim bsr-candidate (int) (hash-mask-length) (pri) – see the sketch after this list
  • The candidate RP plays the same role as in auto-rp, configured using: ip pim rp-candidate (int) (group-list acl) (interval) (pri)
  • The hash field will allow you to load balance groups over your RPs – A hash value of 31 ensures all even groups are with one RP and odds are with another, if you have 2 RPs
  • You can see which group is mapped to which RP with show ip pim rp-hash (group)
  • BSR has priority fields so you can choose which router does what without having to rely on the high IP taking over
  • If priority is the same, then highest IP will be chosen like auto-rp
  • ip pim bsr-border will allow you to run PIM on an edge interface, but not to share bsr messages
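
As a rough sketch with made-up values, a candidate BSR and candidate RP could look like this:

! Candidate BSR: hash mask length 31, priority 100
ip pim bsr-candidate Loopback0 31 100
!
! Candidate RP for the groups in ACL 20, with priority 10
access-list 20 permit 239.0.0.0 0.255.255.255
ip pim rp-candidate Loopback0 group-list 20 priority 10

On any router, show ip pim rp-hash 239.1.1.1 would then show which RP that particular group hashes to.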

Multicast on Frame-Relay:

  • By default, multicast traffic is process switched over frame-relay. Multicast frames also have pak_priority set, so they have the highest priority
  • If you have a hub router connected to 2 spokes over the same interface, you’ll have problems with RPF checks as traffic goes in and out the same interface. You can also have a problem where one spoke says it no longer wants to receive a feed, which would make the hub stop sending to ALL spokes.
  • ip pim nbma-mode fixes both of the above issues (see the sketch after this list). When configuring nbma mode on sparse-dense and dense mode interfaces you’ll get a warning, but it’ll still work.
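
A minimal sketch for the hub’s frame-relay interface (interface name assumed):

interface Serial0/0
 ip pim sparse-mode
 ! Track each spoke individually instead of treating the NBMA cloud as one LAN
 ip pim nbma-mode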

Multicast Boundary:

  • int# ip multicast boundary (acl) will stop multicast packets getting through an interface (see the sketch after this list)
  • Any address ALLOWED through the acl is ALLOWED through the interface
  • The boundary is bidirectional by default. If you specify the IN option it will prevent multicast control traffic coming into the interface. If you specify the OUT option it will prevent the interface from being added to the OIL
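
A quick sketch with a made-up ACL; only groups permitted by ACL 10 can cross the interface:

! Permit only 239.1.1.0/24 across this boundary
access-list 10 permit 239.1.1.0 0.0.0.255
!
interface FastEthernet0/0
 ip multicast boundary 10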

Stub Multicast:

  • Prevent routers from fully participating in PIM. Consider the diagram:

[Diagram: R1 connected to stub router R2, with receivers on R2's Fa0/0]

  • R2 has users on the Fa0/0 interface that want to join the multicast feed. However, you do not want R2 to fully run PIM. To do this, you configure R1 to prevent R2 from becoming a PIM neighbour, and then configure R2 to forward IGMP join messages to R1. Note that dense mode is configured on R2, as it needs to flood multicast traffic over to fa0/0 when it gets an IGMP join.

R1:
access-list 1 deny 192.168.1.2
!
int s0/0
ip pim sparse-mode
ip pim neighbor-filter 1

R2:
int s0/0
ip pim dense-mode
!
int fa0/0
ip pim dense-mode
ip igmp helper-address 192.168.1.1

Multicast/Broadcast conversion:

  • You can convert multicast to broadcast, and from broadcast to multicast. You can also do this multiple times backwards and forwards if you need to. Consider the following example:

[Diagram: server 172.16.1.25 behind R1, with R3 broadcasting onto the 10.50.50.0/24 network]

  • We have a server with the IP 172.16.1.25 that is sending a UDP broadcast to port 5000. For whatever reason we need that same frame to be broadcast onto the 10.50.50.0/24 network.
  • To convert from broadcast to multicast you configure like so:

R1:
access-list 100 permit udp host 172.16.1.25 any eq 5000
!
ip forward-protocol udp 5000
!
int fa0/1
ip multicast helper-map broadcast 224.10.10.10 100

  • Then back from multicast to broadcast like so:

R3:
ip forward-protocol udp 5000
!
access-list 100 permit udp host 172.16.1.25 any eq 5000
!
int fa0/0
ip multicast helper-map 224.10.10.10 10.50.50.255 100
!
int fa0/1
ip directed-broadcast

MSDP:

  • MSDP is used for inter-domain multicasting, as well as to allow RPs in a single AS to share source information when running anycast RP. All you need to configure on the RPs is the following (a fuller anycast RP sketch follows):

ip msdp peer (peer_unique_address) connect-source (local_unique_interface)

ip msdp originator-id (unique_address_interface)
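
As a rough anycast RP sketch (all addresses made up): both RPs share the anycast address 10.255.255.255 on a loopback, every router points at that address as the RP, and the two RPs peer with MSDP between their unique loopbacks:

! RP1 – its unique Loopback0 is 1.1.1.1; RP2 mirrors this config, peering back to 1.1.1.1
interface Loopback1
 ip address 10.255.255.255 255.255.255.255
 ip pim sparse-mode
!
ip pim rp-address 10.255.255.255
!
ip msdp peer 2.2.2.2 connect-source Loopback0
ip msdp originator-id Loopback0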

Bidirectional PIM:

  • You need to configure ip pim bidir-enable globally on ALL PIM routers. Most RP commands will then have a bidir keyword added to the end (see the sketch after this list).
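
A minimal sketch with a made-up RP address:

! Enable bidir PIM on every PIM router
ip pim bidir-enable
!
! Make 150.1.3.3 the RP and flag it as bidir
ip pim rp-address 150.1.3.3 bidir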

IPv6 Multicast:

  • Sparse mode only
  • Enabled using ipv6 multicast-routing
  • When enabled, PIM runs automatically on all IPv6 interfaces. You need to run no ipv6 pim on an interface to remove it
  • Multicast Listener Discovery (MLD), part of ICMPv6, replaces IGMP
  • MLDv1 is equivalent to IGMPv2. Same timers and so on
  • MLDv2 is equivalent to IGMPv3, used for SSM
  • BSR and static RP only. No auto-rp. You can also use embedded RP, which encodes the RP address in the group address
  • show ipv6 pim range-list will show you the RP mapping
  • ipv6 mld join-group will join a group on an interface
  • IPv6 mroute is created using ipv6 route (address/prefix) (next_hop) multicast, NOT ipv6 mroute
  • Pretty much everything else is the same as IPv4 (a minimal sketch follows this list)
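
A minimal sketch pulling a few of these together (addresses and group are made up):

ipv6 multicast-routing
!
! Static RP for all groups
ipv6 pim rp-address 2001:DB8::1
!
interface FastEthernet0/0
 ipv6 address 2001:DB8:12::1/64
 ! Statically join a group on this interface
 ipv6 mld join-group FF05::1
!
! Static multicast RPF route – note it's ipv6 route ... multicast, not ipv6 mroute
ipv6 route 2001:DB8:99::/64 2001:DB8:12::2 multicast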

Miscellaneous:

  • PIM assert will take the Administrative Distance into account first, and then metric if that AD matches. So in order to force a router to be the PIM assert router, you may need to adjust the AD
  • ip pim accept-rp (rp_address) (acl) will ensure the RP only accepts *,G joins for groups defined in the ACL. In theory you only need to put this on the RP, but it’s more efficient to put it on all routers so they drop the requests before they even get to the RP
  • int#ip igmp access-group (acl) is used to allow only joins to certain groups through an interface. This interface points towards the receivers, as the control packet is IGMP
  • int#ip igmp limit is used to limit the amount of IGMP state on an interface. Can also be configured globally
  • When configuring source specific multicast, you are required to configure ip pim ssm default or ip pim ssm range (acl) on all PIM routers to ensure they do not create *,G entries. SSM does not actually need an RP, as receivers do not connect to shared trees, only source trees.
  • On an Ethernet segment a DR will be chosen between the PIM neighbours. By default, the priority is 1. If the priority is the same, the highest IP wins. This can be changed with the int#ip pim dr-priority command. Note that not all switches support this command!
  • If your IGP is load-balancing over multiple paths, you can load-balance multicast as well with the ip multicast multipath command. This is essential so that your RPF checks don’t fail. (A combined sketch of several of these commands follows this list.)
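
A rough sketch combining a few of these (ACL numbers, limits, and addresses are all made up):

! Only accept *,G joins for groups in ACL 30 when 150.1.10.10 is the RP
access-list 30 permit 239.0.0.0 0.255.255.255
ip pim accept-rp 150.1.10.10 30
!
! Use the default 232.0.0.0/8 SSM range
ip pim ssm default
!
! Load-balance multicast over equal-cost paths so RPF checks don't fail
ip multicast multipath
!
! Receiver-facing interface: restrict joins, cap IGMP state, and win the DR election
access-list 40 permit 239.1.1.0 0.0.0.255
interface FastEthernet0/0
 ip igmp access-group 40
 ip igmp limit 100
 ip pim dr-priority 200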

BSCI Labs – Multicasting

Multicasting at first can seem a bit difficult. However once you get your head around it, you’ll see just how powerful it really is. This lab uses the topology in this post: http://mellowd.co.uk/ccie/?p=66 – Feel free to adapt it to yours.

Multicasting lab 1:

  1. Just one lab this time. Configure the network as shown, creating a PIM sparse-mode network
  2. Ensure Router1 is the RP (Rendezvous Point)
  3. Now configure the network for auto-rp, ensuring that routers 2 and 3 have an election to be the RP
  4. Create a multicast group 239.1.1.10 and add Router2 and Router3 to the group
  5. Ensure you can ping both routers using the multicast address

[Lab topology diagram]