Category Archives: Juniper

My book – MPLS for Enterprise Engineers is now available from multiple channels

I put together a beginner’s MPLS book for Juniper. I’ve noticed when interviewing candidates that they often have a good knowledge of routing protocols but are lacking in MPLS. This is to be expected unless they’ve worked at an ISP. The book is targeted at those users.

J-net

https://www.juniper.net/us/en/community/junos/training-certification/day-one/networking-technologies-series/mpls-enterprise-engineers/

Amazon

http://www.amazon.com/dp/B00IU1KCJ0
http://www.amazon.co.uk/dp/B00IU1KCJ0

iTunes

https://itunes.apple.com/us/book/day-one-mpls-for-enterprise/id836201741?mt=11

Vervante (Print version)

http://store.vervante.com/c/v/V4081705490.html

Remote Triggered Black Hole Filtering and Flowspec

RTBH is a mature technology widely used to mitigate the effects of a DDoS attack against one of your customers. While it works well, it’s a bit of a sledgehammer. Flowspec is a newer technology that gives you much more control over what is blocked, and as such it’s a lot more powerful.

I’ll be using the following diagram for this post:
[Diagram: RTBH and Flowspec topology]
P1 and P2 are edge routers peering with transit peers. R3 is a route-reflector peered with both P1 and P2. C1 is a customer attached to P3, originating its own address space (172.16.0.0/16).

RTBH

RTBH works on the concept of black-holing traffic towards an IP host/subnet. It does this by advertising a statically configured route whose next-hop has been pre-defined to point to null0/discard.

As an example, let’s assume a host with the address 172.16.200.10 is under attack. R3, the RR, is the route-injector, but it could be any of the internal iBGP routers. There is quite a bit of upfront config with RTBH, but most of it only needs to be done once.

On all BGP routers in the core you need a route that will be discarded:

darreno@P1> show configuration routing-options
static {
    route 192.0.2.1/32 discard;
}

On all routers I want routes learned with a certain community to have their next-hop pointing to the discard route:

darreno@P1> show configuration policy-options
policy-statement BLACK-HOLE-FILTER {
    term 1 {
        from community BLACK_HOLE;
        then {
            next-hop 192.0.2.1;
        }
    }
}
community BLACK_HOLE members 65401:666;

I’m going to apply this as an inbound filter on my iBGP sessions:

darreno@P1> show configuration protocols bgp group ISP1
import BLACK-HOLE-FILTER;

Basically we are saying: any route learned via BGP carrying the above community gets its next-hop set to the discard route. On the route-injector we set up an export policy matching static routes with a tag of 666; any matching route will have the black-hole community added. As this will be a very specific route we need to ensure it doesn’t leave the confines of our AS, so we also add the no-export community:

darreno@P3_RR> show configuration policy-options
policy-statement RTBH {
    term BLACK-HOLE {
        from {
            protocol static;
            tag 666;
        }
        then {
            local-preference 5000;
            community add no-export;
            community add BLACK_HOLE;
            next-hop 192.0.2.1;
            accept;
        }
    }
}
community BLACK_HOLE members 65401:666;
community no-export members no-export;

The above policy is then applied outbound on the iBGP session on the route-injector:

darreno@P3_RR> show configuration protocols bgp group ISP1
local-address 192.168.0.3;
export RTBH;

RTBH testing and verification

From a router out on the internet I can currently ping the affected host:

darreno@INTERNET> ping 172.16.200.10 interface lo0.0 rapid
PING 172.16.200.10 (172.16.200.10): 56 data bytes
!!!!!
--- 172.16.200.10 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 7.836/10.747/14.091/2.430 ms

I’ll now implement a black hole static on the route-injector:

set routing-options static route 172.16.200.10/32 next-hop 192.0.2.1 resolve tag 666

[edit]
darreno@P3_RR# commit and-quit
commit complete
Exiting configuration mode

If we ping from the internet again:

darreno@INTERNET> ping 172.16.200.10 interface lo0.0 rapid
PING 172.16.200.10 (172.16.200.10): 56 data bytes
.....
--- 172.16.200.10 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss

All packets lost. We can ensure only this /32 is affected by pinging another host in the subnet:

darreno@INTERNET> ping 172.16.200.50 interface lo0.0 rapid
PING 172.16.200.50 (172.16.200.50): 56 data bytes
!!!!!
--- 172.16.200.50 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 5.582/7.116/10.018/1.658 ms

Looking at the edge routers we see the learned /32, and the next-hop of discard:

darreno@P1> show route 172.16.200.10 extensive

inet.0: 17 destinations, 17 routes (17 active, 0 holddown, 0 hidden)
172.16.200.10/32 (1 entry, 1 announced)
TSI:
KRT in-kernel 172.16.200.10/32 -> {indirect(262143)}
        *BGP    Preference: 170/-5001
                Next hop type: Indirect
                Address: 0x97106d0
                Next-hop reference count: 3
                Source: 192.168.0.3
                Next hop type: Discard
                Protocol next hop: 192.0.2.1
                Indirect next hop: 94781d0 262143
                State: 
                Local AS: 65401 Peer AS: 65401
                Age: 2:15       Metric2: 0
                Task: BGP_65401.192.168.0.3+64669
                Announcement bits (2): 0-KRT 4-Resolve tree 1
                AS path: I
                Communities: 65401:666 no-export
                Accepted
                Localpref: 5000
                Router ID: 192.168.0.3
                Indirect next hops: 1
                        Protocol next hop: 192.0.2.1 Metric: 0
                        Indirect next hop: 94781d0 262143
                        Indirect path forwarding next hops: 0
                                Next hop type: Discard
                        192.0.2.1/32 Originating RIB: inet.0
                          Metric: 0                       Node path count: 1
                          Forwarding nexthops: 0
                                Next hop type: Discard

The /32 route has been learned through BGP from the route-injector. The correct communities are set, and the next-hop resolves to a discard route, so any packets going to this host are now discarded.

Adding and removing hosts is as simple as adding or removing routes on the route-injector.
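
For example, once the attack subsides, withdrawing the black hole is a single delete on the route-injector:

[edit]
darreno@P3_RR# delete routing-options static route 172.16.200.10/32
darreno@P3_RR# commit and-quit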

The above works extremely well, but until the attack is finished and the route removed, that IP address is unreachable over the internet. Any traffic at all going towards it will be black-holed.

Flowspec

There is a more subtle way of doing the above. RFC5575 defines a new filtering mechanism called flowspec. Oddly, half the RFC authors are Cisco employees, yet as of today I can only find support for flowspec in Junos.

Essentially flowspec allows routers to advertise firewall filters to your edge BGP devices directly through BGP. Because this is a filter, it allows you to use all the actions of a regular firewall filter. Do you want to police DNS traffic only in a DNS amplification attack? Simple. Flowspec gives you the flexibility to do so.
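
As a taste of that flexibility, here is a minimal sketch of a flow route that would police, rather than discard, DNS replies aimed at a victim host. The route name and the 10m rate are illustrative values, not taken from this lab:

routing-options {
    flow {
        route POLICE-DNS-172.16.200.10 {
            match {
                destination 172.16.200.10/32;
                protocol udp;
                source-port 53;
            }
            then rate-limit 10m;
        }
    }
}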

The first part of enabling flowspec is to configure BGP to carry the NLRI. This will be done on all your internal routers:

darreno@P1> configure
Entering configuration mode

[edit]
darreno@P1# set protocols bgp group ISP1 family inet flow

[edit]
darreno@P1# commit and-quit
commit complete
Exiting configuration mode

Now let’s suppose 172.16.200.10 is under some kind of ICMP attack. I want to block all ICMP traffic to this host from the edge routers, but still allow other traffic through to the host:

root@R3_RR> show configuration routing-options flow
route BLOCK-ICMP-172.16.200.10 {
    match {
        destination 172.16.200.10/32;
        protocol icmp;
    }
    then discard;
}
term-order standard;

This router will now advertise this filter to all other iBGP peers.

Flowspec testing and verification

We can test this from the internet by trying to ping this address, and then trying to FTP. Ping should fail, while FTP should be let through:

root@INTERNET> ping 172.16.200.10 source 192.168.50.1 rapid
PING 172.16.200.10 (172.16.200.10): 56 data bytes
.....
--- 172.16.200.10 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
root@INTERNET> ftp 172.16.200.10 source 192.168.50.1
Connected to 172.16.200.10.
220 C1 FTP server (Version 6.00LS) ready.
Name (172.16.200.10:root): darreno
331 Password required for darreno.
Password:
230 User darreno logged in.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp>

This works exactly as expected.

You can verify that the flow NLRI is coming in and being applied as a filter on the edge routers:

root@P2> show route table inetflow.0

inetflow.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

172.16.200.10,*,proto=1/term:1
                   *[BGP/170] 00:12:57, localpref 100, from 192.168.0.3
                      AS path: I, validation-state: unverified
                      Fictitious

root@P2> show firewall

Filter: __default_bpdu_filter__

Filter: __flowspec_default_inet__
Counters:
Name                                                Bytes              Packets
172.16.200.10,*,proto=1                              2352                   28

172.16.200.10,* means destination address 172.16.200.10/32 with any source; proto=1 is ICMP.

Conclusion

  • Flowspec gives you a lot more options when it comes to filtering out DDOS attacks. Instead of isolating an IP you are able to filter specific traffic only. These firewall filters are then advertised via BGP to all your iBGP speakers.
  • As this is a firewall filter, you don’t have to specify a discard action. You can just as easily set a policing action.
  • Currently Junos supports flowspec for the inet and inet-vpn families only, so there is no IPv6 support yet.
  • Most other vendors still don’t have working implementations

Restricting users to only view parts of the SNMP tree – Junos

This is similar to an earlier post where I described how to do the same in IOS: http://mellowd.co.uk/ccie/?p=2332

I recently had a project where I had to give certain customers full read access to a subinterface on a Juniper SRX. I wanted them to see the system via SNMP, but only see their subinterface and its stats.

The first part of this is getting the SNMP ifindex value. This you can get very easily:

darrenolocal> show interfaces ge-0/0/0.0 | match SNMP
  Logical interface ge-0/0/0.0 (Index 73) (SNMP ifIndex 656)

For this subinterface I need to reference index 656.
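
If you have several customer subinterfaces to map, a quick alternative is to walk the interface table locally on the box rather than checking each interface one by one:

darrenolocal> show snmp mib walk ifDescr | match ge-0/0/0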

Now we create the view:

darrenolocal> show configuration snmp
view CUSTOMER1 {
    oid ifName.656;
    oid ifDescr.656;
    oid ifInErrors.656;
    oid ifOutErrors.656;
    oid ifOperStatus.656;
    oid ifInOctets.656;
    oid ifOutOctets.656;
    oid sysDescr.0;
    oid sysUpTime.0;
    oid sysContact.0;
    oid sysName.0;
    oid sysLocation.0;
    oid ifNumber.656;
    oid ifHCInOctets.656;
    oid ifHCOutOctets.656;
    oid ifIndex.656;
    oid ifNumber.*;
}

Here I’ve given them a number of OIDs scoped to that interface index value. I’ve also allowed them to see the system name, uptime, and various other bits of system information.

I now bind this view to a community and allow only the customer to view it:

darrenolocal> show configuration snmp
community CUSTOMER1 {
    view CUSTOMER1;
    clients {
        192.168.100.1/32;
    }
}

As a test, let’s do an snmpwalk from the monitoring station using this community now:

C:\snmpwalk>snmpwalk -v 1 -c CUSTOMER1 192.168.31.252
iso.3.6.1.2.1.1.1.0 = STRING: "Juniper Networks, Inc. srx210h internet router, kernel JUNOS 12.1X45-D10 #0: 2013-07
-04 06:05:04 UTC     [email protected]:/volume/build/junos/12.1/service/12.1X45-D10/obj-octeon/junos/bsd/k
ernels/JSRXNLE/kernel Build date: 2013-07-04 07:32:04 U"
iso.3.6.1.2.1.1.3.0 = Timeticks: (7437040) 20:39:30.40
iso.3.6.1.2.1.1.4.0 = ""
iso.3.6.1.2.1.1.5.0 = ""
iso.3.6.1.2.1.1.6.0 = ""
iso.3.6.1.2.1.2.1.0 = INTEGER: 48
iso.3.6.1.2.1.2.2.1.1.656 = INTEGER: 656
iso.3.6.1.2.1.2.2.1.2.656 = STRING: "ge-0/0/0.0"
iso.3.6.1.2.1.2.2.1.8.656 = INTEGER: 1
iso.3.6.1.2.1.2.2.1.10.656 = Counter32: 19240538
iso.3.6.1.2.1.2.2.1.14.656 = Counter32: 0
iso.3.6.1.2.1.2.2.1.16.656 = Counter32: 19226558
iso.3.6.1.2.1.2.2.1.20.656 = Counter32: 0
iso.3.6.1.2.1.31.1.1.1.1.656 = STRING: "ge-0/0/0.0"
iso.3.6.1.2.1.31.1.1.1.6.656 = Counter64: 19240538
iso.3.6.1.2.1.31.1.1.1.10.656 = Counter64: 19226558
End of MIB

A nice short walk giving them just what they need and nothing more.
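
As a final negative test, you could try a get against an OID outside the view, such as a neighbouring ifIndex (657 here is purely hypothetical); the agent should return an error rather than an answer:

C:\snmpwalk>snmpget -v 1 -c CUSTOMER1 192.168.31.252 1.3.6.1.2.1.2.2.1.2.657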

L2VPN on Junos using CCC/Martini/Kompella

There are three main ways to provide a point-to-point L2 link over MPLS on Junos. Below I’ll give a brief description and show how to configure all three.

For the below descriptions I’ll be using this simple topology. All these devices are running as logical-systems on a single MX5.
[Diagram: L2VPN topology]

CE Configs

My two CE devices will be configured the same for all three types below:
CE T1:

root@MX5:T1> show configuration interfaces ge-1/1/0
unit 0 {
    family inet {
        address 2.2.2.1/24;
    }
}

CE BB:

root@MX5:BB> show configuration interfaces ge-1/1/3
unit 0 {
    family inet {
        address 2.2.2.2/24;
    }
}

CCC

Circuit Cross-Connect is a legacy type of L2 point-to-point link found in Junos. CCC over MPLS requires a unique RSVP LSP per circuit, per direction. It needs a unique LSP because there is no VC (inner) label: all frames arriving over a given LSP belong to a specific circuit.

[Diagram: CCC circuits over MPLS]

Customer frames in either direction are encapsulated into another L2 frame with an RSVP label. On the far side, the configuration maps that label to the circuit the frames belong to.
[Diagram: CCC frame header]

The core network has already been set up with OSPF/RSVP/MPLS so I won’t go over that; I’ll concentrate on the PE boxes themselves. I need to create a regular RSVP LSP in each direction. The CCC config sits under the protocols connections stanza.
R3

interfaces {
    ge-1/1/1 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    rsvp {
        interface all;
    }
    mpls {
        label-switched-path TO-R6 {
            to 6.6.6.6;
            no-cspf;
        }
        interface all;
    }
    ospf {
        area 0.0.0.0 {
            interface all;
        }
    }
    connections {
        remote-interface-switch R6 {
            interface ge-1/1/1.0;
            transmit-lsp TO-R6;
            receive-lsp TO-R3;
        }
    }
}

To confirm on the PE we can check via show connections:

root@MX5:R3> show connections | find R6
R6                                rmt-if      Up      Aug 26 16:41:00           2
  ge-1/1/1.0                        intf  Up
  TO-R6                             tlsp  Up
  TO-R3                             rlsp  Up

Finally we can test from the CE device:

root@MX5:T1> ping 2.2.2.2 rapid count 10
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!!!!!!
--- 2.2.2.2 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.464/0.490/1.041/0.081 ms

Martini

Martini allows you to use the same LSP for multiple circuits. The circuits can be multiplexed because there is a second MPLS label in the stack. The bottom label is the VC label, and it tells the PE which L2VPN the frame belongs to. Martini uses LDP to signal the VC label, while the transport label can come from either LDP or RSVP. If using RSVP, you need to enable LDP on the loopback interfaces of both PE routers so the targeted session can form. Of course, you still need at least two LSPs, as LSPs are unidirectional.

The frame header will now contain an RSVP/LDP transport label as well as an LDP-signalled VC label. This allows the PEs to multiplex multiple circuits over the same LSP.
[Diagram: Martini frame header]

Martini is also configured under the protocols stanza.
This is R6’s config:

interfaces {
    ge-1/1/2 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    rsvp {
        interface all;
    }
    mpls {
        label-switched-path TO-R3 {
            to 3.3.3.3;
            no-cspf;
        }
        interface all;
    }
    ospf {
        area 0.0.0.0 {
            interface all;
        }
    }
    ldp {
        interface lo0.6;
    }
    l2circuit {
        neighbor 3.3.3.3 {
            interface ge-1/1/2.0 {
                virtual-circuit-id 1;
            }
        }
    }
}
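
R3’s side mirrors this: the l2circuit neighbor points at R6, and the virtual-circuit-id must match on both ends. A minimal sketch (ge-1/1/1 is R3’s CE-facing interface from the CCC section; the loopback unit is an assumption):

protocols {
    ldp {
        interface lo0.3;    /* assumed loopback unit for the R3 logical system */
    }
    l2circuit {
        neighbor 6.6.6.6 {
            interface ge-1/1/1.0 {
                virtual-circuit-id 1;    /* must match R6's VC ID */
            }
        }
    }
}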

R3 and R6 should have a targeted LDP session and the circuit should be up:

root@MX5:R6> show ldp neighbor
Address            Interface          Label space ID         Hold time
3.3.3.3            lo0.6              3.3.3.3:0                39

root@MX5:R6> show l2circuit connections | find Neigh
Neighbor: 3.3.3.3
    Interface                 Type  St     Time last up          # Up trans
    ge-1/1/2.0(vc 1)          rmt   Up     Aug 26 17:28:31 2013           1
      Remote PE: 3.3.3.3, Negotiated control-word: Yes (Null)
      Incoming label: 299840, Outgoing label: 299904
      Negotiated PW status TLV: No
      Local interface: ge-1/1/2.0, Status: Up, Encapsulation: ETHERNET

All looks good. Check from CE to CE again:

root@MX5:T1> ping 2.2.2.2 rapid count 10
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!!!!!!
--- 2.2.2.2 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.468/0.498/0.720/0.074 ms

Kompella

Kompella uses a two-label stack, like Martini, but the VC label is signalled via BGP. Once again the transport underneath can be either RSVP or LDP. Kompella requires a lot more configuration than the above two; however, as it uses BGP, you can just add the l2vpn address family to your existing BGP deployment.

[Diagram: Kompella frame header]

I’ll show R3’s config here:

interfaces {
    ge-1/1/1 {
        encapsulation ethernet-ccc;
        unit 0 {
            family ccc;
        }
    }
}
protocols {
    rsvp {
        interface all;
    }
    mpls {
        label-switched-path TO-R6 {
            to 6.6.6.6;
            no-cspf;
        }
        interface all;
    }
    bgp {
        group iBGP {
            local-address 3.3.3.3;
            family l2vpn {
                signaling;
            }
            peer-as 100;
            neighbor 6.6.6.6;
        }
    }
    ospf {
        area 0.0.0.0 {
            interface all;
        }
    }
}
routing-instances {
    CUS1 {
        instance-type l2vpn;
        interface ge-1/1/1.0;
        route-distinguisher 100:100;
        vrf-target target:100:1;
        protocols {
            l2vpn {
                encapsulation-type ethernet;
                interface ge-1/1/1.0;
                site T1 {
                    site-identifier 1;
                    interface ge-1/1/1.0;
                }
            }
        }
    }
}
routing-options {
    autonomous-system 100;
}

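R6’s side is the mirror image: the same RD and RT, but a different site-identifier. A minimal sketch (the interface and site names are assumptions based on the Martini section):

routing-instances {
    CUS1 {
        instance-type l2vpn;
        interface ge-1/1/2.0;    /* assumed CE-facing interface on R6 */
        route-distinguisher 100:100;
        vrf-target target:100:1;
        protocols {
            l2vpn {
                encapsulation-type ethernet;
                interface ge-1/1/2.0;
                site BB {
                    site-identifier 2;    /* must differ from R3's site 1 */
                    interface ge-1/1/2.0;
                }
            }
        }
    }
}
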
Kompella uses a routing-instance exactly like a regular L3VPN in Junos. This includes the RD and RT. We can check the BGP session:

root@MX5:R3> show bgp summary
Groups: 1 Peers: 1 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
bgp.l2vpn.0
                       1          1          0          0          0          0
Peer                     AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
6.6.6.6                 100         35         35       0       0       13:52 Establ
  bgp.l2vpn.0: 1/1/1/0
  CUS1.l2vpn.0: 1/1/1/0
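
Beyond the BGP session, you can also verify the circuit itself (output omitted here):

root@MX5:R3> show l2vpn connections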

Final confirmation is to ping from CE to CE:

root@MX5:T1> ping 2.2.2.2 rapid count 10
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!!!!!!
--- 2.2.2.2 ping statistics ---
10 packets transmitted, 10 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.469/0.497/0.714/0.072 ms

Conclusion

  • CCC cannot multiplex multiple circuits over the same LSP, hence once you start ramping up your circuits it’s not at all scalable.
  • CCC, however, is supported on Juniper’s EX3200 range, which gives you a cheap option for offering L2VPN over MPLS
  • Martini and Kompella both multiplex over the same LSP by signalling VC labels
  • Martini is simple to configure and makes sense if you’re a small ISP offering a couple of links here and there, especially if you’re already running LDP
  • Kompella is a lot more complicated to configure; however, it runs off the back of BGP. This means you can use your existing BGP topology to run L3VPN/L2VPN/MVPN/etc. with a single signalling protocol.
  • BGP also scales incredibly well, so it’s the best option for larger deployments.

IPv6 over IPv4 MPLS Core Interop – IOS, Junos, Netiron – Part 2 of 2 – 6VPE

This is part two of my blog started here: http://mellowd.co.uk/ccie/?p=3300

Same diagram as last time:
[Diagram: multi-vendor L3VPN topology]

This time each CPE is going to be connected to a VRF on the PE router. I’m only using one customer for this post, but this is regular L3VPN so scale as you see fit.

There is one major issue with the NetIron: it doesn’t support the VPNv6 address family :( I’m using the latest 5.4b code and nothing. So this is a Junos/IOS-only lab.

CPE config

All the CPEs are running BGP with their directly connected PE routers. All are advertising reachability to their IPv6 loopback addresses to their PE router. I’m only showing R6’s config as the others are the same with different addresses:

interfaces {
    ae1 {
        unit 36 {
            vlan-id 36;
            family inet6 {
                address 2001:db8:36::6/64;
            }
        }
    }
    lo0 {
        unit 6 {
            family inet6 {
                address 2001:db8:6666::6666/128;
            }
        }
    }
}
protocols {
    bgp {
        group PROVIDER {
            family inet6 {
                unicast;
            }
            export LOOPBACK;
            neighbor 2001:db8:36::3 {
                peer-as 100;
            }
        }
    }
}
policy-options {
    policy-statement LOOPBACK {
        from {
            protocol direct;
            route-filter 2001:db8:6666::6666/128 exact;
        }
        then accept;
    }
}
routing-options {
    router-id 6.6.6.6;
    autonomous-system 65123 loops 2;
}

You’ll need to statically define your router-id for all sites. If a router is running ONLY IPv6, or your VRF ONLY has an IPv6 address, then the router has no IPv4 address from which to choose its router-id. This will be a common theme throughout, as you’ll also need to set router-ids in IPv6-only VRF instances.

PE config

Junos

First we need to set up the VRF to the customer and run BGP. We then need to enable the VPNv6 family in BGP. I’m going to remove the old IPv6 unicast config used in part one of this series.

USER3:R3> show configuration protocols
mpls {
    ipv6-tunneling;
    interface ae1.13;
}
bgp {
    group 6VPE {
        family inet6-vpn {
            unicast;
        }
        peer-as 100;
        neighbor 4.4.4.4;
    }
}

USER3:R3> show configuration routing-instances
CUSTOMER1 {
    instance-type vrf;
    interface fe-0/0/3.36;
    route-distinguisher 3.3.3.3:1;
    vrf-target target:100:1;
    routing-options {
        router-id 3.3.3.3;
    }
    protocols {
        bgp {
            group EXTERNAL {
                advertise-peer-as;
                family inet6 {
                    unicast;
                }
                neighbor 2001:db8:36::6 {
                    peer-as 65123;
                }
            }
        }
    }
}

The IPv6 address family is running with the customer, and the VPNv6 address family is running with the IOS PE R4. Note that I have to use ‘advertise-peer-as’ on R3, as by default Junos will not advertise a route to a peer whose AS number is already in the AS path.

IOS

The main issue with IOS is that I cannot statically define a BGP router-id if I’m ONLY running IPv6. BGP requires a router-id in x.x.x.x format, and IOS does not give me the option to hard-code a router-id under the BGP process for the VRF or under the ipv6 unicast address family. So I had to enable the ipv4 address family under the VRF and define a loopback address in the VRF to use as the router-id. Very silly indeed.

vrf definition CUSTOMER1
 rd 4.4.4.4:100
 !
 address-family ipv4
 exit-address-family
 !
 address-family ipv6
 route-target export 100:1
 route-target import 100:1
 exit-address-family
!
interface Loopback4
 vrf forwarding CUSTOMER1
 ip address 4.4.4.4 255.255.255.255
!
router bgp 100
 bgp router-id vrf auto-assign
 no bgp default ipv4-unicast
 bgp log-neighbor-changes
 neighbor 3.3.3.3 remote-as 100
 neighbor 3.3.3.3 update-source Loopback0
 !
 address-family vpnv6
  neighbor 3.3.3.3 activate
  neighbor 3.3.3.3 send-community extended
 exit-address-family
 !
 address-family ipv6 vrf CUSTOMER1
  no synchronization
  neighbor 2001:DB8:47::7 remote-as 65123
  neighbor 2001:DB8:47::7 activate
 exit-address-family

VRF assigned to the CE-PE link. IPv6 unicast running with the CPE and VPNv6 running with the Junos PE R3 router.

Verification

Let’s first check if our VPNv6 sessions are up:

7200_SRD_R4#show bgp vpnv6 unicast all   neighbors 3.3.3.3 | include state|fam$
  BGP state = Established, up for 03:09:47
    Address family VPNv6 Unicast: advertised and received
 For address family: VPNv6 Unicast
Connection state is ESTAB, I/O status: 1, unread input bytes: 0
USER3:R3> show bgp neighbor 4.4.4.4 | match "Estab|NLRI"
  Type: Internal    State: Established    Flags: 
  NLRI for restart configured on peer: inet6-vpn-unicast
  NLRI advertised by peer: inet6-vpn-unicast
  NLRI for this session: inet6-vpn-unicast

Sessions are up and running the VPNv6 family.
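
You can also confirm the remote loopback has landed in the customer VRF’s IPv6 table on R3 (output omitted here):

USER3:R3> show route table CUSTOMER1.inet6.0 2001:db8:7777::7777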

Can the CEs ping each other from their IPv6 loopbacks?

USER7:R7> ping 2001:db8:6666::6666 source 2001:db8:7777::7777 rapid count 5
PING6(56=40+8+8 bytes) 2001:db8:7777::7777 --> 2001:db8:6666::6666
!!!!!
--- 2001:db8:6666::6666 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 1.520/1.726/1.997/0.195 ms
USER6:R6> ping 2001:db8:7777::7777 source 2001:db8:6666::6666 rapid count 5
PING6(56=40+8+8 bytes) 2001:db8:6666::6666 --> 2001:db8:7777::7777
!!!!!
--- 2001:db8:7777::7777 ping6 statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/std-dev = 1.533/1.706/1.968/0.147 ms

No problems there :)