OSPF Fast Re-Route and BFD on Junos

On September 18, 2013, in JNCIE, by Darren

One of the few advantages that EIGRP had over OSPF and IS-IS was that it had feasible successors. That is, the router had already pre-calculated a route to a destination over a backup, loop-free path.

OSPF and IS-IS have had this for some time now on both IOS and Junos. It’s also supported on IOS-XR.

This post will mainly go over OSPF. The process is nearly identical for IS-IS.
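
For IS-IS the equivalent knobs live under protocols isis rather than per-area. A rough set-style sketch (using the same R3 interface that appears later in this post; I haven’t labbed the IS-IS version here):

set protocols isis interface fe-0/1/4.24 link-protection
set protocols isis interface fe-0/1/4.24 bfd-liveness-detection minimum-interval 50
set protocols isis interface fe-0/1/4.24 bfd-liveness-detection multiplier 3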

To start I’ll be using the following topology:
[Topology diagram: R2, R3, R4 and R5; R3 connects to R4 over two links]
R3 has two links to R4, both going through a switch, which will allow us to bring a link down without pulling the interface down. I’m configuring a cost of 100 on the first link and 1000 on the second, as I don’t want to bring ECMP into play for this post.

How does a router know its neighbour is down? If the interface goes down, detection is quick. If the interface stays up but something along the path is dropping packets, the router can take quite a long time to notice.

If we leave OSPF at its defaults, it could be 40 seconds before R3 realises it cannot get to R4 over their primary interface (the standard dead interval on broadcast links). Until that happens, R3 will be sending packets into the void.
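
You can tighten the hello and dead intervals to shrink that window, but even then you’re talking seconds rather than milliseconds. As a rough illustration on R3’s primary link (the values here are just examples):

set protocols ospf area 0.0.0.0 interface fe-0/1/4.24 hello-interval 1
set protocols ospf area 0.0.0.0 interface fe-0/1/4.24 dead-interval 3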

I’ll set up standard OSPF on all interfaces. From R2 I’ll be sending pings to R5’s loopback. The two R3–R4 links are tagged interfaces in different VLANs, so on the switch I can simply remove vlan 24, which will cause packets over that VLAN to be dropped.

OSPF – No tweaking

Standard OSPF with no tweaks. Here is R3’s config:

darreno@M7i> show configuration protocols ospf
area 0.0.0.0 {
    interface lo0.3;
    interface fe-0/1/4.24 {
        metric 100;
    }
    interface fe-0/1/5.35 {
        metric 1000;
    }
}

I’ll now initiate a ping flood from R2 to R5. Once that starts I’ll remove vlan 24 from the switch.

Let’s see how the ping flood goes:

!!!.....................................................................!!!

Not very good at all!

OSPF – BFD

Let’s add BFD to the OSPF session on both R3 and R4:

darreno@M7i> show configuration protocols ospf
area 0.0.0.0 {
    interface all;
    interface lo0.3;
    interface fe-0/1/4.24 {
        metric 100;
        bfd-liveness-detection {
            minimum-interval 50;
            minimum-receive-interval 30;
            multiplier 3;
        }
    }
    interface fe-0/1/5.35 {
        metric 1000;
        bfd-liveness-detection {
            minimum-interval 50;
            minimum-receive-interval 30;
            multiplier 3;
        }
    }
}
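
Before re-running the test it’s worth checking that the BFD sessions actually came up. I won’t paste the output here, but the usual commands are:

show bfd session
show ospf neighbor extensive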

Now we do the same test as above:

!!!!.!!!

Much, much better. Note that this is a very small topology, so LSAs flood very quickly. On a larger topology, especially one spanning geographic regions, it could take much longer for the new route to be calculated.

OSPF – BFD & FRR

Now I’ll add FRR to OSPF on R3. I’ll protect the fe-0/1/4.24 link from R3’s point of view. R3 will run SPF for all its destinations through that interface and will work out whether it can reach each destination through any other interface without looping. In this simple topology, any traffic sent over the higher-metric interface to R4 will still get to R5, as R4 will not send it back.

First we enable link-protection:

darreno@M7i> show configuration protocols ospf area 0 interface fe-0/1/4.24
link-protection;
metric 100;
bfd-liveness-detection {
    minimum-interval 50;
    minimum-receive-interval 30;
    multiplier 3;
}

Junos will pre-calculate the backup routes, but it will NOT add them to the FIB by default. You have to allow more than one next hop per prefix in the FIB:

darreno@M7i> show configuration policy-options policy-statement BALANCE
then {
    load-balance per-packet;
}

darreno@M7i> show configuration routing-options forwarding-table
export BALANCE;
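
In set form, those two pieces are simply:

set policy-options policy-statement BALANCE then load-balance per-packet
set routing-options forwarding-table export BALANCE

Despite the name, load-balance per-packet on modern Junos actually hashes per flow; here we only need it so the pre-computed backup next-hop gets installed in the forwarding table alongside the primary.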

Let’s run the same test as above again:

!!!!!!!!!!!!!!!!!!!!!!

I’m simply not losing any at all. The difference between BFD alone and BFD plus link-protection is most pronounced on much larger topologies. Remember FRR is a router making a quick local repair to keep packets moving from A to B while an alternative regular route is calculated.

You can see that enabling FRR is a piece of cake. To verify it, you need to dig a little deeper. First let’s see the FRR coverage on R3:

darreno@M7i> show ospf backup coverage
Topology default coverage:

Node Coverage:

Area             Covered  Total  Percent
                   Nodes  Nodes  Covered
0.0.0.0                2      3   66.67%

Route Coverage:

Path Type  Covered   Total  Percent
            Routes  Routes  Covered
Intra            5      11   45.45%
Inter            0       0  100.00%
Ext1             0       0  100.00%
Ext2             0       0  100.00%
All              5      11   45.45%

Not every single prefix can be covered, as it’s quite topology dependent. Let’s look at the detail for 5.5.5.5 specifically:

darreno@M7i> show ospf backup spf detail | find 5.5.5.5
5.5.5.5
  Self to Destination Metric: 101
  Parent Node: 10.0.8.10
  Primary next-hop: fe-0/1/4.24 via 10.0.24.4
  Backup next-hop: fe-0/1/5.35 via 10.0.35.4
  Backup Neighbor: 4.4.4.4
    Neighbor to Destination Metric: 1, Neighbor to Self Metric: 1
    Self to Neighbor Metric: 100, Backup preference: 0x0
    Eligible, Reason: Contributes backup next-hop

Here we see that fe-0/1/4.24 is the primary and fe-0/1/5.35 is the backup. The backup is also eligible. If we take a look at the route itself:

darreno@M7i> show route 5.5.5.5

inet.0: 24 destinations, 25 routes (24 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

5.5.5.5/32         *[OSPF/10] 00:03:15, metric 101
                    > to 10.0.24.4 via fe-0/1/4.24
                      to 10.0.35.4 via fe-0/1/5.35

Both next hops are there, but only the first will be used until it fails.

Finally we can take a look at the FIB entry:

darreno@M7i> show route forwarding-table destination 5.5.5.5
Routing table: default.inet
Internet:
Destination        Type RtRef Next hop           Type Index NhRef Netif
5.5.5.5/32         user     1                    ulst 262142     5
                              10.0.24.4          ucst  1303     2 fe-0/1/4.24
                              10.0.35.4          ucst  1304     2 fe-0/1/5.35

The backup hop is already programmed ready to take over as soon as the primary fails.
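
As an aside, Junos also supports node-link-protection for when you want a backup path that avoids the neighbouring node entirely (not useful in this lab, as both links land on R4). I haven’t tested it here, but it’s configured in the same place:

set protocols ospf area 0.0.0.0 interface fe-0/1/4.24 node-link-protection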

VPLS on Junos signalled via LDP or BGP

On September 3, 2013, in JNCIE, by Darren

Continuing on from the L2VPN on Junos post, let’s switch focus to VPLS. CCC is a point-to-point technology and so is out of the question. That leaves LDP and BGP to do our VC label signalling. As always, you can use either LDP or RSVP for your transport label signalling.

Slightly different topology this time, as I’m using it to test different ways for the CE to attach to the VPLS. For now we’ll simply focus on T1, C2, and T2:
[Topology diagram: CE routers T1, C2 and T2 attached to PE routers R3, R6 and R7]

All three CE WAN interfaces are in the same subnet running OSPF. The goal is for them to be able to reach each other’s loopbacks. As far as the CE devices are concerned, they are simply plugged into a ‘big switch’.

LDP

I’ll concentrate on the PE R3 for this example. We first need to let the router know that the interface pointing towards T1 will in fact be a VPLS interface:

darreno@M7i> show configuration interfaces fe-0/0/1
encapsulation ethernet-vpls;
unit 0;
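
As an aside, if the CE attached over a VLAN-tagged sub-interface instead of a whole port, the attachment would use vlan-vpls encapsulation. A rough sketch (vlan-id 100 is just an example, not from this lab):

set interfaces fe-0/0/1 vlan-tagging
set interfaces fe-0/0/1 encapsulation vlan-vpls
set interfaces fe-0/0/1 unit 100 encapsulation vlan-vpls
set interfaces fe-0/0/1 unit 100 vlan-id 100
set routing-instances VPLS1 interface fe-0/0/1.100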

Our regular RSVP MPLS config, nothing special. Note that LDP is configured only on the loopback interface, which is enough for the targeted LDP sessions used to signal the VC labels:

darreno@M7i> show configuration protocols
rsvp {
    interface all;
}
mpls {
    label-switched-path TO-R6 {
        to 6.6.6.6;
        no-cspf;
    }
    label-switched-path TO-R7 {
        to 7.7.7.7;
        no-cspf;
    }
    interface all;
}
ospf {
    traffic-engineering;
    area 0.0.0.0 {
        interface all;
    }
}
ldp {
    interface lo0.3;
}

Finally, the LDP VPLS config itself. As there is no auto-discovery, you need to let Junos know which other PE routers are participating in this VPLS:

darreno@M7i> show configuration routing-instances
VPLS1 {
    instance-type vpls;
    interface fe-0/0/1.0;
    protocols {
        vpls {
            vpls-id 1;
            neighbor 6.6.6.6;
            neighbor 7.7.7.7;
        }
    }
}
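
Before testing from the CE, the pseudowires can be checked on the PE itself. Output omitted here, but the commands I normally reach for are:

show vpls connections
show ldp neighbor

show vpls connections should list each configured neighbour as Up once the targeted LDP sessions are established.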

I’ve matched the above configs on R6 and R7. Let’s take a look at the network from T1’s perspective:

USERT1@M7i:T1> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
192.168.0.2      fe-0/1/0.0             Full      12.12.12.12      128    37
192.168.0.3      fe-0/1/0.0             Full      14.14.14.14      128    36

USERT1@M7i:T1> show route protocol ospf

inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

12.12.12.12/32     *[OSPF/10] 00:05:43, metric 1
                    > to 192.168.0.2 via fe-0/1/0.0
14.14.14.14/32     *[OSPF/10] 00:27:15, metric 1
                    > to 192.168.0.3 via fe-0/1/0.0
224.0.0.5/32       *[OSPF/10] 2d 06:44:41, metric 1
                      MultiRecv

USERT1@M7i:T1> ping 12.12.12.12 rapid count 30
PING 12.12.12.12 (12.12.12.12): 56 data bytes
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--- 12.12.12.12 ping statistics ---
30 packets transmitted, 30 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.085/1.338/6.357/0.935 ms

USERT1@M7i:T1> ping 14.14.14.14 rapid count 30
PING 14.14.14.14 (14.14.14.14): 56 data bytes
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--- 14.14.14.14 ping statistics ---
30 packets transmitted, 30 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.081/1.496/11.077/1.784 ms

T1 considers T2 and C2 to be directly connected via L2. There is an OSPF neighbourship between all three and routes are learned. The data plane is also functioning correctly.

BGP

Let’s turn our attention now to BGP. There are a number of advantages to using BGP, especially if you already run it in the SP network. The l2vpn address family will not only advertise VC labels between PE routers, it will also allow PE routers to auto-discover any other PE configured in the same VPLS.

I’ll keep the interface config the same as above. You may notice there is more configuration for the BGP version, but in the long run there is less config, as that same BGP session is good for all your VPLS instances on the PE.

Let’s start with our BGP config:

darreno@M7i> show configuration routing-options autonomous-system
100;

darreno@M7i> show configuration protocols bgp
group iBGP {
    local-address 3.3.3.3;
    family l2vpn {
        signaling;
    }
    peer-as 100;
    neighbor 6.6.6.6;
    neighbor 7.7.7.7;
}

The BGP VPLS config is slightly different. We now have site identifiers, but no manual neighbour config. As with our L3VPN setup, we need both an RD and RTs configured.

darreno@M7i> show configuration routing-instances
VPLS1 {
    instance-type vpls;
    interface fe-0/0/1.0;
    route-distinguisher 100:200;
    vrf-target target:100:200;
    protocols {
        vpls {
            site T1 {
                site-identifier 1;
            }
        }
    }
}
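
With BGP signalling, the auto-discovered neighbours and label blocks can be verified on the PE (again, output omitted):

show vpls connections
show route table bgp.l2vpn.0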

We test from our CE once again:

USERT1@M7i:T1> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
192.168.0.2      fe-0/1/0.0             Full      12.12.12.12      128    34
192.168.0.3      fe-0/1/0.0             Full      14.14.14.14      128    36

USERT1@M7i:T1> show route protocol ospf

inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

12.12.12.12/32     *[OSPF/10] 00:03:34, metric 1
                    > to 192.168.0.2 via fe-0/1/0.0
14.14.14.14/32     *[OSPF/10] 00:04:26, metric 1
                    > to 192.168.0.3 via fe-0/1/0.0
224.0.0.5/32       *[OSPF/10] 2d 07:00:30, metric 1
                      MultiRecv

USERT1@M7i:T1> ping 12.12.12.12 rapid count 30
PING 12.12.12.12 (12.12.12.12): 56 data bytes
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--- 12.12.12.12 ping statistics ---
30 packets transmitted, 30 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.061/1.480/10.779/1.728 ms

USERT1@M7i:T1> ping 14.14.14.14 rapid count 30
PING 14.14.14.14 (14.14.14.14): 56 data bytes
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
--- 14.14.14.14 ping statistics ---
30 packets transmitted, 30 packets received, 0% packet loss
round-trip min/avg/max/stddev = 1.079/1.183/1.394/0.088 ms

LDP & BGP

There is another way to get this to work. You can use BGP for auto-discovery while using LDP to advertise the VC labels. This is the way Brocade NetIron boxes do it, and interop is the only reason I would do it this way. If you have BGP running already, why not just let it do both discovery and VC assignment?

The configuration on PE R3 has been changed as follows:

darreno@M7i> show configuration protocols bgp
group iBGP {
    local-address 3.3.3.3;
    family l2vpn {
        auto-discovery-only;
    }
    peer-as 100;
    neighbor 6.6.6.6;
    neighbor 7.7.7.7;
}

darreno@M7i> show configuration routing-instances
VPLS1 {
    instance-type vpls;
    interface fe-0/0/1.0;
    route-distinguisher 100:200;
    l2vpn-id l2vpn-id:100:200;
    vrf-target target:100:200;
    protocols {
        vpls;
    }
}

CE-CE connectivity has been tested as above with no issues at all.

Junos and IOS QoS – Part 4 of 4 – Hierarchical QoS

Brad Fleming from Kanren gave me remote access to a lab MX5 router in order to do the Junos section of this post, for which I am very grateful!

There are many different needs for H-QoS and many different ways to configure it. I’m going to go over one particular use case for H-QoS which I use on a daily basis. More so than any other type of QoS, H-QoS is very hardware specific, even line-card specific. In this post I’ll be using a Juniper MX5 and a Cisco ME3600X, both of which allow me to do H-QoS on their gig ports.

My use case is as follows. Core gig ports are not cheap; ‘revenue ports’, as ISPs like to call them. Most core kit has a load of gig ports, some 10Gb ports, and maybe 40Gb/100Gb ports.

Not all customers want a 1Gb link. Some want 10Mb, others 50Mb, some 300Mb. Heck, some only want 4Mb. In order not to waste precious revenue ports, these circuits are aggregated onto a single physical gig port, i.e. we can put 10 x 100Mb circuits onto a single gig link.

The biggest problem with doing this is that it gets difficult to give QoS outbound back to the customer unless your hardware can do H-QoS. Let’s take the following port diagram as an example:

[Port diagram: a 1Gb physical port carrying Customer A on vlan 2000 (20Mb) and Customer B on vlan 2001 (70Mb)]

The physical port is 1Gb. Here I have two customer circuits attached. Customer A is paying for 20Mb while Customer B is paying for 70Mb. Not only do I want to shape their respective queues, I also want to give 30% priority bandwidth to each customer, inside each queue. So I need to shape vlan 2000 to 20Mb, and inside that 20Mb ensure 30% is given to EF packets.

IOS

In IOS I create the child and parent policies.

policy-map 30_70
 class EF
  priority
  police cir percent 30 conform-action transmit  exceed-action drop
 class class-default
  queue-limit percent 100
!
policy-map 20Mb
 class class-default
  shape average 20000000
   service-policy 30_70
!
policy-map 70Mb
 class class-default
  shape average 70000000
   service-policy 30_70

Each parent policy is then attached outbound to an EVC on the physical port:

ME3600X#sh run int gi0/1
Building configuration...

Current configuration : 674 bytes
!
interface GigabitEthernet0/1
 switchport trunk allowed vlan none
 switchport mode trunk
 mtu 9800
 service instance 1 ethernet
  description CUSTOMER1
  encapsulation dot1q 2000
  rewrite ingress tag pop 1 symmetric
  service-policy output 20Mb
  bridge-domain 150
 !
 service instance 2 ethernet
  description CUSTOMER2
  encapsulation dot1q 2001
  rewrite ingress tag pop 1 symmetric
  service-policy output 70Mb
  bridge-domain 150
 !
end

Junos

H-QoS on Junos is done using a traffic-control profile. This allows you to shape to a specific rate, attach a scheduler inside that profile, and attach that profile to an interface.
First let’s create our schedulers and scheduler-map:

darreno> show configuration class-of-service schedulers
EF {
    transmit-rate {
        percent 30;
        exact;
    }
    priority high;
}
BE {
    transmit-rate {
        remainder;
    }
}

darreno> show configuration class-of-service scheduler-maps
OUTBOUND {
    forwarding-class expedited-forwarding scheduler EF;
    forwarding-class best-effort scheduler BE;
}

Now we create our traffic-control profiles and attach the above scheduler-map to them:

darreno> show configuration class-of-service traffic-control-profiles
20Mb {
    scheduler-map OUTBOUND;
    shaping-rate 20m;
}
70Mb {
    scheduler-map OUTBOUND;
    shaping-rate 70m;
}

Attach the profile to the interface under class-of-service:

darreno> show configuration class-of-service interfaces
ge-1/0/0 {
    unit 2000 {
        output-traffic-control-profile 20Mb;
    }
    unit 2001 {
        output-traffic-control-profile 70Mb;
    }
}

Note that you need to configure hierarchical-scheduler under the interface itself:

darreno> show configuration interfaces ge-1/0/0
hierarchical-scheduler;
vlan-tagging;

unit 2000 {
    description "Customer 1";
    vlan-id 2000;
}
unit 2001 {
    description "Customer 2";
    vlan-id 2001;
}

Verification

IOS still has much better verification than Junos. I don’t know why Junos makes it so difficult to view this kind of information. When using service instances in IOS as above, the verification command has changed a bit, somewhat annoyingly.

ME3600X#sh ethernet service instance policy-map
  GigabitEthernet0/1: EFP 1

  Service-policy output: 20Mb

    Class-map: class-default (match-any)
      578 packets, 45186 bytes
      5 minute offered rate 1000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 20000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : 30_70

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 30 % bc 250 ms
            cir 6000000 bps, bc 187500 bytes
            conform-action transmit
            exceed-action drop
          conform: 0 (packets) 0 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          578 packets, 45186 bytes
          5 minute offered rate 1000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0
  GigabitEthernet0/1: EFP 2

  Service-policy output: 70Mb

    Class-map: class-default (match-any)
      501 packets, 39092 bytes
      5 minute offered rate 2000 bps, drop rate 0000 bps
      Match: any
  Traffic Shaping
    Average Rate Traffic Shaping
    Shape 70000 (kbps)
      Output Queue:
        Default Queue-limit 49152 bytes
        Tail Packets Drop: 0
        Tail Bytes Drop: 0

      Service-policy : 30_70

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Strict Priority
          police:
            cir percent 30 % bc 250 ms
            cir 21000000 bps, bc 656250 bytes
            conform-action transmit
            exceed-action drop
          conform: 0 (packets) 0 (bytes)
          exceed: 0 (packets) 0 (bytes)
          conform: 0 bps, exceed: 0 bps
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0

        Class-map: class-default (match-any)
          501 packets, 39092 bytes
          5 minute offered rate 2000 bps, drop rate 0000 bps
          Match: any
          Queue-limit 100 percent
          Queue-limit current-queue-depth 0 bytes
              Output Queue:
                Default Queue-limit 49152 bytes
                Tail Packets Drop: 0
                Tail Bytes Drop: 0
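
The numbers line up with the maths: 30% of 20Mb is 6Mb, which is the 6000000 bps policer under the 20Mb EVC, and 30% of 70Mb is 21Mb, the 21000000 bps policer under the 70Mb EVC. On the MX the closest equivalents I know of are the following, though they’re nowhere near as readable:

show class-of-service interface ge-1/0/0
show interfaces queue ge-1/0/0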

The old M10 I have in my lab cannot support the Tunnel Services PIC due to its ancient FEB. With an MX router and the correct line cards, you can reserve some bandwidth to create a built-in tunnel PIC to use for things like GRE, multicast, and logical tunnels.

This is especially handy when you have a single box and want to create a big topology of routers connected to each other. As a quick guide, I’ll show an MX5 divided into two logical systems connected to each other.

There are no physical interfaces plugged into anything. The box is simply on.

First we need to configure tunnel services:

darreno> show configuration chassis
fpc 1 {
    pic 0 {
        tunnel-services {
            bandwidth 1g;
        }
    }
}

This creates the lt interface in a specific place. Check where it landed, as you’ll need those numbers later:

darreno> show interfaces terse | match lt
lt-1/0/10               up    up

Let’s configure two systems. I’ll attach an lt interface to each, bind those two interfaces together and give each an IP address. I’ll also create a loopback interface in each and run OSPF:

darreno> show configuration logical-systems
J1 {
    interfaces {
        lt-1/0/10 {
            unit 0 {
                encapsulation ethernet;
                peer-unit 1;
                family inet {
                    address 10.0.0.1/24;
                }
            }
        }
        lo0 {
            unit 1 {
                family inet {
                    address 1.1.1.1/32;
                }
            }
        }
    }
    protocols {
        ospf {
            area 0.0.0.0 {
                interface all;
            }
        }
    }
}
J2 {
    interfaces {
        lt-1/0/10 {
            unit 1 {
                encapsulation ethernet;
                peer-unit 0;
                family inet {
                    address 10.0.0.2/24;
                }
            }
        }
        lo0 {
            unit 2 {
                family inet {
                    address 2.2.2.2/32;
                }
            }
        }
    }
    protocols {
        ospf {
            area 0.0.0.0 {
                interface all;
            }
        }
    }
}

To confirm I can log into one of them and check connectivity:

darreno> set cli logical-system J1
Logical system: J1

darreno:J1> show ospf neighbor
Address          Interface              State     ID               Pri  Dead
10.0.0.2         lt-1/0/10.0            Full      2.2.2.2          128    37

darreno:J1> ping 2.2.2.2 rapid
PING 2.2.2.2 (2.2.2.2): 56 data bytes
!!!!!
--- 2.2.2.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.432/0.492/0.675/0.092 ms

There are a few things to note. Logical tunnels are point-to-point: even though the encapsulation is ethernet, you cannot connect more than two units to the same segment. You can also configure a unit on the main routing instance which connects to a logical system, as sketched below. This is not only good for certification testing, but can open up all kinds of possibilities in a real-world design.
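
A minimal sketch of that last point, pairing a new unit in the main instance with one inside J1 (unit numbers and addressing are just examples):

set interfaces lt-1/0/10 unit 2 encapsulation ethernet
set interfaces lt-1/0/10 unit 2 peer-unit 3
set interfaces lt-1/0/10 unit 2 family inet address 10.0.1.1/24
set logical-systems J1 interfaces lt-1/0/10 unit 3 encapsulation ethernet
set logical-systems J1 interfaces lt-1/0/10 unit 3 peer-unit 2
set logical-systems J1 interfaces lt-1/0/10 unit 3 family inet address 10.0.1.2/24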

As long as you have a tunnel-services PIC on your M/T-series, or a Trio MPC/MIC on your MX router, you are good to go with the above.

Junos and IOS QoS – Part 3 of 4 – Shaping to EVC speed with priority

An ethernet physical port can only run at certain speeds, i.e. 10/100/1Gb/etc. Customers will often purchase a sub-rate of bandwidth on that bearer speed. For example, Customer A wants to buy 30Mb of bandwidth. You can’t run the physical port at 30Mb, so the ISP will run the interface at 100Mb and police inbound at 30Mb.

This makes QoS just a little more complicated. All the ratios we’ve used in the past are worked out against the WAN port’s physical speed. The router also doesn’t know that if 40Mb of burst comes in from the LAN, the actual available bandwidth is only 30Mb.

[Diagram: a 100Mb bearer with the customer service shaped/policed to 30Mb]

In this case, you need to first shape all traffic to 30Mb, and then inside that shaped queue give priority bandwidth to voice, etc.

IOS

IOS uses the concept of parent/child policy maps. The parent will shape the queue, while the child policy attached will give each queue their respective bandwidths and priority.

policy-map PARENT
 class class-default
  shape average 30000000
   service-policy CHILD
!
policy-map CHILD
 class EF
  priority percent 10
  police cir percent 10 conform-action transmit  exceed-action drop
 class class-default
  bandwidth remaining percent 100
!
interface FastEthernet0/0
 ip address 10.0.0.1 255.255.255.0
 service-policy output PARENT

In this policy the parent policy creates a queue with a bandwidth limit of 30Mb. Inside that policy rests another that gives EF packets 10 percent of priority bandwidth of that initial 30Mb queue. I’m also policing that queue as I don’t want the priority queue to starve other traffic. All other traffic gets 90-100% of the bandwidth, depending on how much priority traffic is in the queue at any one time.

Junos

As with most QoS topics, the following configuration is quite hardware specific. I’ve done the following on an SRX210H. Your configuration might change when doing the same sort of thing on an M/MX/data-centre SRX/etc., so YMMV.

Create the schedulers:

darreno@JR2> show configuration class-of-service schedulers
EF10 {
    transmit-rate {
        percent 10;
        exact;
    }
}
BE_REST {
    transmit-rate {
        remainder {
            100;
        }
    }
}

Put the above schedulers into a scheduler-map:

darreno@JR2> show configuration class-of-service scheduler-maps
SCHEDULE {
    forwarding-class expedited-forwarding scheduler EF10;
    forwarding-class best-effort scheduler BE_REST;
}

Finally, apply that map to the interface under class-of-service and configure the shaping rate:

darreno@JR2> show configuration class-of-service interfaces ge-0/0/1
unit 2001 {
    scheduler-map SCHEDULE;
    shaping-rate 30m;
}

In order for the above to work I need to configure per-unit-scheduler on the physical interface:

darreno@JR2> show configuration interfaces ge-0/0/1
per-unit-scheduler;

Verification

Simple again in IOS:

R1#sh policy-map int fa0/0
 FastEthernet0/0

  Service-policy output: PARENT

    Class-map: class-default (match-any)
      106 packets, 6360 bytes
      5 minute offered rate 0000 bps, drop rate 0000 bps
      Match: any
      Queueing
      queue limit 64 packets
      (queue depth/total drops/no-buffer drops) 0/0/0
      (pkts output/bytes output) 106/6360
      shape (average) cir 30000000, bc 120000, be 120000
      target shape rate 30000000

      Service-policy : CHILD

        queue stats for all priority classes:
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 0/0

        Class-map: EF (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match:  dscp ef (46)
          Priority: 10% (3000 kbps), burst bytes 75000, b/w exceed drops: 0

          police:
              cir 10 %
              cir 3000000 bps, bc 93750 bytes
            conformed 0 packets, 0 bytes; actions:
              transmit
            exceeded 0 packets, 0 bytes; actions:
              drop
            conformed 0000 bps, exceeded 0000 bps

        Class-map: class-default (match-any)
          106 packets, 6360 bytes
          5 minute offered rate 0000 bps, drop rate 0000 bps
          Match: any
          Queueing
          queue limit 64 packets
          (queue depth/total drops/no-buffer drops) 0/0/0
          (pkts output/bytes output) 106/6360
          bandwidth remaining 100%

We can see the entire queue is 30Mb. Inside that queue, EF traffic has priority bandwidth of 3000kbps (10% of 30Mb). All other traffic gets whatever is left, up to 30Mb.

On Junos it’s a bit cryptic again:

darreno@JR2> show class-of-service interface ge-0/0/1
Physical interface: ge-0/0/1, Index: 135
Queues supported: 8, Queues in use: 4
  Scheduler map: , Index: 2
  Congestion-notification: Disabled

  Logical interface: ge-0/0/1.2001, Index: 71
    Shaping rate: 30000000
    Object                  Name                   Type                    Index
    Scheduler-map           SCHEDULE               Output                   2878
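
If you want actual per-queue counters rather than just which objects are applied, these are usually more helpful (output omitted, and behaviour varies a little by platform):

show interfaces queue ge-0/0/1
show class-of-service scheduler-map SCHEDULE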

I wanted to do a more in-depth post on H-QoS but this SRX doesn’t support it. I don’t currently have an MX in the lab (only in the field) so hopefully soon…

© 2009-2014 Darren O'Connor All Rights Reserved