route convergence time

Hi

I got a network with two routers and two IP transit providers, each with
the full BGP table. Router A is connected to provider A and router B to
provider B. We use MPLS with a L3VPN with a VRF called "internet".
Everything happens inside that VRF.

Now if I interrupt one of the IP transit circuits, the routers will take
several minutes to remove the now bad routes and move everything to the
remaining transit provider. This is very noticeable to the customers. I am
looking into ways to improve that.

I added a static default route (0.0.0.0/0) pointing to provider A on router
A and did the same toward provider B on router B. This is supposed to be a
trick that lets the network keep forwarding packets before everything is
fully converged. Traffic might not leave on the most optimal link, but it
will be delivered.

Say I take down the provider A link on router A. As I understand it, the
hardware will notice this right away and stop using the routes through
provider A. Router A might know about the default route on router B and
send the traffic there. However, this is not much help, because on router B
no link is down, so its hardware is unaware of the failure until the BGP
process has finished updating the hardware tables, which apparently can
take several minutes.

My routers also have multipath support, but I am unsure if that is going to
be of any help.

Anyone got any tricks or pointers on what can be done to reduce the
downtime in case of an IP transit link failure? Or the related case of one
of my routers going down, or the link between them going down (the traffic
would then take a non-direct path instead of the direct link).

Thanks,

Baldur

Hey,

This is a complex problem and there are quite a few parts to consider.

Let's assume you want to optimize how fast you choose the right best exit
after a failure. The opposite (how fast the internet chooses the best entry
point into your network after a failure) is usually not as easy to
influence.

The first component of our total convergence time is how fast you can
actually detect the failure. If your BGP speaker is directly connected to
the transit's BGP speaker with no boxes in between, then you can detect the
failure about as fast as it takes your end to detect that the link is down,
which is usually pretty fast (you could tune the carrier-delay if you want
to). If there are any other boxes in between, you can't rely on that. The
best solution in that case, imho, would be to use BFD. If you can't do
that, you may want to try tuning the BGP keepalive/hold timers. Keep in
mind that running aggressive timers will consume CPU resources on both your
and the provider's end.
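For concreteness, a minimal sketch of what the BFD approach could look like
in Cisco IOS-style syntax (interface names, addresses and AS numbers below
are made-up placeholders; check your platform's BFD support first):

```
interface TenGigabitEthernet0/0/0
 description Transit to provider A
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 65000
 neighbor 192.0.2.1 remote-as 64500
 neighbor 192.0.2.1 fall-over bfd
 ! fallback if BFD is not available end-to-end:
 ! neighbor 192.0.2.1 timers 10 30
```

With fall-over bfd the session is torn down as soon as BFD declares the
path dead, instead of waiting for the BGP hold timer to expire.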

The second component would be how much time it takes BGP to find the
alternate routes. As you're using L3VPN, there's an easy trick to apply
here: you can just set up a different RD on each router, and both routers
will end up with routes from both providers in their BGP tables. That will
obviously consume hardware resources (usually RAM, as not every route will
make it into the FIB just yet), so make sure your routers can handle it.
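A sketch of that unique-RD trick, again in IOS-style syntax with made-up
values. The route-target stays the same so both routers still import each
other's routes; only the RD differs:

```
! router A
vrf definition internet
 rd 65000:1
 route-target both 65000:100
!
! router B
vrf definition internet
 rd 65000:2
 route-target both 65000:100
```

Because the RDs differ, the VPNv4 routes from the two PEs are distinct
prefixes to BGP, so it keeps both instead of best-path-selecting one of
them away before it ever reaches the other router.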

The third component would be how much time it takes to update the FIB
itself. This is usually fast for a single route, but not as fast as you
might think for ~550k routes. What you can do to speed this up depends
somewhat on your hardware. Most big vendors support some flavor of a
hierarchical FIB (Cisco calls theirs PIC core). Keep in mind that this will
also eat up hardware resources depending on the implementation itself. Make
sure you read up before you try anything, as it could end up doubling your
FIB requirements, which aren't light to begin with for full tables.
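On IOS/IOS-XE-style boxes, the related knob for keeping a precomputed
backup path installed in the FIB (PIC edge) is additional-paths install.
Treat this purely as a sketch and verify the behavior and resource cost on
your own platform:

```
router bgp 65000
 address-family ipv4 vrf internet
  bgp additional-paths install
```

With a backup path pre-installed, a failure becomes a pointer swap rather
than a per-prefix FIB rewrite, which is what makes the convergence time
prefix-independent.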

Last but not least, keep scalability in mind when reading the last two
paragraphs. On newer boxes, tuning for fast convergence may be more than
fine for 2 providers but practically impossible for, say, 6 or 8 of them.

As for the scenarios of local failure: first of all, really try to make
sure that the iBGP session between them (or towards their RRs, etc.) is as
robust as it gets. Assuming that's taken care of, convergence should take
about as much time as it takes your IGP to figure it out. BFD and the usual
IGP timer/feature adjustments apply. Next-hop tracking and fast peering
detection (assuming Cisco) are also nice, though if you have default routes
in your network, you might want to exclude them from being used for either.
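Assuming Cisco IOS, excluding the default from next-hop resolution and from
fast session deactivation could look roughly like this (all names and
addresses are placeholders):

```
ip prefix-list NON-DEFAULT seq 5 deny 0.0.0.0/0
ip prefix-list NON-DEFAULT seq 10 permit 0.0.0.0/0 le 32
!
route-map NHT-FILTER permit 10
 match ip address prefix-list NON-DEFAULT
!
router bgp 65000
 ! react immediately to next-hop changes
 bgp nexthop trigger delay 0
 ! don't let a default route resolve BGP next hops
 bgp nexthop route-map NHT-FILTER
 ! fast fall-over that ignores the default route
 neighbor 203.0.113.1 fall-over route-map NHT-FILTER
```

Without the filter, a default route in the table can keep next hops
"reachable" forever, which silently defeats both features.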

My thoughts and words are my own.

Kind Regards,

Spyros

Baldur Norddahl <baldur.norddahl@gmail.com> writes:

Hi

I added a static default route (0.0.0.0/0) pointing to provider A on router
A and did the same toward provider B on router B. This is supposed to be a
trick that lets the network keep forwarding packets before everything is
fully converged. Traffic might not leave on the most optimal link, but it
will be delivered.

The other thing here is that one of the main advantages of taking a full
routing table is that you can be free of default routes.

Anyone got any tricks or pointers on what can be done to reduce the
downtime in case of an IP transit link failure? Or the related case of one
of my routers going down, or the link between them going down (the traffic
would then take a non-direct path instead of the direct link).

With only two providers, route convergence is always going to be a
painful process. Especially if you're still using old equipment on your
edge.

But you shouldn't be losing transit links often enough for it to be a
major problem for your users. If you are, I'd start looking at other
options for transit.

You could also take smaller tables from a wider variety of providers. Most
folks in the wholesale transit business offer default routing plus
customer-specific routes. This won't give you best-path selection in the
truest sense, but if you're connected to enough upstream providers it can
get you pretty close.

And if you're a content consumer rather than a content provider, go and
peer with anyone that has an open peering policy. Most important content
providers will peer with anyone that serves customers, and they have
relatively flexible traffic minimums. Off the top of my head, that's
Facebook, Google, Netflix, Yahoo, Microsoft and several others.

I got a network with two routers and two IP transit providers, each with
the full BGP table. Router A is connected to provider A and router B to
provider B. We use MPLS with a L3VPN with a VRF called "internet".
Everything happens inside that VRF.

Now if I interrupt one of the IP transit circuits, the routers will take
several minutes to remove the now bad routes and move everything to the
remaining transit provider. This is very noticeable to the customers. I am
looking into ways to improve that.

Hi Baldur,

Buy a router with a beefier CPU. It takes a lot of operations to remove the
hundreds of thousands of stale routes from the RIB and completely
recalculate the FIB.

I added a default static route 0.0.0.0 to provider A on router A and did
the same to provider B on router B. This is supposed to be a trick that
allows the network to move packets before everything is fully converged.
Traffic might not leave the most optimal link, but it will be delivered.

No. The router already has the alternate route in its RIB; it will use it
just as soon as the CPU can find time to remove the dead one and
recalculate the FIB. It won't get around to that any faster just because
you also have a default route in the RIB.

You -could- elect not to receive a full routing table -at all- and then tie
default routes to something in the partial table you accept. Fewer routes =
less recalculation. The trade-off is that when the problem is upstream from
your particular link to a service provider, it's less likely that you will
recover from the error -at all-, since your router knows fewer of the
individual routes. This will also hurt your ability to balance the load
between the service providers.
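One way to tie the default to the partial table is a tracked static route,
in IOS-style syntax (the tracked prefix and next hop below are
placeholders; you'd pick a stable prefix you actually learn from that
upstream):

```
! track the presence of a prefix learned from this upstream
track 1 ip route 198.51.100.0 255.255.255.0 reachability
!
! install the static default only while that prefix is reachable
ip route 0.0.0.0 0.0.0.0 192.0.2.1 track 1
```

If the upstream session dies and the tracked prefix disappears, the default
is withdrawn with it, so traffic shifts to the other provider's default.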

Regards,
Bill Herrin

What types of routers are you currently using?

Would be helpful if you let us know what platform you're running on.
Assuming a Cisco, make sure next-hop-tracking not disabled (enabled by
default on modern IOS), then at "BGP Prefix Independent Convergence", so
your BGP process isn't walking the entire RIB to see which next-hops it
needs to change.

Greg Foletta
greg@foletta.org
+61 408 199 630

One thing I notice you don't mention is whether your
BGP sessions to your upstream providers are direct
or multihop eBGP. I know for a while some of the
more bargain-basement providers were doing multihop
eBGP feeds for full tables, which will definitely
slow down convergence if the routers have to wait
for hold timers to expire to flush routes, rather
than being able to detect link-state transitions
directly.
Matt

In that case multihop BFD (if supported on both sides) would really help.

Regards,
Jeff

Hi

The IP transit links are direct links (not multihop). It is my impression
that a link-down event is handled with no significant delay by the router
that has the link. The problem is the other router, the one that has to go
through the first router to reach the link that went down.

The transit links are not unstable and in fact they have never been down
due to a fault. But we are a young network and still frequently have to
change things while we build it out. There have been cases where I have had
to take down the link for various reasons. There seems to be no way to do
this without causing significant disruption to the network.

Our routers are 2015 hardware. The spec has 2M IPv4 + 1M IPv6 routes in FIB
and 10M routes in RIB. Route convergence time is specified as 15k
routes/second. 8 GB RAM on the route engines.

Say transit T1 is connected to router R1 and transit T2 is connected to
router R2.

I believe the underlying problem is that, due to MPLS L3VPN, the next hop
on R2 for routes out through T1 is not the transit provider's router as
usual. Instead it is the loopback IP of R1. This means that when T1 goes
down, the next hop is still valid, so R2 is unable to deactivate the
now-bad routes as a group operation based on an invalid next hop.

I am considering adding a loopback2 interface with a trigger on the transit
interface, such that a shutdown of loopback2 is triggered if the transit
interface goes down, and then forcing the next hop to be loopback2. That
way our IGP will signal that the next hop is gone, and that should
invalidate all the routes as a group operation.
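If the platform is Cisco-like, the interface trigger described above could
be sketched with object tracking plus an EEM applet (interface names are
placeholders, and some platforms can express this more cleanly):

```
track 10 interface TenGigabitEthernet0/0/0 line-protocol
!
event manager applet TRANSIT-DOWN
 event track 10 state down
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface Loopback2"
 action 4.0 cli command "shutdown"
!
event manager applet TRANSIT-UP
 event track 10 state up
 action 1.0 cli command "enable"
 action 2.0 cli command "configure terminal"
 action 3.0 cli command "interface Loopback2"
 action 4.0 cli command "no shutdown"
```

You would then set next-hop-self toward the Loopback2 address for routes
learned from that transit, so the IGP's withdrawal of Loopback2 invalidates
them all in one step.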

Regards,

Baldur

Hi,

Why not simply shut down the BGP session up front (before you take down the link)?

Best regards

Jürgen Jaritsch
Head of Network & Infrastructure

ANEXIA Internetdienstleistungs GmbH

Telefon: +43-5-0556-300
Telefax: +43-5-0556-500

E-Mail: jj@anexia.at
Web: http://www.anexia.at

Anschrift Hauptsitz Klagenfurt: Feldkirchnerstraße 140, 9020 Klagenfurt
Geschäftsführer: Alexander Windbichler
Firmenbuch: FN 289918a | Gerichtsstand: Klagenfurt | UID-Nummer: AT U63216601

Or, better yet, apply a REJECT-ALL type policy
on the neighbor to deny all inbound/outbound
prefixes; that way, you can keep the session
up as long as possible, but gracefully bleed
traffic off ahead of your work.

Matt

A route update via a new policy could be more CPU-intensive than dropping the prefixes via a session shutdown.

Best regards

Jürgen Jaritsch

Of course. But operable routes remain throughout.

However, you don't want to reject the routes; you want to depreference
them, both received and sent. And depreference them on the router whose
link will stay up first, so that it starts sending traffic via its link
before the router you're taking down changes its routes. Once the
preference change moves all routes to the other router, then you drop the
BGP session to deal with the residual routes. Then once traffic drops to
zero on the link, you take down the link.

If your customers are really that sensitive to downtime during a
reasonable off-hours maintenance window.
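A sketch of such a drain, in IOS-style syntax with placeholder values:
lower local preference on what you receive and prepend on what you send,
then soft-clear the session so the policy takes effect:

```
route-map DRAIN-IN permit 10
 set local-preference 50
!
route-map DRAIN-OUT permit 10
 set as-path prepend 65000 65000 65000
!
router bgp 65000
 address-family ipv4 vrf internet
  neighbor 192.0.2.1 route-map DRAIN-IN in
  neighbor 192.0.2.1 route-map DRAIN-OUT out
! then soft-clear the session to apply the policy
! (exact clear syntax varies by platform and address family)
```

Inbound depreference moves your egress traffic to the other provider;
outbound prepending encourages the internet to move its ingress traffic the
same way before you break anything.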

Regards,
Bill Herrin