IPv4 BGP Table Reduction Analysis - Prefixes Filtered by RIRs' Minimum Allocation Boundaries

NANOG Fellows,

I would like to share some work that may be of interest to some of you.

Although the BGP data is around one month old and the original focus was on Brazilian ASes and IP prefixes, the general analysis covers all Regional Internet Registries (RIRs).

This work is part of an IPv4 BGP table reduction analysis, covering the methodology of filtering IP prefixes at the RIRs' minimum allocation boundaries (a well-discussed thread - Jon Lewis, et al.).

The analysis uses BGP data from the University of Oregon Route Views Archive Project (2007-10-23-2000) and estimates possible impacts, considering suboptimal routing and unreachable destinations.

The methodology shows good efficiency (around 40%) in reducing the BGP table size, but the estimated number of affected prefixes is also high (around 30%).

The report covers impact analysis per RIR and IP prefix distribution. It also does some IP prefix accounting and analyses each RIR's contribution to the BGP table size.
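For those who prefer the filter idea in code, here is a minimal sketch in Python, assuming a hypothetical boundary table (real values come from each RIR's published allocation policies; the 62/8 and 212/7 entries mirror the /19 boundary discussed in the report):

```python
# Minimal sketch of the boundary filter, with an ILLUSTRATIVE boundary
# table; real values come from the RIRs' published allocation policies.
import ipaddress

# Hypothetical excerpt: covering block -> longest prefix length accepted
MIN_ALLOC = {
    ipaddress.ip_network("62.0.0.0/8"): 19,   # RIPE block, /19 minimum
    ipaddress.ip_network("212.0.0.0/7"): 19,  # RIPE block, /19 minimum
}

def accepted(prefix: str) -> bool:
    """True if the prefix would survive an RIR-minimum filter."""
    net = ipaddress.ip_network(prefix)
    for block, max_len in MIN_ALLOC.items():
        if net.subnet_of(block):
            return net.prefixlen <= max_len
    return True  # blocks without a known boundary pass untouched

print(accepted("62.40.0.0/19"))   # True: exactly at the boundary
print(accepted("62.40.32.0/20"))  # False: more specific than the minimum
```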

Here is the brief report URL:
http://www.intron.com.br/doc/bgp/gter24-en-summary.eascenco.bgp-table-red.rir-min-alloc.20071026.v-2007112901.html

Any comments or suggestions (list or private) are welcome.

Eduardo Ascenção Reis
<eduardo@intron.com.br>

Eduardo :-)

My best thanks for this really good work; including tests for
prefix "reachability" is something I'd call essential.

I have btw implemented your prefix list filter additions (with the
exception of the RIPE lines; I want to see everything local), and
my table is down to 167K prefixes (from 234K).

Before anyone asks: I do get defaults from my transits ;-)

Yours,
  Elmar.

Measuring traffic following the defaults before and after implementation
of the filters could be interesting...
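A rough sketch of the measurement I have in mind, assuming flow records already reduced to (destination, bytes) pairs; all names and sample values here are hypothetical:

```python
# Sketch: how many bytes would follow the default route, before vs. after
# the filter. Destinations not covered by any prefix left in the table
# would be forwarded via the default.
import ipaddress

def default_bytes(flows, table):
    """Sum bytes whose destination matches no prefix left in the table."""
    nets = [ipaddress.ip_network(p) for p in table]
    total = 0
    for dst, nbytes in flows:
        addr = ipaddress.ip_address(dst)
        if not any(addr in n for n in nets):
            total += nbytes  # would be forwarded via the default route
    return total

flows = [("62.40.35.1", 5000), ("198.51.100.7", 1200)]  # made-up sample
before = default_bytes(flows, ["62.40.32.0/20", "198.51.100.0/24"])
after = default_bytes(flows, ["198.51.100.0/24"])  # /20 filtered away
print(before, after)  # 0 5000: that traffic now follows the default
```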

Best regards,
Daniel

> Although the BGP data is around one month old and the original focus was on Brazilian ASes and IP prefixes, the general analysis covers all Regional Internet Registries (RIRs).

> [...]

> The methodology shows good efficiency (around 40%) in reducing the BGP table size, but the estimated number of affected prefixes is also high (around 30%).

This is an interesting piece of work, and it highlights an interesting model (a 40% table-size saving hurts 30% of traffic).

I have a couple of thoughts:

from the text:

> Although representing less than 1% of all suboptimal and unreachable prefixes, /20 prefixes call attention because their mask size would be expected to be normal. In this experiment all affected /20 prefixes come from two RIPE CIDR blocks (62/8 and 212/7) whose longest allowed prefix is /19; this data could eventually be used by RIPE to review the allocation policy for these blocks. This is only one example of the applications that can be derived from an analysis like this one.

Do you still have the lab setup? Could you work out what happens to the routing table and traffic routing if you permit one deaggregation per RIR prefix? I.e., this /19 is permitted to become two /20s, but it is not permitted to become four /21s. My desire would be to see the resolved routing table look almost as trim as your 40% saving, but with a significant amount of traffic routed as intended by the originating network.
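To make the variant explicit, a small sketch (same hypothetical boundary-table idea as in the sketch above; the values are illustrative):

```python
# Sketch of the "permit one deagg" filter: prefixes up to one bit longer
# than the RIR minimum pass, so a /19 block may show up as two /20s but
# not as four /21s. The boundary value below is illustrative.
import ipaddress

MIN_ALLOC = {ipaddress.ip_network("62.0.0.0/8"): 19}  # RIPE /19 minimum

def accepted_one_deagg(prefix: str) -> bool:
    net = ipaddress.ip_network(prefix)
    for block, max_len in MIN_ALLOC.items():
        if net.subnet_of(block):
            return net.prefixlen <= max_len + 1  # one extra bit allowed
    return True

print(accepted_one_deagg("62.40.32.0/20"))  # True: one deagg permitted
print(accepted_one_deagg("62.40.32.0/21"))  # False: two levels deep
```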

Lastly, perhaps another comment for your recommendations and conclusions section could be that traffic is hurt most in this model for the networks that deaggregate most. Let's encourage people who read this document to infer that aggregating their prefixes would improve their reach in the post-250k routing table world.

Andy

No, it hits 30% of the *routes*. I'll make a truly wild guess and say that
those 30% of routes actually represent only 0.3% of the *traffic* for most
providers, and the *only* people who really care are the ASes doing the
deaggregating...

Eduardo - if you still have the lab setup and netflow/whatever data, is there
any way to tell if any of those 30% of affected routes are in any way "high
traffic" sites?

>>> The methodology shows good efficiency (around 40%) in reducing the
>>> BGP table size, but the estimated number of affected prefixes is
>>> also high (around 30%).

>> This is an interesting piece of work, and it highlights an interesting
>> model (a 40% table-size saving hurts 30% of traffic).

> No, it hits 30% of the *routes*.

Yes, I completely agree, bad terminology indeed.

> I'll make a truly wild guess and say that those 30% of routes actually represent only 0.3% of the *traffic* for most providers, and the *only* people who really care are the ASes doing the deaggregating...

As you nearly point out, it's 100% of the traffic for some networks, and it will still be high for other networks. The only way to feel confident that traffic is unaffected by routing-table compression is to aggregate sensibly. This is where my "permit one deagg" question came from.

Andy

What positive effect do you hope to get from allowing one level of deagg beyond RIR minimums? The route-table fat that's trimmed away by imposing an RIR-minimums filter basically falls into three categories:

1) gratuitous deaggs done by networks devoid of clue. They see their CIDRs as collections of "class C's" and announce them largely as such, with no covering aggregates.

2) traffic engineering deaggs. One would hope that a network with enough clue to be doing this would announce both the deagg and the covering RIR-assigned CIDR.

3) PA-space multihomers announcing small CIDRs (/24, /23, etc.).

I don't see how allowing one additional deagg beyond RIR minimums will help in any of these cases. Case 1 is hopeless. Case 2 doesn't generally need any help. Case 3, unfortunately, is unlikely to see much benefit either, because their CIDRs are so small relative to RIR minimums.
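For anyone who wants to test that claim against a RIB dump, a rough sketch of the classification (the table and ASNs below are made up; case 3 can't be told apart from case 1 without assignment data):

```python
# Sketch: a more-specific whose covering aggregate is announced by the
# same origin AS looks like traffic engineering (case 2); one with no
# covering aggregate at all looks like a gratuitous deagg (case 1).
import ipaddress

def classify(deagg, origin_as, table):
    """table maps prefix string -> origin AS. Returns a rough label."""
    net = ipaddress.ip_network(deagg)
    for p, asn in table.items():
        cover = ipaddress.ip_network(p)
        if net != cover and net.subnet_of(cover):
            return "case 2 (TE)" if asn == origin_as else "covered by other AS"
    return "case 1 (no covering aggregate)"

table = {"62.40.0.0/19": 65001, "62.40.32.0/20": 65001, "203.0.113.0/24": 65002}
print(classify("62.40.32.0/20", 65001, table))   # case 2 (TE)
print(classify("203.0.113.0/24", 65002, table))  # case 1
```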

As someone else suggested (I can't remember if it was in the earlier thread or in private email), another filter that might be interesting to "run the numbers on" would be one that, instead of using RIR minimums to build the filter, looked at actual RIR assignments. That would obviously be a much larger filter and might pose issues for either CPU load or config size.
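Building that filter is mostly a parsing exercise over the delegation statistics files the RIRs publish (e.g. delegated-ripencc-latest). A sketch that glosses over blocks whose sizes aren't powers of two:

```python
# Sketch: turn RIR delegation-statistics lines of the documented form
# "rir|cc|ipv4|start|count|date|status" into a permit list, one prefix
# per delegated block (power-of-two sizes only).
import math

def delegations_to_prefixes(lines):
    """Yield one prefix per delegated/assigned IPv4 block."""
    for line in lines:
        f = line.strip().split("|")
        if len(f) >= 7 and f[2] == "ipv4" and f[6] in ("allocated", "assigned"):
            start, count = f[3], int(f[4])
            yield "%s/%d" % (start, 32 - int(math.log2(count)))

sample = ["ripencc|FR|ipv4|62.4.0.0|65536|19971114|allocated"]
print(list(delegations_to_prefixes(sample)))  # ['62.4.0.0/16']
```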

Andy and fellows,

On Sun, 2 Dec 2007 09:59:19 -0500, Andy Davidson <andy@nosignal.org> wrote:

> Do you still have the lab setup? Could you work out what happens to
> the routing table and traffic routing if you permit one deaggregation
> per RIR prefix? I.e., this /19 is permitted to become two /20s, but
> it is not permitted to become four /21s. My desire would be to see
> the resolved routing table look almost as trim as your 40% saving,
> but with a significant amount of traffic routed as intended by the
> originating network.

I am travelling now (IETF meeting), but I think that when I come back the lab can easily be set up again by reloading the same BGP data, or data from a different date.

What you propose is basically a filter change in the setup. The only work needed for that is to edit the prefix list and run the analysis again. Maybe we can discuss it a bit more before running the test.

> Lastly, perhaps another comment for your recommendations and
> conclusions section could be that traffic is hurt most in this model
> for the networks that deaggregate most. Let's encourage people who
> read this document to infer that aggregating their prefixes would
> improve their reach in the post-250k routing table world.

I agree with you.

Thanks for your feedback.

Regards,

Eduardo Ascenção Reis
<eduardo@intron.com.br>

Hi Valdis and fellows,

On Sun, 02 Dec 2007 15:19:14 -0500, Valdis.Kletnieks@vt.edu wrote:

> Eduardo - if you still have the lab setup and netflow/whatever data,
> is there any way to tell if any of those 30% of affected routes are
> in any way "high traffic" sites?

This is a great suggestion.

Does any of you know of a public flow database (similar to the Oregon archive) that could be used for that? It would be nice to analyse each estimated affected prefix against real traffic data (NetFlow aggregated per prefix).
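The accounting step itself would be simple once such data exists; a sketch, assuming flow records already reduced to (destination IP, bytes) pairs:

```python
# Sketch: aggregate observed bytes per affected prefix and rank them,
# to spot which of the estimated affected prefixes are "high traffic".
import ipaddress
from collections import defaultdict

def bytes_per_prefix(flows, affected):
    """Rank the estimated affected prefixes by observed traffic volume."""
    nets = [ipaddress.ip_network(p) for p in affected]
    totals = defaultdict(int)
    for dst, nbytes in flows:
        addr = ipaddress.ip_address(dst)
        for n in nets:
            if addr in n:
                totals[str(n)] += nbytes
                break
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

flows = [("62.40.35.1", 9000), ("62.40.35.2", 100)]  # made-up sample
print(bytes_per_prefix(flows, ["62.40.32.0/20"]))    # [('62.40.32.0/20', 9100)]
```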

I only have access to NetFlow data from one AS in Brazil, which is surely not representative of other ASes outside Brazil.

Any ideas?

Eduardo Ascenção Reis
<eduardo@intron.com.br>