RE: [Q] BGP filtering policies

If you'll look at this pointer to one of ARIN's pages, it lists
the minimum allocation size for each CIDR block that IANA has
given ARIN to manage. From what I've seen, most providers accept
at least up to the prefix length that the RIRs are using, if not
longer.

http://www.arin.net/statistics/index.html#ipv4issued2002

Unfortunately, this doesn't help in your case. My company also
has /14's from the traditional class A space. I know of only one
case in two years where a customer reported a problem arising
from holding a small assignment out of these blocks, which was
ultimately corrected by renumbering the customer, a solution which
does not scale well.

Worst case, however, unless your UUNet connection goes down, you'll
still be able to reach most places via your other transit and peering
(since /24 is the closest thing to a "universal" allowed prefix length)
and will have full reachability via UUNet. IMHO, accepting up to /24
in any of the space listed on the above URL is good service provider
practice.
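
To make that concrete, here's a rough sketch in Python of that
policy (the minimums table below is a made-up excerpt; the
authoritative values are on the ARIN page above): accept anything
between the block's RIR minimum and /24.

    import ipaddress

    # Hypothetical excerpt of per-block minimum allocation sizes; the
    # real numbers are on the ARIN statistics page linked above.
    RIR_MINIMUMS = {
        ipaddress.ip_network("63.0.0.0/8"): 20,
        ipaddress.ip_network("66.0.0.0/8"): 20,
    }

    UNIVERSAL_MAX = 24  # longest prefix most providers will accept

    def accept(prefix_str):
        """Accept a route if it falls inside a managed block and its
        length is between the block's RIR minimum and /24."""
        prefix = ipaddress.ip_network(prefix_str)
        for block, minimum in RIR_MINIMUMS.items():
            if prefix.subnet_of(block):
                return minimum <= prefix.prefixlen <= UNIVERSAL_MAX
        return False  # not in space we know about; filter it

    print(accept("63.100.0.0/22"))  # True: between /20 and /24
    print(accept("63.100.0.0/26"))  # False: longer than /24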

> http://www.arin.net/statistics/index.html#ipv4issued2002

The CIDR section is the part you're referring to, the one which
indicates /20?

> Unfortunately, this doesn't help in your case. My company also
> has /14's from the traditional class A space. I know of only one
> case in two years where a customer reported a problem arising
> from holding a small assignment out of these blocks, which was
> ultimately corrected by renumbering the customer, a solution which
> does not scale well.

I don't exactly anticipate this ever happening. My observation is
that the scaling will happen on the router side: as more and more
small blocks get announced out of the class A/class B space, the
growing ability of routers to hold routes will tend to relax the
typical filtering policies. In other words, by the time we might
encounter a problem, it'll no longer be a problem.

Your comment about renumbering is most apropos; if it's not a problem
for UUNet to assign in swamp space now (i.e. "pre-renumbering"), then
this also disappears as an issue later.

> Worst case, however, unless your UUNet connection goes down, you'll
> [...]

It happens more frequently than you might expect.

<topic mode=rant>

Back when routers had (relatively) small CPUs and (relatively)
small amounts of RAM, I'd say that filtering (and other nice
things such as flap dampening) was introduced to stop these poor
little routers from dying.

But nowadays, routers have lots of CPU and lots of RAM.
Somehow people equate this to "can hold/munge larger routing tables".

Well, that's partly true. You've (practically) taken CPU and memory
off the table, but the speed of light is still the same, and the
routing protocols are still the same - so now what you'll be seeing
is that "stability" is actually a function of your network
characteristics _and_ your router, rather than it mainly being the
router.

Transmitting 100,000 routes still takes time. Even if your time to
parse and store your packet is 0, you'll still at least have the
route fill delay (how long it takes for routing information to travel
from your peer to you) and route propagation delay (how long it takes
for your route to appear all over the internet). Since those aren't
0, they can add up - and no amount of router CPU or router memory is
going to (solely) fix it.
</topic>
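
For a feel of the numbers, here's a back-of-envelope sketch in
Python, where every figure (bytes per route, session throughput, hop
count, per-hop timers) is an assumption for illustration, not a
measurement:

    # Even with zero parse/store cost, moving a full table takes real
    # time. All numbers below are illustrative assumptions.
    ROUTES          = 100_000
    BYTES_PER_ROUTE = 60          # assumed average UPDATE share per route
    LINK_BPS        = 10_000_000  # assumed usable session throughput
    AS_HOPS         = 5           # assumed AS-path depth
    RTT_PER_HOP_S   = 0.07        # assumed 70 ms between speakers
    MRAI_S          = 30          # eBGP advertisement interval (default)

    # Route fill delay: pure serialization from one peer to you.
    fill_delay = ROUTES * BYTES_PER_ROUTE * 8 / LINK_BPS

    # Route propagation delay, crudely modeled as each AS hop waiting
    # up to its advertisement interval before passing the route on.
    propagation = AS_HOPS * (RTT_PER_HOP_S + MRAI_S)

    print(f"route fill delay:  ~{fill_delay:.1f} s")   # ~4.8 s
    print(f"propagation delay: ~{propagation:.0f} s")  # ~150 s worst case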

2c, take with some salt, etc.

adrian

Speak for yourself. I think routers are hideously under-equipped with
CPU and RAM, and that which you can upgrade is still sold by the
vendors at insane prices the like of which you can only find in the
blissful stupor of ignorant customers.

There are two areas which limit the number of routes you can support.

The first is the longest-prefix-match lookup system, which must do
more work on every packet as the table grows. This has largely been
eliminated in "modern" routers through the use of specialized hardware
and/or the use of an mtrie-based FIB (like CEF), which uses a
fixed-size forwarding table and makes all lookups nearly equal in cost
regardless of the number of routes (the only thing that could make
this situation more difficult is more address space, like IPv6).
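
As a toy illustration of the mtrie idea (the fixed 8-bit strides here
are an arbitrary choice, nothing vendor-specific), note that a lookup
costs at most four array indexes no matter how many routes are
installed:

    # Toy multibit trie (mtrie) FIB with fixed 8-8-8-8 strides: every
    # lookup is at most four array indexes, regardless of route count.
    # Simplified: prefix lengths must fall on stride boundaries.
    STRIDES = (8, 8, 8, 8)

    def new_node():
        return {"children": [None] * 256, "nexthop": None}

    root = new_node()

    def insert(prefix, plen, nexthop):
        node, consumed = root, 0
        for stride in STRIDES:
            idx = (prefix >> (32 - consumed - stride)) & 0xFF
            consumed += stride
            if node["children"][idx] is None:
                node["children"][idx] = new_node()
            if consumed >= plen:
                node["children"][idx]["nexthop"] = nexthop
                return
            node = node["children"][idx]

    def lookup(addr):
        node, best, consumed = root, None, 0
        for stride in STRIDES:
            idx = (addr >> (32 - consumed - stride)) & 0xFF
            consumed += stride
            node = node["children"][idx]
            if node is None:
                return best
            if node["nexthop"] is not None:
                best = node["nexthop"]  # longest match seen so far
        return best

    insert(0x3F000000, 8, "peer-A")   # 63.0.0.0/8
    insert(0x3F640000, 16, "peer-B")  # 63.100.0.0/16
    print(lookup(0x3F640001))  # peer-B: the longer match wins
    print(lookup(0x3F010101))  # peer-A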

The second is the routing protocols and the infrastructure necessary
to support them. This is where you start bloating your memory usage
and convergence time, which is DIRECTLY related to, you guessed it,
both the lack of RAM and CPU resources, and the oh-so-crappy code that
the vendors write. This is the area that "filter nazis" (hi Randy)
care about, not because more routes are really harmful to the
internet, but because they impact the memory usage and convergence
times of their networks.
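
A crude back-of-envelope (all sizes here are assumptions for
illustration) of why the RIB side bites first: the RIB holds a path
entry per route per full-feed peer, while the FIB holds one entry per
route:

    # Assumed sizes; real per-entry costs vary by implementation.
    ROUTES          = 120_000   # table size, circa 2002
    FULL_FEED_PEERS = 8         # sessions sending full tables
    BYTES_PER_PATH  = 200       # assumed RIB path entry w/ attributes
    BYTES_PER_FIB   = 40        # assumed FIB entry

    rib_mb = ROUTES * FULL_FEED_PEERS * BYTES_PER_PATH / 2**20
    fib_mb = ROUTES * BYTES_PER_FIB / 2**20
    print(f"RIB: ~{rib_mb:.0f} MB   FIB: ~{fib_mb:.0f} MB")  # RIB dominates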