NSP ... New Information

Paul Ferguson writes...

>>Suppose that IP space were not a problem. When IPv6 comes up all around,
>>it certainly won't be then. You can have and waste all the IP space you
>>could imagine, since we'll be numbering every atom in the known universe.
>
>I continue to hear this used as a 'compelling reason' to
>urge IPv6 migration, when in reality it appears to be nothing
>more than people trying to do an 'end around' the address
>allocation policies by thinking that once v6 is deployed,
>the allocation policies will disappear.

I wasn't trying to compel IPv6 with this. Instead, I was trying to show
that the problem is really not one of IP space.

>Simply increasing the available amount of address space does not in any
>way imply that allocation policies will change significantly. If they did,
>and the number of routes increased significantly, we would have much larger
>problems in the global routing system than we would with people whining
>about not being able to obtain large enough address allocations.

Right. But people see it as such a problem because the routing policies
are IP-space-derived. When people are told they need a /19 to be routable,
they go backwards on solving the IP space problem and resume wasting it
(but hiding the waste to make it look like it's used).
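
To make that concrete, here is a rough sketch (in Python, purely
illustrative; the /19 cutoff and the prefixes are made up, not any
particular provider's actual policy) of what a prefix-length filter
amounts to:

  import ipaddress

  # Illustrative cutoff: accept nothing more specific than a /19.
  # The value is hypothetical; each provider picked its own.
  MAX_PREFIXLEN = 19

  def accept(route):
      """Return True if an announced prefix passes a length-only filter."""
      return ipaddress.ip_network(route).prefixlen <= MAX_PREFIXLEN

  print(accept("192.0.2.0/24"))   # False -- a small site's /24 gets dropped
  print(accept("198.18.0.0/19"))  # True  -- the same site, after inflating to a /19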

When the need to justify space usage arose, some ideas on how to actually
do that came along with it. And I see that working: we were projected to
run totally out of space by now, and since we have not, I assume it worked
pretty well.

But the real problem is routing policies that are encouraging people to go
back to wasting space. When network size is used as the criterion for route
filtering, the smaller guys get screwed, and they see inflating their
networks as the solution. This practice needs to be stopped, or a better
solution needs to come out of it.
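
The perverse incentive is easy to put numbers on. Take a hypothetical shop
that really needs about a thousand addresses (a /22) but asks for a /19
purely to clear the filters:

  needed    = 2 ** (32 - 22)   # a /22: 1,024 addresses, roughly what is used
  allocated = 2 ** (32 - 19)   # a /19: 8,192 addresses, requested to pass filters

  print(allocated - needed)                     # 7168 addresses sitting idle
  print(round(100 * (1 - needed / allocated)))  # ~88 percent of the block wasted

Multiply that by every small shop given the same advice and the savings
from justifying space usage go right back out the window.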

Suppose TCP/IP had been designed from the beginning with 64 bits of flat
address space, divided 32/32. We would not have the space crunch at all,
AND there would be no space "handle" for routing policies to lean on to
screw the little guys. Tell me: what would the big boys with small routers
do in that case today? Even the biggest router has no chance with a billion
routes. Or would we have been forced by now to come up with a new and
better replacement for BGP(4) that does dynamic, intelligent aggregation
or something?
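
For a rough sense of scale (the per-entry cost below is a back-of-the-
envelope assumption, not a measurement): with a flat, non-aggregatable
space, the default-free table carries one entry per attached network, so a
billion networks means something like

  routes          = 10 ** 9   # "a billion routes" from the hypothetical above
  bytes_per_route = 64        # assumed rough cost of one RIB/FIB entry; real costs vary

  gigabytes = routes * bytes_per_route / 2 ** 30
  print(round(gigabytes))     # ~60 GB just to hold the table, before any BGP churn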

>Right. But people see it as such a problem because the routing policies
>are IP-space-derived. When people are told they need a /19 to be routable,
>they go backwards on solving the IP space problem and resume wasting it
>(but hiding the waste to make it look like it's used).

But this is somewhat of a misnomer. It is not an issue of being
'routable' v. 'non-routable', but rather, one of whether you can
be aggregated into a larger prefix. This practice encourages
aggregation -- it is commonly agreed that Aggregation is Good (tm).

The routability issue comes into play when:

o You are specifically referring to routes being propagated by
   a service provider who uses prefix-length filters, AND

o You cannot be aggregated into a large enough advertised CIDR
   block to conform to these types of filters.
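
To illustrate the aggregation point (a rough sketch; the ipaddress calls
are Python, and the provider block and customer prefixes are arbitrary
documentation/benchmark addresses, not real allocations): if your space
comes out of your provider's block, the provider announces one covering
prefix and your more-specifics never have to appear in anyone else's table.

  import ipaddress

  provider_block = ipaddress.ip_network("198.18.0.0/19")   # hypothetical provider CIDR block
  customer_nets  = [ipaddress.ip_network(n) for n in
                    ("198.18.4.0/24", "198.18.5.0/24", "198.18.6.0/23")]

  # Every customer prefix falls inside the provider block, so the single /19
  # announcement covers them all; nothing longer than /19 needs to propagate.
  print(all(n.subnet_of(provider_block) for n in customer_nets))   # True

  # Adjacent blocks also collapse into the shortest covering prefix.
  print(list(ipaddress.collapse_addresses(customer_nets)))
  # [IPv4Network('198.18.4.0/22')] -- three announcements become one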

>When the need to justify space usage arose, some ideas on how to actually
>do that came along with it. And I see that working: we were projected to
>run totally out of space by now, and since we have not, I assume it worked
>pretty well.

BGP4, CIDR, or Die.

>But the real problem is routing policies that are encouraging people to go
>back to wasting space. When network size is used as the criterion for route
>filtering, the smaller guys get screwed, and they see inflating their
>networks as the solution. This practice needs to be stopped, or a better
>solution needs to come out of it.

One might suggest that some of the prefix-length filtering could be
replaced by more aggressive dampening policies.
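
As a crude sketch of what "more aggressive dampening" means (the penalty,
half-life, and thresholds below are invented for illustration; real
implementations tune them, sometimes per prefix length or per peer): a
route that flaps repeatedly accumulates a penalty, gets suppressed above
one threshold, and is only re-advertised after the penalty decays below
another.

  import math

  # Illustrative flap-dampening parameters -- not anyone's production values.
  PENALTY_PER_FLAP   = 1000
  HALF_LIFE_SECONDS  = 900     # accumulated penalty halves every 15 minutes
  SUPPRESS_THRESHOLD = 2000    # stop advertising the route above this
  REUSE_THRESHOLD    = 750     # start advertising again once below this

  def decayed(penalty, seconds):
      """Exponentially decay an accumulated penalty over `seconds`."""
      return penalty * math.exp(-math.log(2) * seconds / HALF_LIFE_SECONDS)

  # Three flaps in quick succession push the route over the suppress threshold...
  penalty = 3 * PENALTY_PER_FLAP
  print(penalty >= SUPPRESS_THRESHOLD)             # True -- suppressed
  # ...and it stays out of everyone's tables until the penalty decays away.
  print(decayed(penalty, 2700) < REUSE_THRESHOLD)  # True -- reusable after ~45 minutes

That way what gets punished is instability, not prefix length.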

- paul