What is the limit? (was RE: multi-homing fixes)

Leo -

  Draw two curves: the first y = x/2, the second y = x^2.
Shift the first curve left by 2, 5 or 10 (so it reaches y = 1 at a
smaller x) and it will still be surpassed by the second curve.
You will even see this for a second curve of y = x*2, or even y = x.
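
  A minimal python sketch of the same point, purely illustrative; the
shift just hands the slower curve a head start:

    # purely illustrative: giving the slower curve a head start (a left
    # shift) does not change which curve wins in the long run
    def linear(x, shift=0):
        return (x + shift) / 2.0      # y = x/2, shifted left by `shift`

    def quadratic(x):
        return x ** 2                 # y = x^2

    for shift in (2, 5, 10):
        x = 1
        while quadratic(x) <= linear(x, shift):
            x += 1
        print("shift %2d: quadratic overtakes at x = %d" % (shift, x))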

  The global routing table size HAS grown exponentially
in the past. Rationalize it any way you want, blame whatever
you like, but there is no known way to construct a router that
can handle that kind of growth for anything but the short term,
and the growth rate of the components that go into routers is
simply not going to become superlinear over the long term.

  A 10x system performance boost today just moves the x point for
y=1 of the fundamental curve claimed by Moore's Law to the left
a few notches. Or are you claiming that routing equipment
will have a fundamentally different, and larger, growth curve
than other computing systems? (Actually, I think there is a basis
for claiming a _shallower_ growth curve for routing equipment.)

  In short: are you claiming that the ceteris paribus assumption
in comparing Moore's Law to global routing table size is clearly false?
It would be nice to see even a partial proof of such a claim.

From anyone.

  Sean. (today's insult-free posting)

Ah, but exponential growth can't happen forever, and we can build
a system to handle the largest possible Internet (with v4, anyway).

If you had a router that could handle 2^32 prefixes, it would handle
the IPv4 Internet. Forever. The whole growth curve argument is
gone.
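
A rough sketch of what a 2^32-entry table means in memory terms; the
64 bytes per entry is an assumption for illustration, not a real FIB
entry size:

    # back-of-the-envelope: memory for a table holding 2^32 prefixes
    prefixes = 2 ** 32
    bytes_per_entry = 64          # assumed figure, not a measured one
    total_gb = prefixes * bytes_per_entry / 2.0 ** 30
    print("%d prefixes at %d bytes each: ~%.0f GB"
          % (prefixes, bytes_per_entry, total_gb))

A few hundred GB of state, in other words: big, but a fixed, known
bound.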

The global routing table cannot grow exponentially forever. There
are upper bounds all around, including but not limited to the number
of addresses. Over time the growth curve must change to be linear,
and then logarithmic.

For reference, there are approximately 10^80 electrons in the
universe (per several physics sources I found on the net). At
doubling every year that gives us an absolute upper bound of 265
years, if every route could be stored in a single electron. Figuring
we can probably only do one per atom, and averaging 4 electrons per
atom (is that high or low?), that only shaves a couple of years off:
call it 263 years. We're 30 years into this IP thing, roughly, so
we're only about a tenth of the way there.
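
The arithmetic, for anyone who wants to check it (plain python):

    from math import log

    # doubling every year from a single route: years until we run out
    # of electrons (or atoms) to store the routes in
    electrons = 1e80
    atoms = electrons / 4         # assuming ~4 electrons per atom, as above
    print("one route per electron: %.1f years" % log(electrons, 2))
    print("one route per atom:     %.1f years" % log(atoms, 2))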

Not to minimize the short term issue, but to hand-wave and say
"it's exponential and we'll never get ahead of it" is crap. It
won't stay exponential forever, so let's get ahead of it.

  Draw two curves: the first y = x/2, the second y = x^2.
Shift the first curve left by 2, 5 or 10 (so it reaches y = 1 at a
smaller x) and it will still be surpassed by the second curve.
You will even see this for a second curve of y = x*2, or even y = x.

[deleted]

  In short: are you claiming that the ceteris paribus assumption
in comparing Moore's Law to global routing table size is clearly false?
It would be nice to see even a partial proof of such a claim.
From anyone.

sorry to get pedantic here, but i'd be happy to.

when there is a fixed, finite, upper bound on the curve's growth
(because, as you well know, there is a fixed, finite, upper bound
on the number of prefixes that could be announced [say, in ipv4]),
it may assume exponential behavior at the beginning of its growth,
but it won't continue to be exponential until it reaches its
maximum. what happens is that there will be an inflection point,
and a tailing off of the approach to the limit point. which is
quite easy to get ahead of technologically.
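
a toy version of that curve, just to show the shape (every number in
it is made up):

    from math import exp

    # toy logistic curve: looks exponential early on, then hits an
    # inflection point and tails off toward a hard cap
    cap = 1e6            # hypothetical maximum number of prefixes
    start = 1000.0       # hypothetical starting table size
    rate = 0.5           # early-phase growth rate, per year

    def table_size(year):
        return cap / (1.0 + (cap / start - 1.0) * exp(-rate * year))

    for year in range(0, 41, 5):
        print("year %2d: %8.0f prefixes" % (year, table_size(year)))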

the difference with moore's law is that the fixed, finite, upper
bound on the route table curve's growth is already technologically
FEASIBLE to handle, in its ENTIRETY.

so your example functions above just don't cut it. there is no
infinite amount of prefix space that we need to worry about. it's
very finite, and currently (ipv4) not even terribly large (if people
allowed even /24's instead of /19's (say), and EVERYBODY split ALL of
their address space down to /24 announcements, we'd still only have on
the order of 2^24 ~ 1.7*10^7 prefixes, which is quite reasonable).

i mean, is anyone really trying to argue that it's difficult
computationally to update 10^7 entries at the rate that BGP
updates occur?
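
for a rough feel, and obviously not a real bgp implementation, here's
a crude python test that keeps one counter per possible /24 and then
hammers a million of them with updates:

    import time

    table = [0] * (2 ** 24)          # one slot per possible /24, ~1.7e7 entries

    start = time.time()
    for p in range(0, 2 ** 24, 16):  # touch ~1e6 of them
        table[p] += 1
    print("~1M updates in %.2f seconds" % (time.time() - start))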

arguments along the lines of, "nobody should do anything until we
can guarantee that we can handle multihoming every host on the net"
are really just inappropriate rationales for enforcing restrictive
filtering policies.

i'll say it again: a /24 content provider might need to multihome
for good reachability to all of its clients, whereas a /16 provider
might need to multihome for reachability to remote locations (along
with reliability), and the /24 might very likely be attached to a
much larger pipe than the /16.

prefix length != need for multihoming. so filtering on it is a
pretty ham-handed way to keep prefix table size down.
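
to make the ham-handedness concrete, here's a toy filter that drops
anything longer than a /19 no matter why it was announced (the
prefixes and descriptions are made up):

    # toy length-based filter: keep /19 and shorter, drop the rest
    announcements = [
        ("192.0.2.0/24",  "small content provider, multihomed, fat pipe"),
        ("198.51.0.0/16", "large provider, multihomed for remote sites"),
    ]

    def accepted(prefix, max_len=19):
        return int(prefix.split("/")[1]) <= max_len

    for prefix, who in announcements:
        verdict = "keep" if accepted(prefix) else "drop"
        print("%-15s %-5s %s" % (prefix, verdict, who))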

s.