links on the blink (fwd)

Hans-Werner,

> router). Historically that was often a router problem, as they were too
> slow to deal with the onslaught of packets at a plain
> packets-per-second rate (remember, in 1987 the NSF solicitation asked
> for a then whopping 1000 packets per second per router, which was just
> barely achievable then). Today you can buy technology off the shelf
> that does not have a pps problem for typical situations. So what is the
> problem, if it is not the router interconnection or the router
> technology? The answer is bad network engineering, little consideration
> for architectural requirements, and lack of understanding of the
> Internet workload profile. Intra-NSP, perhaps even more among NSPs. Or,
> in other words, it is people that kill the network, not the routers or
> phone lines, particularly people who are trying to make money off it,
> probably using their unique optimization function focused on profit
> and limiting expenses as much as they can, not yet understanding
> fate sharing.

I disagree with you about the adequacy of routers you can buy off the
shelf, and in fact would reach an exactly opposite conclusion. I think
we are reaching the end of the ability to support the core of the U.S.
Internet (once the NSFnet, now the collection of high-end NSPs) with
routers you can obtain now in the fashion to which we've become
accustomed. In fact I think we're fast approaching the state of
the 56kbps network just before the deployment of the factor-of-8
bandwidth increment in trunk bandwidth that the IBM RT network
provided, only at a bandwidth level 2.5 orders of magnitude higher,
and I think the sagging at the center of the Internet is taking the
edges with it due to the lack of push-me-pull-you incentive to keep
the edges growing.

I am old enough to be able to assemble the following timeline for upgrades
of the U.S. Internet core trunk bandwidth over the last 10 years, along with
the corresponding increase in local interconnect bandwidth. Feel free
to correct the dates, but I don't think I'm too far off.

  1986  1987  1988  1989  1990  1991  1992  1993  1994  1995  1996
---|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|---

 Backbone trunk:     56kbps -x8-> 1/3 T1 -x3.3-> full T1 -x14-> 1/2 T3 -x2-> full T3 -> ?  (^ now)
 Local interconnect: Ethernet ----------x10----------> FDDI -> ?

Given that it is now almost 1996, and the growth rate of the Internet
has shown no signs of having slowed, what would you extrapolate we should
have been working on deploying about now? My best guess would be that we'd
be due for another big increment, say a factor of 12 or so, both in backbone
and in interconnect technology, to take us another couple of years, had
we been following the historical rollout of new technology. Yet not only
can you not buy OC-12 routers off the shelf, or anywhere else, you can't
even buy honest OC-3 routers at this point (I will avoid progressing into
a rant on how the bizillions invested in ATM development, which has produced
very little of practical use so far, might have been better spent...).

And I would suggest that if you were, say, a big phone company, and you
actually understood, in your own inimitable big phone company way, that
percentage packet loss rates in your infrastructure with anything other
than zeros to the left of the decimal point were unacceptable, and you
were willing, at least for now, to do whatever you could to build, maintain
and grow a high quality Internet infrastructure even if you hadn't yet
figured out how to make a profit from it, you would still find meeting
your traffic growth projections with even the most creative arrangements
of 5-slot T3 routers and whatever else you could buy to help them along
to be a bleak prospect. There comes a point where you just run out of
router bandwidth, and nothing but more router bandwidth is going to fix
it, but the bigger bandwidth boxes are nowhere to be found.

So we've got routing problems front and center, here and there, with
bandwidth problems creeping up behind. We've got some companies with
relatively deep pockets, or which are flush with IPO money, which would
very probably spend to fix it if they could, if only to avoid being
featured on the 10 o'clock news when disasters occur, except there doesn't
seem to be anything to spend the money on which is clearly going to fix
anything. I don't think this is a happy state to be in, in fact it sucks,
but I don't think it is correct to attribute this state to counter-productive
profit motives. I think we're victims of having our own success creep up
to and pass the technology when we weren't paying close enough attention,
and the only thing left to do seems to be to try to play catch-up from
a position of increasing disadvantage.

Dennis Ferguson

> to be a bleak prospect. There comes a point where you just run out of
> router bandwidth, and nothing but more router bandwidth is going to fix
> it, but the bigger bandwidth boxes are nowhere to be found.

Are you sure that creative ways of using lots of smaller T3 bandwidth
boxes couldn't solve the problem?

If we assume that bandwidth on the lines is not a problem (no shortages)
and that T3 routers with smaller routing tables could make effective use
of the bandwidth, then is it possible to do the following?

In Hypothetica, PA there are ABC ISP, which has a T1 to Sprint, and XYZ ISP,
which has a line to MCI. Both have so-called portable addresses from the swamp and
thus consume space in the core routing tables. This means that traffic
from ABC to XYZ travels from Hypothetica to Pennsauken, thence to MCI and
back to Hypothetica. However, suppose we clean up the swamp by simply
removing it entirely from all the core routing tables. What then? Every
provider puts a default route in each core router. This default route
points to a special router whose job is to just deal with the swamp
routes and nothing else. In effect we are partitioning the routing tables
in two. Under this regimen packets from ABC to XYZ travel to Pennsauken,
then follow the default to Fort Worth and thence to Chicago where the
swamp router lives. The swamp router uses a separate continental backbone
to route the traffic back to Fort Worth, back to Pennsauken and thence to
MCI where the traffic takes a similar circuitous route before reaching
Hypothetica.

Seems terribly wasteful of bandwidth, doesn't it? But if something like
this can help prevent routers from flapping and if bandwidth is
available, perhaps it could work. If the parallel lines carrying "swamp"
traffic are of lower bandwidth than the main lines and suffer congestion,
then I suppose ABC could simply renumber to be within Sprint's aggregate
and be back on the mainline.
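
For what it's worth, here is a minimal sketch of the lookup behaviour being
proposed (Python; the prefixes, aggregates and router names are invented for
illustration and are not from either post): core routers hold only the big
provider aggregates plus a default pointing at the dedicated swamp router,
so a swamp destination falls through to the detour while a customer that has
renumbered into Sprint's aggregate is matched directly in the core.

    # A minimal sketch of the proposed two-way routing-table partition.
    # Prefixes, router names and topology below are invented for illustration.
    from ipaddress import ip_address, ip_network

    # Core routers carry only the big provider aggregates plus a default
    # route that points at the dedicated "swamp" router.
    CORE_TABLE = {
        ip_network("0.0.0.0/0"):     "swamp-router (Chicago)",  # default
        ip_network("204.70.0.0/15"): "MCI backbone",            # hypothetical aggregate
        ip_network("208.0.0.0/11"):  "Sprint backbone",         # hypothetical aggregate
    }

    # The swamp router, and only the swamp router, carries the long tail of
    # portable ("swamp") prefixes over its own parallel backbone.
    SWAMP_TABLE = {
        ip_network("192.77.88.0/24"): "XYZ ISP via MCI (swamp backbone)",
    }

    def lookup(table, dst):
        """Longest-prefix match of dst against a routing table."""
        matches = [net for net in table if dst in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return best, table[best]

    def route(dst_str):
        dst = ip_address(dst_str)
        net, nexthop = lookup(CORE_TABLE, dst)
        if net.prefixlen == 0:                  # fell through to the default
            swamp_net, swamp_hop = lookup(SWAMP_TABLE, dst)
            return f"{dst}: core default -> {nexthop} -> {swamp_hop}"
        return f"{dst}: core aggregate {net} -> {nexthop}"

    # XYZ still uses a swamp prefix: the packet detours via the swamp router.
    print(route("192.77.88.10"))
    # ABC after renumbering into Sprint's aggregate: matched directly in the core.
    print(route("208.1.2.3"))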

In fact, if this really is a viable technical solution, perhaps the
threat of deployment would cause a rush of renumbering and make it easier
for NSPs to just say no to swamp addresses.

> seem to be anything to spend the money on which is clearly going to fix
> anything. I don't think this is a happy state to be in, in fact it sucks,

If you are right, then yes it sucks. Obviously the ATM and OC3
technologies are right where you have pegged them, but what about
parallelism using existing DS3 technology? And if this is done, are there
mux/demux boxes that can handle DS3s <-> OC3?
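
As a rough sanity check on the arithmetic (my numbers, not Michael's): a DS3
runs at 44.736 Mbps and an OC-3 at 155.52 Mbps, and a channelized OC-3 is
three STS-1s, each of which can carry one DS3, so DS3 <-> OC-3 muxing is at
least dimensionally sensible. The sketch below just works out how many
parallel DS3s the SONET rates correspond to:

    # Back-of-the-envelope arithmetic for "parallelism using existing DS3
    # technology": how many DS3s does it take to fill various SONET rates?
    # Standard line rates; framing overhead and inverse-muxing loss ignored.

    DS3  = 44.736    # Mbps
    OC3  = 155.52    # Mbps (3 x STS-1, each of which can carry one DS3)
    OC12 = 622.08    # Mbps

    for name, rate in [("OC-3", OC3), ("OC-12", OC12)]:
        print(f"{name}: {rate:7.2f} Mbps ~= {rate / DS3:.1f} parallel DS3s")

    # OC-3  ~=  3.5 DS3s (a channelized OC-3 mux carries exactly 3)
    # OC-12 ~= 13.9 DS3s -- which is why a 5-slot T3 router looks bleak
    # against the ~x12-x14 increment the historical curve calls for.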

> profit motives. I think we're victims of having our own success creep up
> to and pass the technology when we weren't paying close enough attention,
> and the only thing left to do seems to be to try to play catch-up from
> a position of increasing disadvantage.

One nice side effect is that this may force the video-on-demand folks off
the Internet and into straight ATM instead. I rather like the future
scenario where the globe is girdled by an IPng data network and a separate
parallel video/ATM network.

Michael Dillon Voice: +1-604-546-8022
Memra Software Inc. Fax: +1-604-542-4130
http://www.memra.com E-mail: michael@memra.com