NAP/ISP Saturation WAS: Re: Exchanges that matter...

From: Jim Van Baalen <vansax@atmnet.net>
I have a question that fits this topic. Why does everybody seem to be so
sold on Gigaswitch-based Xchange points?

A pretty good reason: it worked a lot better than everything else tried!

As a bit of history, the only NAP that worked was the one built on
Ethernet, which seemed a lot faster to most folks than the T1 links
coming into the NAPs. But the implementation was terribly flaky as a MAN
technology.

Then, Ethernet got too congested, and folks moved to FDDI. Then, FDDI
got congested, and GigaSwitches were put in. They were the only thing
available, and they made everything work better.

My guess is that the next step will toss the LAN/MAN model in favor of
exchange GigaRouters directly connected by OC-3c/STM-1 and OC-12c/STM-4
WAN links -- but that's only a guess. It seems to be working. We are
driven by things that actually work.

Based on membership and traffic, it appears that there is still a stigma
associated with Xchanges (PBnap and AADS for example) that have chosen
different architectures. It was also my impression that people were much
more critical of these "other" NAPs at the recent NANOG than of SprintNAP
and the MAEs.

That's because the ATM NAPs were started at the same time as MAE-East,
but didn't work! They all took more than an extra year to get working.

Oh, yeah, they cost a lot of money for some projects that really
couldn't afford it, and it all went down the drain.

Folks have a tendency to be critical of stuff with a history of failure.

In addition, with new line cards due out early next year, the BPXs will
support ABR and, relatively speaking, huge buffers at high-density OC-3
and 2-port OC-12.

Well, since they don't exist, why would anyone bank on deploying them?
Let us know how well they work in a year or two.

Meanwhile, PPP/SONET has been deployed for over 6 months at these
speeds, and we are already getting experience with it. Yep, needs
bigger buffers on those Ciscos, and PMC-Sierra didn't follow the SONET
spec exactly, but ...

    experience is a much better teacher than promises.

WSimpson@UMich.edu
    Key fingerprint = 17 40 5E 67 15 6F 31 26 DD 0D B9 9B 6A 15 2C 32
BSimpson@MorningStar.com
    Key fingerprint = 2E 07 23 03 C5 62 70 D3 59 B1 4F 5E 1D C2 C1 A2

Let me add one more word to this discussion. With a simple and solid
protocol like FDDI or Ethernet, there is not much chance that the whole
switch will be brought down by one crazy interface card or one crazy
router. With ATM, nobody can protect the whole system from being brought
down by some software bug 1 to 5 years after the system was installed,
because the number of different features in an ATM network and an ATM
interface is 10 to 100 times greater than in FDDI. This means: if your
FDDI switch works fine right now, there is a better than 99% chance it
will keep working for the next 2 years (maybe it will get saturated, but
it will not crash totally); with ATM, there is a better than 20% chance
that the whole system will be crashed by some new neighbor with some new
ATM software...
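
To make that arithmetic concrete, here is a rough back-of-envelope
sketch (my own hypothetical model with an assumed per-feature bug rate,
not measured data): if every feature must behave for the system to stay
up, reliability falls off exponentially with feature count, so a 10-100x
jump in features can turn a ~99% survival chance into much worse odds.

    # Back-of-envelope model: the system survives only if every feature
    # is free of crash-inducing bugs over the observation window.
    # P_FEATURE_OK is an assumed number, tuned so that an FDDI-sized
    # feature set lands near the 99% figure claimed above.

    P_FEATURE_OK = 0.9999   # assumed chance one feature has no crash bug

    def survival(feature_count):
        # Independent features: multiply the per-feature probabilities.
        return P_FEATURE_OK ** feature_count

    for label, n in [("FDDI-like", 100),
                     ("ATM-like, 10x features", 1000),
                     ("ATM-like, 100x features", 10000)]:
        print(f"{label:24s} -> {survival(n):5.1%} chance of no total crash")

With those made-up inputs, the FDDI-like case survives about 99% of the
time, while the 100x case survives only about 37% of the time -- a crash
chance well over the 20% claimed above. The point is the shape of the
curve, not the exact numbers.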

It's not my own experience, but one of our partners has been debugging a
simple direct ATM link for about 3 months now -- there are 3 vendors
involved (the ATM provider, the ATM provider's equipment vendor, and
CISCO), and it is not possible to determine why this link sometimes
loses more than 50% of its packets. This is because ATM is too complex a
system...