Exchanges that matter...

> HDLC framing bytes = 3080633605 HDLC efficiency = 97.72
> ATM framing bytes = 3644304857 ATM efficiency = 82.61
> ATM w/snap framing bytes = 3862101043 ATM w/snap efficiency = 77.95
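For what it's worth, the cell tax behind those numbers is easy to reproduce for a single packet. The sketch below is a back-of-the-envelope illustration, not the tool that produced the figures above; the 7-byte HDLC overhead and the 40-byte packet size are assumptions chosen for illustration:

```python
import math

def hdlc_bytes(ip_len, overhead=7):
    # Rough HDLC/PPP framing: flag, address, control, protocol, FCS.
    # The exact overhead depends on the framing options in use.
    return ip_len + overhead

def atm_bytes(ip_len, snap=False):
    # AAL5 adds an 8-byte trailer; LLC/SNAP adds 8 more bytes of header.
    # The PDU is then padded out to a whole number of 48-byte cell payloads.
    payload = ip_len + 8 + (8 if snap else 0)
    cells = math.ceil(payload / 48)   # 48 data bytes per 53-byte cell
    return cells * 53

ip_len = 40                           # e.g. a bare TCP ACK
for name, wire in [("HDLC", hdlc_bytes(ip_len)),
                   ("ATM", atm_bytes(ip_len)),
                   ("ATM w/SNAP", atm_bytes(ip_len, snap=True))]:
    print(f"{name}: {wire} bytes on the wire, "
          f"{100.0 * ip_len / wire:.2f}% efficient")
```

Small packets get hit hardest: a 40-byte ACK plus the SNAP header spills into a second cell, roughly halving efficiency.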

At a certain point, some of these arguments about ATM efficiency sound a bit
like saying FDDI is terrible because 4B/5B encoding is only 80% efficient.

I think a more interesting measure of the value of ATM versus other
wide-area technologies is some sort of measure of throughput per dollar.

The issue that everyone seems to be overlooking is network design
flexibility. Even if ATM has higher overhead, you can only grow a FDDI
network so much. It really isn't all that hard to max out most FDDI
switches.

Name one FDDI NAP that is maxed out.

Nathan Stratton CEO, NetRail, Inc. Tracking the future today!

You must be kidding. Have you not noticed the 20% packet loss across
the bridge between the two Gigaswitches at MAE-East? Sure, the
individual Gigaswitches themselves might not be maxed out, but if the
bridge is saturated then they may as well be.

Alec

Yes, MFS needs to do this differently.

Nathan Stratton CEO, NetRail, Inc. Tracking the future today!

I've recently been on a hunt - I'm currently on a contract in the Bay
Area, but I am still doing some sysadmin for an ISP back in Montana.

The Montana ISP is Sprint-connected. For my local connection in Hayward,
CA, I signed up with PacBell, figuring that it would do until I found
another provider. After two months I had had enough and went on a quest
for another provider.

What I discovered is that if I did an extended ping to any provider in my
local calling area, I had at least 5% packet loss (the provider I am
using right now was at 5%). PacBell was around 50%. Others varied. By
contrast, I can ping any Sprint-customer-attached computer and have
almost 0% packet loss (1 out of 1,000 lost occasionally). Unfortunately I
couldn't find a local provider which was Sprint-connected. (An aside:
anyone who knows a local provider in Hayward which is Sprint-connected is
more than welcome to contact me directly. :)

From traceroutes and additional extended pings, I could tell that some of
the loss was at exchange points; other times the loss was in internal
networks.

Is there a reason that I'm getting 5-50% loss outside of Sprint? I've
also played a bit with this from a couple of other providers, with similar
results.

As much as I hate to say anything nice about Sprint, I'd have to say that
they're very good about not losing packets, at least when their BGP
hasn't imploded.

-forrestc@imach.com

By
contrast, I can ping any Sprint-customer-attached computer and have
almost 0% packet loss (1 out of 1,000 lost occasionally).

Providers tend to have better connectivity within their own network.

Is there a reason that I'm getting 5-50% loss outside of Sprint? I've
also played a bit with this from a couple of other providers, with similar
results.

You paint with a pretty wide brush.

The loss is caused by at least three things:

* ICMP packets are dropped by busy routers

Many routers drop ICMP packets (ping, traceroute) when busy, or alternate
between answering and dropping them. I know this behavior occurs when the
packets are directed to the router itself; I am not sure if it ever occurs
for packets passing through. The old standby, ping, needs a more reliable
replacement for testing end-to-end packet loss.
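To illustrate why that matters for measurement: here is a toy simulation (not real router behavior) of a router that forwards transit packets in its fast path but punts packets addressed to itself to a busy CPU. Pinging the router then measures CPU load more than path loss; the 30% "busy" figure is an arbitrary assumption:

```python
import random

def probe(n, through, cpu_busy, rng):
    # Simulate n ICMP echo probes. Transit packets ride the fast
    # forwarding path; packets addressed *to* the router compete
    # for the CPU and are dropped with probability cpu_busy.
    delivered = 0
    for _ in range(n):
        if through:
            delivered += 1            # fast path: no ICMP penalty
        elif rng.random() > cpu_busy: # punted to CPU; dropped when busy
            delivered += 1
    return 1 - delivered / n

rng = random.Random(1)
print("loss pinging the router itself:", probe(1000, False, 0.3, rng))
print("loss pinging through the router:", probe(1000, True, 0.3, rng))
```

The "through" loss is zero by construction, while the to-the-router loss tracks the assumed CPU load - two very different quantities that a single ping can't distinguish.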

* Pipe smaller than needed

Some providers have pipes smaller than they "need" going to the NAPs.
For a small provider this may be a 10-megabit connection. For Sprint, even
a 100-megabit connection may not be enough (they may need multiple
connections). With the Internet continuing to grow, you can expect
periodic growing pains for specific providers that don't forecast far
enough into the future.

* Head of queue blocking in the Gigaswitch

This primarily affects the traffic of specific providers (those with enough
traffic to fill up a 100-megabit pipe). This phenomenon occurs when your NAP
connection tries to talk to another provider's filled-up NAP connection.
Even though the Gigaswitch has input and output queues, your output queue
will block until the other provider's input queue is free. While you are
blocked, you drop packets destined for other, potentially available connections.

However, this usually isn't a problem because most providers don't happen
to peer with Sprint (purely an example). In other words, if you don't
happen to exchange traffic with the overloaded party you won't see the
head of queue problem occur for your packets.

This problem can be fixed by multiple connections to the same NAP (which
many providers already have).

To summarize, the ability of a provider to get packets to and from other
providers is directly dependent on how much money they are willing to
spend to do that. By necessity they improve their internal network first.

Mike.

+------------------- H U R R I C A N E - E L E C T R I C -------------------+

[stuff cut]

The loss is caused by atleast three things:

* ICMP packets are dropped by busy routers

Many routers drop ICMP packets (ping, traceroute) when busy, or alternate
between answering and dropping them. I know this behavior occurs when the
packets are directed to the router itself; I am not sure if it ever occurs
for packets passing through. The old standby, ping, needs a more reliable
replacement for testing end-to-end packet loss.

In general the router isn't going to treat one protocol (i.e., the
protocols running over IP: TCP, UDP, ICMP) differently when the packets
are passing through the router - it just looks at the header and forwards.
Ciscos do handle pings for which the router itself is the destination at a
lower priority than packets going through the box. I'll leave the
discussion as to whether ping is adequate or not for another time....

[more stuff cut]

          dave

* ICMP packets are dropped by busy routers

   Many routers drop ICMP packets (ping, traceroute) when busy, or alternate
   between answering and dropping them. I know this behavior occurs when the
   packets are directed to the router itself; I am not sure if it ever occurs
   for packets passing through. The old standby, ping, needs a more reliable
   replacement for testing end-to-end packet loss.

There seems to be a great deal of (understandable) confusion on this
issue. Let's set it straight:

Packets which are _successfully_ forwarded through a (high end) cisco
router are not (by default) prioritized by protocol type. Packets which
are not forwarded require more work and are effectively rate limited (and
consume large amounts of CPU time). Some effects:

- Pinging a cisco is not a valid measure of packet loss. It's closer to a
CPU load measure than anything else.

- Pinging _thru_ a cisco is reasonable.

- Traceroute to a cisco is rate limited to one reply per second, so will
almost always miss the middle reply.

- Traceroute _thru_ a cisco may show many drops which would NOT be seen by
normal "thru" traffic. Replies generated by the cisco when the TTL expires
are again thru the CPU. So you may well traceroute thru a cisco which does
not reply at all. However, you can clearly see the route after that router.
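That rate limit is easy to model. A toy sketch, assuming a one-reply-per-second limiter and traceroute's usual pattern of firing the next probe as soon as the previous one is answered or times out (the timestamps are illustrative):

```python
def probe_replies(send_times, min_interval=1.0):
    # A router that rate-limits ICMP errors: it answers a probe only
    # if at least min_interval seconds have passed since its last reply.
    last = None
    answered = []
    for t in send_times:
        if last is None or t - last >= min_interval:
            answered.append(True)
            last = t
        else:
            answered.append(False)
    return answered

# Probe 1 is answered instantly; traceroute fires probe 2 right away,
# inside the 1-second window, so it is dropped; by the time probe 3
# goes out (after probe 2's timeout) the limiter has reset.
print(probe_replies([0.0, 0.05, 5.05]))  # [True, False, True]
```

Hence the characteristic "reply, star, reply" pattern on the hop itself, even when forwarded traffic through that router is lossless.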

   * Head of queue blocking in the Gigaswitch

   Even though the Gigaswitch has input and output queues, your output queue
   will block until the other providers input queue is free.

My (admittedly second hand) understanding is that the Gigaswitch/FDDI
actually has minimal amounts of buffering. During a congestion event, it
simply withholds the token, resulting in buffering in the routers. Queues
there eventually overflow, and ...

If this is incorrect, I would greatly appreciate pointers to the truth.

Tony

Interesting article on the matter in Communications International - Nov
25, 1996 entitled ISPs Divided Over Hub Bottlenecks by Ken Hart.

Hank Nussbacher