Bob Metcalfe wrote:

Perhaps I am confusing terms here. How can it be a fact that
"store-and-forward delays are a mere fraction of wire propagation delays?"
I don't think so. Check me on this:

Packets travel over wires at large fractions of the speed of light, but
then sadly at each hop they must be received, checked, routed, and then
queued for forwarding. Do I have that right?

Not "checked". Nobody computes checksums in gateways for transit packets.
Updating the hop count doesn't require recalculating the IP checksum; it can
be done incrementally (there was an RFC to that effect, don't remember which).
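The incremental-update trick is one's-complement arithmetic on the one changed 16-bit word. A rough sketch (the header values below are just an illustration, not from this thread):

```python
def ones_add(a, b):
    # 16-bit one's-complement addition with end-around carry
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def incr_cksum(old_cksum, old_word, new_word):
    # Incremental-update identity: HC' = ~(~HC + ~m + m')
    s = ones_add(~old_cksum & 0xFFFF, ~old_word & 0xFFFF)
    return ~ones_add(s, new_word & 0xFFFF) & 0xFFFF

# Example: decrementing TTL 64 -> 63 changes the TTL/protocol word
# 0x4006 -> 0x3F06; given an old header checksum of 0xB1E6, the
# incrementally updated checksum is 0xB2E6, the same value a full
# recomputation over the modified header would produce.
```

So the gateway touches two 16-bit words per packet instead of summing the whole header again.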

Forget checking, routing, and queueing (ha!), and you get, I think, that
store-and-forward delay is roughly proportional to the number of hops times
packet length divided by circuit speed (N*P/C).
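Plugging concrete (illustrative) numbers into that N*P/C estimate:

```python
N = 30              # hops
P = 1000 * 8        # packet length, bits
C = 1.544e6         # T1 line rate, bits/s
delay = N * P / C   # total per-hop serialization, seconds
# roughly 0.155 s for these numbers
```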

Wrong. This is an oversimplification, as slow tail links consume the bulk of
the time. It's like going 90% of the way at 100 mph and 10% of the way at
10 mph -- what is the average speed? Right, 52.6 mph. Note that the slow 10%
of the distance cuts the average speed roughly in half.
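The arithmetic behind that average (a distance-weighted harmonic mean):

```python
fast, slow = 100.0, 10.0          # mph
frac_fast, frac_slow = 0.9, 0.1   # fractions of total distance
avg = 1.0 / (frac_fast / fast + frac_slow / slow)
# ~52.6 mph: the slow 10% dominates total travel time
```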

But at 30 hops of thousand-byte packets at T1 speeds, that's, what? 4,000
miles of prop delay. A mere fraction?

You won't find 30 hops at T-1 in real life no matter how hard you
try. It's more like Ethernet-T1-FDDI-T3-FDDI-T3-T3-FDDI-T3-FDDI-T1-Ethernet-
T0-Ethernet. And, BTW, the average packet size on the Internet is about 200 bytes.

Store-and-forward of a 200-byte packet takes about 60 microseconds on a T-3
wire, which is about 5 miles at light speed.
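As a sanity check on the order of magnitude (my assumptions: nominal T-3 line rate of ~44.7 Mb/s and light in fiber at ~2e8 m/s; the 60 us figure above presumably folds in some per-hop processing beyond raw serialization):

```python
rate = 44.736e6                  # nominal T-3 line rate, bits/s
packet = 200 * 8                 # average packet, bits
t = packet / rate                # serialization time: ~36 microseconds
miles = t * 2.0e8 / 1609.344     # how far light in fiber travels meanwhile
# a handful of miles -- tiny against a cross-country path
```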

Moreover, large packets occur in bulk transfers, where sliding windows
are efficient -- to the effect that you see the delay only once, during the
initial TCP handshake.
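A toy model (my own illustrative sketch, not from the thread) of why a windowed bulk transfer pays the per-hop penalty only once: with the window open, packets stream behind the first one instead of each crossing the path alone.

```python
def pipelined(n_packets, hops, p_bits, rate):
    # Wide-open window: only the first packet pays the full per-hop
    # store-and-forward delay; the rest overlap with it.
    return (n_packets + hops - 1) * p_bits / rate

def one_at_a_time(n_packets, hops, p_bits, rate):
    # Hypothetical worst case: each packet traverses all hops alone.
    return n_packets * hops * p_bits / rate
```

For 1000 packets of 1000 bytes over 10 T-1 hops, the pipelined transfer takes about 5.2 s versus roughly 52 s one at a time.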

But of course, getting back to 1996, N*P/C doesn't count checking, routing,
and queueing -- and queueing becomes a major multiplier under load.

Queueing Theory 101 is recommended. If the offered load in a G/D/1 system
is below capacity, the average queue stays small and finite. If the load
exceeds capacity, the average queue grows without bound.
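For the curious, the Pollaczek-Khinchine result for M/D/1 (Poisson arrivals, deterministic service -- a concrete stand-in for the G/D/1 above) makes the dichotomy explicit:

```python
def md1_mean_queue(rho):
    # Mean number of packets waiting in an M/D/1 queue at utilization rho.
    # Finite for rho < 1; diverges as rho -> 1 (and above capacity the
    # queue grows without bound).
    assert 0 <= rho < 1, "above capacity the queue grows without bound"
    return rho * rho / (2.0 * (1.0 - rho))
```

At rho = 0.5 the mean queue is 0.25 packets; at rho = 0.99 it is about 49, and it blows up as rho approaches 1.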

I.e. the delay in a congested network depends on the size of the buffers
along the path, and not on the number of hops, period. The "best" buffer size
is chosen to accommodate transient congestion, i.e. it is determined from the
bandwidth*delay product ("delay" here is RTT). I.e. in a properly tuned network
the congestion delay is about 2 times the "ideal" RTT, plus something to
accommodate topological irregularities. RED (by VJ and co.) lets you cut the
congestion buffers roughly in half, because it actively anticipates congestion
instead of acting only after it has happened, as tail-drop does.
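The bandwidth*delay rule of thumb with some illustrative numbers (a T-3 trunk and a 70 ms RTT -- both my assumptions, not figures from the thread):

```python
bandwidth = 44.736e6    # T-3 line rate, bits/s
rtt = 0.070             # assumed round-trip time, seconds
buffer_bytes = bandwidth * rtt / 8
# ~390 KB of buffering to ride out a transient congestion burst
```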

And, BTW, with IP you can skip the "store" part, too; you only need to look
at the IP header to make a routing decision. The rest you can simply channel through.
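The "skip the store" point in numbers (illustrative: routing on just the 20-byte IPv4 header, T-3 line rate):

```python
rate = 44.736e6
header_bits = 20 * 8        # IPv4 header -- enough to route on
packet_bits = 1000 * 8
cut_through = header_bits / rate      # ~3.6 us before forwarding can start
store_forward = packet_bits / rate    # ~179 us waiting for the whole packet
```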

Oh, I forgot retransmission delays too, at each hop.

What's that? IP does not do any retransmissions between gateways.

And I forgot the increasing
complications of route propagation as hops increase...

He-he. That is far from being as simple as you think. Topology matters a lot
more than diameter for the complexity of routing computations.

If I am, as you say, the first person to be concerned with the growth of
Internet diameter, which I doubt, then I deserve a medal. Or is my
arithmetic wrong? Ease my cluelessness.

You seem to offer opinions with little or no checking of background
facts. I bet you're a victim of the Flame Delay crowd's propaganda. I
certainly heard that line of reasoning from them, and got a good laugh, too.

As for cluelessness -- I already noted that diameter grows as the logarithm
of network size, while bandwidth grows at least linearly with size. That means
the fraction of "store-and-forward" penalty in end-to-end delay diminishes as
the network grows. So not only are you worrying about an insignificant thing,
you're worrying about something that actually improves with scale!
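The scaling argument in miniature (pure illustration under the stated assumptions: diameter growing like log2(n) hops, per-hop serialization shrinking like 1/n as links speed up):

```python
import math

def sf_penalty(n):
    # Hypothetical scaling model: hops ~ log2(n), link speed ~ n,
    # so the total store-and-forward penalty scales ~ log2(n) / n.
    return math.log2(n) / n

# The penalty shrinks as the network grows:
# sf_penalty(16) = 0.25, while sf_penalty(1024) is under 0.01.
```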