923Mbits/s across the ocean

Hi,

#High speed at reasonable costs is the end-goal. However, it is important
#to be able to plan for when one will need such links, to know what one
#will be able to achieve, and for regular users to be ready to use them
#when they are commonly available. This takes some effort up front to
#achieve and demonstrate.

And that's the key point that I think folks have been missing so far about
all this. Internet2 provides excellent connectivity to folks who generally
have a minimum of switched 10Mbps ethernet connectivity, and routinely switched
100Mbps connectivity. However, if you look at the weekly Abilene netflow
summary reports (see http://netflow.internet2.edu/ , or jump directly to a
particular report such as http://netflow.internet2.edu/weekly/20030224/ )
you will see that for bulk TCP flows, the median throughput is still only
2.3Mbps. 95th%-ile is only ~9Mbps. That's really not all that great,
throughput wise, IMHO.

Add one further element to that: user expectations. Users hear, "Wow,
we now have an OC12 to Abilene [or gigabit ethernet, or an OC48, or an
OC192], I'm going to be able to *smoke* my fast ethernet connection ftping
files from <insert far away place>!" ... but then they find out that no, in
fact, if they are seeing 100Mbps for bulk TCP transfers, then they are in
the true throughput elite, the upper 1/10th of 1% of all I2 traffic.

SO! The I2 Land Speed Record is not necessarily about making everyone
able to do gigabit-class traffic across the pond; it is about making
LOTS of faculty able to do 100Mbps at least across the US.

Empirically, it is clear to me that this "trivial" accomplishment, i.e.,
getting 100Mbps across the wide area, is actually quite hard, and it is only
by folks pushing really hard (as Cottrell and his colleagues have) that the
more mundane throughput targets (say, 100Mbps) will routinely be met.

Regards,

Joe St Sauver (joe@oregon.uoregon.edu)
University of Oregon Computing Center

On Sun, Mar 09, 2003 at 02:25:25PM +0100, Iljitsch van Beijnum quacked:

> > you will see that for bulk TCP flows, the median throughput is still only
> > 2.3Mbps. 95th%-ile is only ~9Mbps. That's really not all that great,
> > throughput wise, IMHO.

> Strange. Why is that? RFC 1323 is widely implemented, although not
> widely enabled (and for good reason: the timestamp option kills header
> compression so it's bad for lower-bandwidth connections). My guess is
> that the OS can't afford to throw around MB+ size buffers for every TCP
> session so the default buffers (which limit the windows that can be
> used) are relatively small and application programmers don't override
> the default.
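
  To make the buffer point above concrete, here is a minimal sketch in
  Python of what "overriding the default" looks like from an application.
  The rate, RTT, host and port below are illustrative assumptions, not
  numbers from the thread:

    import socket

    # Bandwidth-delay product: to keep a long fat pipe full, the TCP window
    # (and therefore the socket buffer) must be at least rate * RTT.
    # Illustrative assumption: a 622 Mbit/s (OC12) path at 100 ms RTT.
    rate_bits_per_sec = 622e6
    rtt_sec = 0.100
    bdp_bytes = int(rate_bits_per_sec / 8 * rtt_sec)    # ~7.8 MB

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Ask for BDP-sized buffers *before* connecting, so the receive window
    # scale (RFC 1323) can be negotiated on the SYN.  The kernel may clamp
    # the request to its configured maximum (e.g. net.core.rmem_max on Linux).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)

    print("granted: rcvbuf=%d sndbuf=%d" % (
        sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF),
        sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)))

    # sock.connect(("transfer.example.edu", 5001))    # hypothetical endpoint

  With a default buffer in the 16-64 KB range, the same arithmetic caps a
  single flow at a few Mbit/s over a 100 ms path (65535 bytes / 0.1 s is
  about 5 Mbit/s), which is in the same ballpark as the Abilene medians
  quoted above. Kernel sysctl limits also cap what an application may
  request, which is part of why the defaults matter so much.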

  Which makes it doubly a shame that the adaptive buffer tuning
tricks haven't made it into production systems yet. It was
a beautiful, simple idea that worked very well for adapting to
long fat networks:

  http://www.acm.org/sigcomm/sigcomm98/tp/abs_26.html
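
  For flavor, the idea in that paper is roughly "keep estimating the
  connection's bandwidth-delay product and grow the buffer to match,"
  done inside the kernel with state TCP already has. The sketch below is
  only a crude user-space caricature of that idea; measure_rtt() and the
  byte counters are hypothetical stand-ins, not anything from the paper:

    import socket

    def measure_rtt(sock):
        # Hypothetical stand-in: a real implementation would use TCP's own
        # RTT estimate (e.g. TCP_INFO on Linux) or time a request/response.
        return 0.100    # assume 100 ms

    def autotune_sndbuf(sock, bytes_sent, elapsed_sec, headroom=2.0):
        # Estimate the bandwidth-delay product from recent throughput and
        # RTT, then grow SO_SNDBUF toward it (never shrinking).
        if elapsed_sec <= 0:
            return
        rate = bytes_sent / elapsed_sec            # bytes per second
        target = int(rate * measure_rtt(sock) * headroom)
        current = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
        if target > current:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, target)

  A sender's transfer loop would call autotune_sndbuf() periodically; the
  argument for doing it in the kernel instead is that TCP already tracks
  the RTT and congestion window it needs, and the kernel can also balance
  a limited pool of buffer memory across many connections.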

  -dave