923 Mbps across the Ocean ...

To be fair (as I was first to flame!), it is presented out of context in a
poorly written and somewhat misleading news article.

Steve

Fortunately, these days there are very few production
networks press-releasing the size of their ISPnesses.

  Sean. (mine's bigger than yours, anyway)

Ok, how about a Ferrari full of DATs, on an Autobahn? :)

"Given enough thrust, pigs fly just fine...demonstrated by
   a professional driver on a closed track, please do not try
   this at home kids!"

  Sure, given a link you don't have to share with production
traffic and a lot of charity, it's possible to get TCP to do a lot
of things. This doesn't make them a good idea (outside of those
`special' environments.)

  On the other hand, if you have the need for this kind of
single stream performance, and the pipe to yourself, why not
devise your own protocol with less overhead?
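
  (Purely for illustration, a toy sketch of the roll-your-own idea in
Python -- assuming the pipe really is all yours and a hypothetical
receiver that reassembles and handles loss out of band; the host name,
port, and chunk size below are made up:)

```python
# Toy "blast" sender: skip TCP entirely and shovel sequence-numbered UDP
# datagrams at the far end, leaving loss recovery and reassembly to a
# hypothetical receiver.  No congestion control at all -- only sane on a
# pipe you do not share with anyone.
import socket
import struct

DEST = ("receiver.example.net", 9000)   # hypothetical receiver
CHUNK = 8192                            # payload bytes per datagram

def blast(path):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    seq = 0
    with open(path, "rb") as f:
        while True:
            payload = f.read(CHUNK)
            if not payload:
                break
            # 8-byte sequence number so the receiver can put things back
            # in order and ask for whatever it missed.
            sock.sendto(struct.pack("!Q", seq) + payload, DEST)
            seq += 1

if __name__ == "__main__":
    blast("bigfile.bin")                # placeholder file name
```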

  --msa

Because then you'll violate the rules of the contest. :)

http://lsr.internet2.edu

Andrew

Hmmmm. Looks like someone could use _really_ big buffers and
insane SACK. Knowing the pipe isn't being shared with other
traffic, one can "tune" backoff and slow-start without worrying
about being cooperative...
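
(For the buffer half of that, here is roughly what the application side
looks like -- just a sketch: the ~20 MB figure is bandwidth times delay
for ~1 Gbit/s at 150 ms, the sink host/port are made up, and the
kernel's per-socket maximums have to be raised first or the request is
silently clamped:)

```python
# Sketch: request ~20 MB socket buffers *before* connecting, so the
# window scale option negotiated at SYN time is large enough to cover
# the bandwidth-delay product of a ~1 Gbit/s, 150 ms path.  On Linux the
# net.core.rmem_max / net.core.wmem_max (and tcp_rmem / tcp_wmem)
# sysctls must allow buffers this big, or the kernel quietly clamps them.
import socket

BDP = 20 * 1024 * 1024   # ~ bandwidth * RTT

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP)
s.connect(("sink.example.net", 5001))    # hypothetical iperf-style sink

print("granted rcvbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```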

Yeah, it's still TCP. A sprint car with 250 deg @ 0.050" lift
camshaft, 5.13:1 rear gears, and different left/right tire sizes
is still a car. Both are about as useful in the real world.

IOW, it's fun, but the focus is too narrow and certain parameters
are totally incompatible with production requirements.

I'd like to see a contest that attempts to maximize throughput
_and_ simultaneous session count using a random mix of simulated
client pipe sizes.

Eddy

There is nothing wrong with trying to set speed records, trying to push
TCP performance to its limits and maybe beyond, or holding contests to do
any of the above.

I think the objections here are three-fold:

A) The amount of arrogance it takes to declare a land speed "record" when
   there are people out there doing way more than this on a regular basis.

B) The extreme wastefulness of spending a million dollars to do it.

C) The incredible (well, OK, maybe not that incredible; "expected" is more
   like it) lack of accuracy in the reporting of this story.

Date: Sat, 8 Mar 2003 01:25:17 -0500
From: Richard A Steenbergen

> I think the objections here are three-fold:

"Researchers take advantage of ideal conditions and huge funding
to do a fraction of what network engineers do every day" just
doesn't help ratings. This is the same mass media that predicted
the end of the world when our calendars turned 2000.

Alas, I suppose a thread about "American mass media stinks" would
be just about as revolutionary as that on which they "reported".
The difference is that NANOG posts are cheaper and at least
somewhat more accurate.

Eddy

Dave Israel wrote:

> There's no real "science" here. This is a geek publicity stunt.

s/geek/funding/

Peter

Single stream at 900 Mb/s over that distance? Where?

Jason

Production commercial networks need not apply, lest someone realize that
they blow away these speed records on a regular basis.

Please document it so as to shame these I2 networks. Somehow, I doubt you will be able to.

-Hank

> Production commercial networks need not apply, lest someone realize that
> they blow away these speed records on a regular basis.

What kind of production environment needs a single TCP stream of data
at 1 gigabit/s over a 150ms latency link?

Just the fact that you need a ~20 megabyte TCP window size to achieve this
(feel free to correct me if I'm wrong here) seems kind of unusual to me.
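
(For what it's worth, the arithmetic backs that number up -- a quick
sketch, assuming exactly 1 gigabit/s and 150 ms RTT:)

```python
# Bandwidth-delay product: the TCP window needed to keep a 1 Gbit/s,
# 150 ms RTT path full with a single stream.
bandwidth_bps = 1_000_000_000   # 1 gigabit/s
rtt_s = 0.150                   # 150 ms round trip

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"window needed: {bdp_bytes / 2**20:.1f} MB")   # ~17.9 MB, i.e. ~20 MB
```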

See the PSC "TCP Tune" page for many details of tuning your TCP stack. Interesting that most of the links lead to places like PSC, VT, ANL, ORNL, U of Hannover, and NLANR. If commercial ISPs have been doing this stuff "on a regular basis", please let us know where it is documented, since it is rather well hidden.

-Hank

10 years ago there was no www, no HTML. 10 years from now will find us using something we have not yet thought of, at speeds that today look as ridiculous as 100 Mb/sec looked to the guys on the T1 NSFnet a bunch of years ago.

The problem is that TCP comes up against a wall. All too often I have seen ISPs in Europe contend with 150 ms RTT and some user trying to push 30 Mb/sec over a single TCP stream without being able to come close (a quick back-of-the-envelope after the links below shows why). In the US, where your general RTT is much lower, you haven't hit that wall just yet. But it will come. Then all the research that Internet2 and Geant have been doing at sites such as:
http://p2p.internet2.edu/
http://www.web100.org/
http://www.researchchannel.org/tech/ihdtv.asp
http://e2epi.internet2.edu/
will benefit all the commercial ISPs.
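
(The promised back-of-the-envelope for that wall -- a sketch, assuming a
stock 64 KB receive window and no window scaling:)

```python
# Why a single stream stalls far below 30 Mb/sec at 150 ms RTT: without
# window scaling the receive window tops out at 64 KB, and a TCP stream
# can never move faster than window / RTT.
window_bytes = 64 * 1024   # classic 16-bit window limit
rtt_s = 0.150              # 150 ms round trip

ceiling_bps = window_bytes * 8 / rtt_s
print(f"ceiling: {ceiling_bps / 1e6:.1f} Mbit/s")   # ~3.5 Mbit/s
```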

-Hank

Thus spake "Hank Nussbacher" <hank@att.net.il>

> > Production commercial networks need not apply, lest someone realize
> > that they blow away these speed records on a regular basis.
>
> Please document it so as to shame these I2 networks. Somehow, I
> doubt you will be able to.

Internet/2 is not interesting because it has big pipes; the public Internet
has much bigger pipes and more of them. I/2 is interesting only because it
has fewer users -- by two or three orders of magnitude -- and most/all of
these users are connected by FastE or better.

However, there is no need to waste funding buying uber-fast routers or GigE
links around the globe just to learn how to tune stacks or apps. If
high-speed TCP research is what you're doing, rig up a latency generator in
your laboratory and do your tests that way, just like the TCPSAT folks.
Spending millions of (probably taxpayer) dollars to win a meaningless record
is unethical, IMHO.

S

Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking

Talk to folks that deal with radio telescopes.

Alex

I think I understand the point you are trying to make
here, but I would just like to set the record straight.

10 years ago, there was www/html already. I was a
visiting engineer at CERN's networking division
between the end of '92 and 6/93. WWW (port 80) traffic
had already been flowing on the network there. One of
my projects was to collect traffic volume by port and
do some analysis of the network there. The volume of
port 80 traffic was not high at the time but was
definitely there.

cheers!

                           --Jessica