Fast TCP?

Does anybody know any more about Fast TCP:

http://story.news.yahoo.com/news?tmpl=story&cid=581&ncid=581&e=6&u=/nm/20030604/tc_nm/technology_internet_dc_3

Is it real?

Is it open source?

Are there any implementations available?

Mike.


Here's the white paper detailing it:

http://netlab.caltech.edu/pub/papers/fast-030401.pdf

Here is their home page:

http://netlab.caltech.edu/FAST

It doesn't look like they have production code available at this point,
but it looks like it could be interesting.

allan

IMHO, the way the article reads, it sounds like an implementation of
dynamic window sizing.

Regards,
Christopher J. Wolff, VP CIO
Broadband Laboratories, Inc.
http://www.bblabs.com

Glad this came up, as I have been reading this paper.

Does Figure 1 in

http://netlab.caltech.edu/pub/papers/fast-030401.pdf

seem reasonable? Will 100 RED TCP flows really only fill 90% of a 155 Mbps
pipe, but 87% of a 2.4 Gbps connection and 75% of a 4.8 Gbps connection?
This seems strangely non-linear to me.

A more fundamental question is: is this really useful except in the case of
very high bandwidth single flows (such as e-VLBI, particle physics, or
uncompressed HDTV)? After all, isn't the current standard practice not to
come close to fully utilizing backbone bandwidth?

                                  Regards
                                  Marshall Eubanks


I think the idea is that (similar to the 1 Gb/s single-stream test a few
months ago) the concerns of academics are not exactly in line with those
of network operators. The problem with a non-stabilized TCP Vegas on a very
fast pipe [with a small number of streams] is that as delays get large
(relative to the capacity of the connection), the window you have to grow
into to fully utilize the bandwidth becomes very large, perhaps impossibly
so. With TCP Reno (which it seems they have the biggest fault with) a single
packet drop causes far more severe problems. Since RED causes packet drops,
high speed streams that get RED'd are in an immense world of pain. Further,
since a typical delayed-ACK window is only 100 ms, that is a lot of data
that either isn't transmitted over the network or must be retransmitted and
resequenced.
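
To put a rough number on how severe that single drop is (back-of-the-envelope
arithmetic, assuming a 10 Gbps link, 100 ms RTT, and 1500-byte packets; my
figures, not the paper's):

  # How long textbook Reno takes to recover from one loss on a long,
  # fat pipe: halve cwnd on loss, then grow ~1 segment per RTT.
  link_bps = 10e9     # assumed 10 Gbps link
  rtt_s = 0.1         # assumed 100 ms round-trip time
  mss_bytes = 1500    # typical Ethernet-sized segment

  # Window (in segments) needed to keep the pipe full:
  bdp_segments = link_bps * rtt_s / (mss_bytes * 8)

  # Climbing back from cwnd/2 takes roughly bdp_segments / 2 RTTs.
  recovery_s = (bdp_segments / 2) * rtt_s
  print("window needed: %.0f segments" % bdp_segments)
  print("recovery after one loss: ~%.0f minutes" % (recovery_s / 60))

That works out to roughly 83,000 segments of window, and over an hour to
climb back to full rate after one drop.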

If you have many streams (where each one represents a small portion of your
network link, whether backbone or CPE), you can easily fill your pipe; this
is common experience. If you aren't using RED [or similar] to manage
congestion, you are fine with a smaller number of streams. When you have a
single stream (or a small number of streams) you need larger windows, more
tolerance for latency, and a large willingness to buffer data rather than
drop it. I think this is all well understood at a common-sense level.

I think the academics (the practice, not the people) are the ones who will
figure out some idealized set of variables for a slightly modified version
of the equation we all use for bits-in-flight calculations. I think they
mention in the paper that they will start by stabilizing TCP Vegas for a
high latency, high speed link. I could be wrong (about my understanding, or
about what is considered common sense).

I am not sure why sending a single large/high speed stream today (>1Gb/s) is
such an improvement over sending multiple today-streams of data, but I guess
that is the difference between a get-it-done-right and a get-it-done-now
mentality.

Deepak Jain
AiNET

The bot-owners would tend to disagree. This will improve their kill ratio without having to significantly increase the size of their bot-herds. Now we can have someone be the recipient of some FAST-love.

-Hank

Hi, NANOGers.

Did someone say...bot? /me twitches :)

] I am not sure why sending a single large/high speed stream today (>1Gb/s) is
] such an improvement over sending multiple today-streams of data, but I guess
] that is the difference between a get-it-done-right and a get-it-done-now
] mentality.

For those who herd bots, this in theory provides the capability to
get-it-done-right *AND* get-it-done-now. :-/

Thanks,
Rob.

I am not sure why sending a single large/high speed stream today (>1Gb/s) is
such an improvement over sending multiple today-streams of data, but I guess
that is the difference between a get-it-done-right and a get-it-done-now
mentality.

Because we R&E network operators have customers, especially in the
astronomy field, who want to push 1 Gbit streams in real time from
various radio telescopes all over Europe. Moreover, they want them
to end up in one place, i.e. converge ;)

So, we need to come up with technologies that can sustain multi-Gbit
(preferably) TCP streams over 50-100 ms RTT links. And we've got the
OC192 backbones to do it, if TCP were up to it..

Date: Thu, 5 Jun 2003 08:02:33 +0200
From: Mans Nilsson

So, we need to come up with technologies that can sustain
multi-Gbit (preferably) TCP streams over 50-100 ms RTT
links. And we've got the OC192 backbones to do it, if TCP
were up to it..

10 Gbps * 100 ms * 2 = 2 Gbit = 1/4 Gbyte

I guess one can run huge windows, insane SACK, eschew anything
resembling slow-start, modify the recovery algorithm, and still
call it TCP as long as it fits in an IP protocol #6 packet.
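
A minimal sketch of what "huge windows" means in practice (assumed figures
matching the arithmetic above; most stacks clamp these requests to a
sysctl-controlled maximum, and window scaling must be enabled for anything
past 64 KB):

  import socket

  # Size socket buffers to the bandwidth-delay product so one TCP
  # stream can keep the path full.
  link_bps = 10e9                        # assumed 10 Gbps path
  rtt_s = 0.1                            # assumed 100 ms RTT
  bdp_bytes = int(link_bps * rtt_s / 8)  # ~125 MB in flight

  sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  # The kernel treats these as requests, clamped to a configured
  # maximum (e.g. net.core.rmem_max / wmem_max on Linux).
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
  print("asked for", bdp_bytes, "bytes, got",
        sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))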

Of course, in the absence of bw*delay-based autotuning, I suppose
servers should have plenty of mbuf memory. ;) Oh, wait, a few
thousand slow-moving TCP streams could nuke a server without
harming the clients, so slow start still is an issue.

Also witness the BGP data/keepalive mechanism. Messages are sent
at least every <x so often>, and frequently contain data (or at
least a keepalive instead of data). If ACKs were sent in the
same way, and packet fragments could be passed to the application
layer before all segments were received in order to alleviate
mbuf issues...

UDP, anyone?

Eddy

In some experience I've had, RED did not cause drops. In fact, I have
some data showing how drops increased without RED.

  <http://condor.depaul.edu/~jkristof/red/>

I'd like to see (or actually perform myself, if I could :) some
actual tests. If anyone has any updated data doing AQM on high speed
links or large streams, please post pointers.

John

Hello;

e-VLBI streams can easily sustain packet losses. IMHO these streams should
be sent over UDP with application-layer congestion control, minimal FEC if
necessary, and "worse than best effort" QoS (because VLBI has little money
but an almost infinite ability to generate bits). These TCP-based tools may
be useful for other applications, but I do not think they are the right
path for e-VLBI.
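
In that spirit, a minimal sketch of the kind of application-paced UDP
sender I mean (hypothetical endpoint, a fixed target rate standing in for
real application-layer congestion control, and no FEC):

  import socket, time

  # Send at a fixed target rate and let the application tolerate (or
  # FEC over) whatever is lost; a stand-in, not a full design.
  DEST = ("correlator.example.org", 9000)  # hypothetical receiver
  RATE_BPS = 1e9                           # target 1 Gbit/s
  PKT_BYTES = 1400                         # stay under a typical MTU

  interval = PKT_BYTES * 8 / RATE_BPS      # seconds between packets
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  payload = bytes(PKT_BYTES - 8)

  next_send = time.monotonic()
  for seq in range(10_000):
      # A sequence number lets the receiver detect, and shrug off, loss.
      sock.sendto(seq.to_bytes(8, "big") + payload, DEST)
      next_send += interval
      delay = next_send - time.monotonic()
      if delay > 0:
          time.sleep(delay)

(At these rates the per-packet interval is microseconds, so a real sender
would pace in bursts; the point is only that the rate decision lives in the
application, not in TCP.)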

Regards
Marshall


T.M. Eubanks
e-mail : tme@multicasttech.com
http://www.multicasttech.com

Test your network for multicast :
http://www.multicasttech.com/mt/
Our New Video Service is in Beta testing
http://www.americafree.tv

[snip]

Also witness the BGP data/keepalive mechanism. Messages are sent
at least every <x so often>, and frequently contain data (or at
least a keepalive instead of data). If ACKs were sent in the
same way, and packet fragments could be passed to the application
layer before all segments were received in order to alleviate
mbuf issues...

UDP, anyone?

The folks over at digitalfountain have a pretty spiffy product that does a
kind of UDP encapsulation between their end points (TCP -> df box -> UDP ->
df box -> TCP), which works quite well for fast transfers over high latency
links (satellite, etc.). It also lets you get the most out of your available
pipes (i.e. 95% utilization of a DS-3 vs. a much lower figure using TCP to
transfer the same amount of data).
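
For intuition on why plain TCP leaves so much of a long-latency pipe idle,
the Mathis et al. steady-state approximation for Reno-style TCP,
throughput ~= (MSS / RTT) * (1.22 / sqrt(p)), gives a quick estimate. The
RTT and loss rate below are assumed satellite-grade figures, not
measurements:

  import math

  mss_bytes = 1460
  rtt_s = 0.6        # assumed geostationary-satellite RTT
  loss_rate = 1e-3   # assumed packet loss probability

  tput_bps = (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate))
  ds3_bps = 45e6
  print("single TCP stream: ~%.1f Mbps (%.0f%% of a DS-3)"
        % (tput_bps / 1e6, 100 * tput_bps / ds3_bps))

That comes out under 1 Mbps, i.e. a couple percent of the DS-3, which is
the gap products like this exploit.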

(I'm not affiliated with digitalfountain in any way other than being a
customer and sharing an office with a beta tester. :))

The bot-owners would tend to disagree. This will improve their kill ratio
without having to significantly increase the size of their bot-herds. Now
we can have someone be the recipient of some FAST-love.

Now, now... I am not all that pessimistic. We only need our FAST-IDS and
our FAST-ACLs and our FAST-Firewalls to handle the possibility that a
single stream could fill a large majority of the network connections today.

Deepak Jain
AiNET