Important New Requirement for IPv4 Requests

Date: Fri, 24 Apr 2009 19:05:26 +1200
From: Perry Lorier <perry@coders.net>

>
>
> Large data sets? So you are saying that 512-byte packets with no
> windowing work better? Bill, have you measured this?
>
> Time to download a 100 MB file over HTTP on a 100 Mb/s interface: 20
> seconds.
> Time to download a 100 MB file over FTP on a 100 Mb/s interface: ~7 minutes.
>
> And yes, that was FreeBSD with the old version of the OpenSSL library
> that shipped with 6.3.
>

As someone who copies large network trace files around a bit, 100 MB at
100 Mb/s, over what I presume is a local (low-latency) link, is barely a
fair test. Many popular web servers choke on serving files larger than
2 GB or 4 GB (sigh). I'm in New Zealand: it's usually at least 150 ms to
anywhere, often 300 ms, so I feel the pain of small window sizes in
popular encryption programs very strongly. Transferring data over
high-speed research networks means receive windows of at least 2 MB,
usually more. When popular programs impose their own window of 64 kB,
things get very slow.
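
(To put rough numbers on that, as a back-of-the-envelope illustration
rather than anything from the original message: TCP throughput is
bounded by window/RTT, so a 64 kB window at 150 ms allows at most about
65536 B / 0.15 s, roughly 3.5 Mb/s, no matter how fast the path is.
Filling even 1 Gb/s at that latency needs a window of about
0.15 s x 125 MB/s, close to 19 MB.)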

Very few people (including some on this list) have much idea of the
difficulty in moving large volumes of data between continents,
especially between the Pacific (China, NZ, Australia, Japan, ...) and
either Europe or North America.

Getting TCP throughput over about 1 Gbps is very difficult. Getting over
5 Gbps is nearly impossible. I can get 5 Gbps pretty reliably with tuned
end systems over a 100 ms RTT, but that drops to about 2 Gbps at 200 ms.
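
As a sketch of the kind of end-system tuning meant here: on Linux this
mostly means raising the kernel's TCP buffer limits so the window can
cover the bandwidth-delay product. The sysctl values below are
illustrative, not from this post:

net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864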

A good web site to read about getting fast bulk data transfers is:
http://fasterdata.es.net

It is aimed at DOE and DOE-related researchers, but the information is
valid for anyone needing to move data on a terabyte or greater scale
over long distances. We move a LOT of data between our facilities at
Fermilab in Chicago, Brookhaven in New York, and CERN in Europe. A
terabyte is just the opener for that data.

Also, if you see anything that needs improvement or correction, please
let me know.

> A good web site to read about getting fast bulk data transfers is:
> http://fasterdata.es.net

indeed

mtu clue is also useful. here on tokyo b-flets, and i would guess in
many other pppoe environments, you need to tune or lose big-time.

randy

Randy Bush <randy@psg.com> writes:

> mtu clue is also useful. here on tokyo b-flets, and i would guess in
> many other pppoe environments, you need to tune or lose big-time.

But not difficult to beneficially MITM:

in pf:
scrub in on gre0 max-mss 1400
scrub out on gre0 max-mss 1400

in cisco-land (interface configuration):
ip tcp adjust-mss 1400

i'm sure the linux folks can offer up something similar...
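
As a minimal sketch of the Linux counterpart, the iptables TCPMSS
target does the same clamping; gre0 and 1400 are carried over from the
examples above, not prescribed:

iptables -t mangle -A FORWARD -o gre0 -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --set-mss 1400

(--clamp-mss-to-pmtu may be used in place of --set-mss 1400 to derive
the value from the path MTU.)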

-r

The default advertised MSS (advmss) on most Linux routes is 0, which
causes the kernel to calculate it as the interface MTU minus 40 bytes.
You can either change the MTU on the interface or, more specifically,
use 'ip route <ipblock> dev <interface> advmss <new mss>' to override
it on a per-route basis.
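
For instance (the prefix and values here are placeholders, not from the
note above):

ip route replace 192.0.2.0/24 dev gre0 advmss 1400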

~J