I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
I'm thinking of setting up servers with optimized TCP settings to push big files around between data centers, but I'm curious how others deal with LFNs and large transfers.
In our experience, you can't get to line speed over 20-30ms of latency using TCP, regardless of how much you tweak it. We transfer files across the US at line speed over 60-70ms paths using UDP-based file transfer programs. There are a number of open source projects out there designed for this purpose.
-Robert
Tellurian Networks - Global Hosting Solutions Since 1995 http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
"Well done is better than well said." - Benjamin Franklin
I'm looking for input on the best practices for sending large files
There are both commercial products (fastcopy) and various "free"(*) products (bbcp, bbftp, gridftp) that will send large files. While they can take advantage of larger windows, they also have the capability of using multiple streams (working around the inability to tune the TCP stack). There are, of course, competitors to these products which you should look into. As always, YMMV.
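The multi-stream point can be made concrete with a quick back-of-the-envelope calculation. This is only a sketch: the 64KB untuned default window is an assumption, and the link/RTT figures are taken from the original question.

```python
# Per-stream throughput is bounded by the TCP window: throughput <= window / RTT.
# With an untuned window, one stream cannot fill a GbE LFN; many streams can.
link_bps = 1_000_000_000        # 1 Gb/s private circuit (from the thread)
rtt_s = 0.020                   # ~20 ms RTT (from the thread)
window_bytes = 64 * 1024        # assumed untuned default window, 64 KB

per_stream_bps = window_bytes * 8 / rtt_s             # ceiling for one stream
streams_needed = -(-link_bps // int(per_stream_bps))  # ceiling division

print(f"per-stream ceiling: {per_stream_bps / 1e6:.1f} Mb/s")   # ~26.2 Mb/s
print(f"streams to fill the pipe: {streams_needed}")            # ~39
```

This is the same arithmetic those multi-stream tools rely on: dozens of untuned streams in aggregate behave like one stream with a properly sized window.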
I'm looking for input on the best practices for sending large files over
a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
Providing you have RFC 1323-type extensions enabled on a semi-decent OS, a 4MB TCP window should be more than sufficient to fill a GbE pipe over 30ms.
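That 4MB figure lines up with the bandwidth-delay product of the path; a quick check (pure arithmetic, no assumptions beyond the stated link and RTT):

```python
# Bandwidth-delay product: the amount of data that must be "in flight" to
# keep a 1 Gb/s, 30 ms path full. The TCP window must be at least this large.
link_bps = 1_000_000_000   # GbE
rtt_s = 0.030              # 30 ms, the upper figure quoted above

bdp_bytes = link_bps / 8 * rtt_s
print(f"BDP: {bdp_bytes / 2**20:.2f} MiB")   # ~3.58 MiB, so 4 MB suffices
```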
With a modified TCP stack that uses TCP window sizes up to 32MB, I've worked with numerous customers to achieve wire-rate GbE async replication for storage arrays with FCIP. The modifications to TCP were mostly to adjust how it reacts to packet loss, e.g. don't "halve the window". The intent behind those modifications is that such a stack isn't used on the "greater Internet", but only on private connections within an enterprise customer environment. That stack is used in production today on many Cisco MDS 9xxx FC switch environments.
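Why "don't halve the window" matters on an LFN can be sketched numerically: with standard Reno-style recovery, the window is halved on a loss and then regrown by roughly one segment per RTT, so recovering a full window takes on the order of W/2 RTTs. The MSS here is an illustrative assumption; the window and RTT are the figures discussed above.

```python
# Approximate time for Reno-style congestion avoidance to regrow the
# window after a single loss event: halve, then add ~1 MSS per RTT.
mss_bytes = 1460                 # assumed Ethernet-path MSS
window_bytes = 4 * 1024 * 1024   # the 4 MB window discussed above
rtt_s = 0.030                    # 30 ms path

window_segs = window_bytes // mss_bytes   # window measured in segments
rtts_to_recover = window_segs // 2        # ~W/2 RTTs to regrow from W/2 to W
recovery_s = rtts_to_recover * rtt_s

print(f"recovery after one loss: ~{recovery_s:.0f} s")   # tens of seconds
```

A single stray loss costing tens of seconds of degraded throughput is why a private-circuit stack can justify a gentler backoff than Internet-safe TCP.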
I'd like to avoid modifying TCP windows and options on end hosts where
possible (I have a lot of them). I've seen products that work as
"transfer stations" using "reliable UDP" to get around the windowing
problem.
Given you don't want to modify all your hosts, you could 'proxy' said TCP connections via 'socat' or 'netcat++'.
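A minimal sketch of such a proxy with socat might look like the following. The port and hostname are placeholders; the idea is that only the two 'transfer station' boxes, not the end hosts, need the tuned TCP stacks.

```shell
# On the tuned transfer station near the senders: accept local connections
# and relay each one across the LFN to the far-side station.
# Port 9000 and transfer-station-b.example are illustrative placeholders.
socat TCP-LISTEN:9000,fork,reuseaddr TCP:transfer-station-b.example:9000
```

End hosts then connect to their nearby station over the LAN with default TCP settings, and the station-to-station hop carries the large windows.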
Since I got on shift two hours ago, I've done nothing but stare at traceroutes into and out of Comcast space trying to reassure dozens of customers that we're not down; Comcast is having problems. Our upstream claims they've been dealing with Comcast customers all (US) day. I'm pretty sure there's some serious weirdness going on in there.
(Oh, and don't reply to an existing message to start a new thread)
I have Smokeping running from behind my Comcast connection (Eastern MA / New England) and had latency alarms from 6:28pm to 7:18pm EST. Not sure if attachments make it through, but I've attached a doc of the last 3-hour graph showing 0.13% average loss and 4.57% max loss. Seems clean otherwise. On Tuesday I had alarms going all day long. I run monitors to my Corp Network and Legacy Genuity DNS, and the results are the same for both.
Tom,
Where would that be located? From my house, my UUNet/MCI/Verizon Business link doesn't have it. My Speakeasy link doesn't have it either. All of Comcast was out in my neighborhood (Alexandria, VA) yesterday at 7pm when I got home, was still out at 11pm when I went to bed, and was up and running fine this AM.
They've been fine in my area (Atlanta), though there was a fair bit of downtime last week. I did, however, notice today that my port 25 blocks are gone, which wasn't the case last week.