File transfer speed between Hong Kong and Johannesburg, South Africa

Hello folks,

Does anyone know what the average speed is for Windows file transfers
(SMB2) between Hong Kong and Johannesburg?
Is there any guide on how to calculate/estimate this?

Thanks.

Regards,

-Luan

Hey Luan,

Here is a good guide that will help you optimise your throughput. As for knowing the average, it depends on pipe size, network topology and end-host configuration; without those it's hard to even conjure a guess.

http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/
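
To get a feel for the numbers involved, here's a back-of-envelope sketch in Python. The 64 KB window (the old Windows default, with no auto-tuning) and the ~250 ms RTT are illustrative guesses, not measurements for this route:

    # Window-limited TCP throughput: only one window of data can be in
    # flight per round trip, no matter how fat the pipe is.
    window_bytes = 64 * 1024   # assumed receive window (old Windows default)
    rtt_seconds = 0.250        # assumed HK <-> JNB round-trip time

    throughput = window_bytes / rtt_seconds
    print("ceiling: ~%.0f KB/s (~%.1f Mbit/s)"
          % (throughput / 1024, throughput * 8 / 1e6))
    # -> ~256 KB/s (~2.1 Mbit/s) for a single TCP stream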

-James

A pointer here: http://en.wikipedia.org/wiki/Bandwidth-delay_product
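
The short version: the bandwidth-delay product is how much data fits "in the pipe", i.e. the TCP window you need to keep the link busy. A minimal sketch, with an assumed 10 Mbit/s path and ~250 ms RTT (both made-up figures):

    # Bandwidth-delay product: window needed to keep a long, fat pipe full.
    link_bits_per_s = 10 * 1000 * 1000   # assumed 10 Mbit/s path
    rtt_s = 0.250                        # assumed round-trip time

    bdp_bytes = link_bits_per_s / 8.0 * rtt_s
    print("window needed: ~%.0f KB" % (bdp_bytes / 1024))
    # -> ~305 KB, far more than the 64 KB default of older Windows stacks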

Cheers
Chris

Honestly, this depends on what OS you are using. On anything prior to Win7 you are likely to suffer from the TCP stack. Add in anything weird like ICMP filtering, load balancers or something else that eats packets, and you are going to see varying results.
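
If the hosts do turn out to be pre-Win7, or someone has switched auto-tuning off, these are the usual things to look at on Vista/2008 and later (the settings shown are common recommendations; verify against your own build before changing anything):

    netsh interface tcp show global
    netsh interface tcp set global autotuninglevel=normal
    netsh interface tcp set global congestionprovider=ctcp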

- Jared

Worst case would be that XP is involved; then you're going to be limited by the xmodem-like behaviour of SMB 1.0, which means you'll get 60 kilobytes of data per RTT.
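
To put a number on that, assuming ~250 ms of RTT for this route (a guess, not a measurement):

    # SMB 1.0 "xmodem-like" ceiling: one ~60 KB block in flight per round trip.
    block_kb = 60
    rtt_s = 0.250   # assumed round-trip time
    print("SMB1 ceiling: ~%.0f KB/s" % (block_kb / rtt_s))
    # -> ~240 KB/s, no matter how much bandwidth you buy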

http://blogs.technet.com/b/josebda/archive/2012/06/06/windows-server-2012-which-version-of-the-smb-protocol-smb-1-0-smb-2-0-smb-2-1-or-smb-3-0-you-are-using-on-your-file-server.aspx says SMB2 is Vista and later, so you will probably be able to get higher speeds than that. If you look at item 5, "request compounding" is probably what solved the problem I mentioned earlier.

Unfortunately I don't have any further details beyond this.

He specified SMBv2, so I think you're on track with him being on a Win7 / WinSrv2008 box.

There are a number of variables at play here though, one of which is who the providers in between the two locations are, and the quality/number of peering points you'd have to cross. If the endpoints were already set up, I'd schedule a number of decently sized test transfers (big enough that each takes at least 5-10 minutes, so TCP scaling has time to do its thing) hourly or once every couple of hours over the course of a week, to get a baseline and an understanding of how time-of-day traffic affects the transfer. If it's live traffic, on the other hand, where a user is pulling up some massive file from the other office, you're likely better off looking into a WAN accelerator appliance.
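
Something along these lines would do for the baseline; a rough sketch, where the UNC path and the test file are hypothetical placeholders:

    # Hourly baseline: time a fixed test-file copy over SMB and log the rate.
    import os, shutil, time

    SRC = r"\\remote-office\share\testfile.bin"   # hypothetical test file
    DST = r"C:\temp\testfile.bin"

    while True:
        start = time.time()
        shutil.copyfile(SRC, DST)
        elapsed = time.time() - start
        kb = os.path.getsize(DST) / 1024.0
        with open("baseline.log", "a") as log:
            log.write("%s  %.0f KB in %.0f s = %.0f KB/s\n"
                      % (time.ctime(), kb, elapsed, kb / elapsed))
        time.sleep(3600)   # repeat hourly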

- Matt

Probably quite nasty: anything over a few milliseconds of delay affects SMB
really badly. Around 90 ms it's just about usable, and above 120 ms, forget it.

Have a look at some of the WAN accelerator products, especially Aryaka, who'll
be able to set you up in minutes with no capital outlay.

http://www.aryaka.com/products/network-as-a-service/global-network/

It all depends on the air-speed velocity of an unladen swallow, and varies depending on whether it is African or European.

In all seriousness, you need to know the speed and latency of the link before that question can be answered.

The maximum you can expect is:

Rate < (MSS / RTT) * (1 / sqrt(p)), where p is the probability of packet loss.

Credit: Mathis, Semke, Mahdavi & Ott, "The macroscopic behavior of the TCP
congestion avoidance algorithm", Computer Communication Review, 27(3), July 1997.
( http://www.infoblox.com/community/blog/tcp-performance-and-mathis-equation )
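
Plugged with some illustrative numbers (a typical 1460-byte MSS, ~250 ms RTT and 0.01% loss, all assumed rather than measured):

    # Mathis et al. ceiling: rate < (MSS / RTT) * (1 / sqrt(p))
    from math import sqrt

    mss_bytes = 1460   # assumed (typical Ethernet MSS)
    rtt_s = 0.250      # assumed round-trip time
    p = 0.0001         # assumed 0.01% packet loss

    rate = (mss_bytes / rtt_s) / sqrt(p)
    print("ceiling: ~%.0f KB/s" % (rate / 1024))
    # -> ~570 KB/s; raise the loss to 0.1% and it drops to ~180 KB/s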

Joe


Thanks guys.

We do have Riverbed Steelhead appliances at both ends.
According to the calculation, the maximum attainable throughput is ~330 KB/sec.
With a Riverbed "cold" transfer, we should get ~600 KB/sec. But I can only
get ~250 KB/sec with the Steelhead doing its stuff, on a 500 MB file, so there's
plenty of time for whatever to kick in. Iperf and netperf show great results,
though.
I guess I will be sampling results hourly for comparison.

Regards,

-Luan

Look at your MTU on the links.

Is there a tool to do that end to end? A path MTU discovery tool?
We use GRE/IPsec with a WS-IPSEC-3, setting the tunnel to MTU 1400 with MSS
= 1360 at both ends.
The Steelhead is set to 1400 MTU as well, since I was told the Steelhead will
set the DF bit.
The Steelhead log doesn't show timeouts, unable-to-connect/retries or anything
to suggest dropped packets, though.

Thanks.

Regards,

-Luan

I can't think of a worse protocol you could use in this situation. Well,
maybe if it were layered on top of IPX...

You may want to consider alternative proposals for handling file transfer
between these two locations, e.g. Google Docs, Office 365, etc.
Alternatively, something simple like mounts which are r/w on one side
but r/o on the other, with tools like rsync mirroring the r/w mount on the
local side to the r/o mount on the other. Live SMB file sharing is a disaster
over large-RTT links.

Nick

Is there a tool to do that end to end? A path MTU discovery tool?

yes: scamper
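
If you just want a quick end-to-end check from a Windows box, you can also sweep ping payload sizes with the don't-fragment bit set; 1372 here is your 1400 tunnel MTU less 28 bytes of IP + ICMP header:

    ping -f -l 1372 <far-end-host>

If that size gets through but 1373 comes back with "Packet needs to be fragmented but DF set", the path MTU is 1400.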

We use GRE/IPsec with a WS-IPSEC-3, setting the tunnel to MTU 1400 with MSS
= 1360 at both ends.

If you're using Cisco kit for the IPsec tunnel, I'd recommend the following:

crypto ipsec fragmentation before-encryption
crypto ipsec df-bit clear

If you handle the fragmentation properly, 1360 should be OK for the MSS (your 1400 tunnel MTU less 20 bytes of IP header and 20 of TCP header).

Nick

Also check the Steelhead isn't getting swamped by too many connections. The
units are rated for a fixed maximum number of connections per device. If
you need more connections, you need a bigger/more costly device.

An old-timer's uncalibrated guess: if you are going to do large transfers over a long multihop network using a protocol designed (for want of a better term) for near-fault-free local-area networks, you are not going to like it.

What criteria drive the selection of such an unlikely protocol?

Just a note on this thread: we got everything sorted out. There was a
little asymmetric routing going on, but the great folks at HGC were very
quick in helping us fix this.
We had some problems with HGC support at the Hutch before, but they are
great and fast now. At the other end in Johannesburg, we have our sister
company, and Ruhann was great in helping us out.
In the end, with the Riverbed, we get 1.5 MBytes/sec bidirectional for SMB2
(active/active SQL database log shipping). With the High-Speed TCP setting,
we could get up to 7 MBytes/sec.

Regards,

-Luan