Geosynchronous orbit is about 36,000 km above the surface of the earth.
Round-trip to the satellite is ~72,000 km; the speed of light is
300,000 km/sec. That works out to 240 milliseconds at the minimum for
one-way packet delivery.
--Steve Bellovin, http://www.research.att.com/~smb
Full text of "Firewalls" book now at http://www.wilyhacker.com
In a message written on Tue, Feb 26, 2002 at 09:07:14PM -0500, Steven M. Bellovin wrote:
Geosynchronous orbit is about 36,000 km above the surface of the earth.
Round-trip to the satellite is ~72,000 km; the speed of light is
300,000 km/sec. That works out to 240 milliseconds at the minimum for
one-way packet delivery.
Remember that a geosynchronous satellite must orbit over the equator.
Let's say for the sake of argument it's over Mexico, you're in New
York, and the downlink station is in San Diego. The 36,000 km is the
distance straight "down" to Mexico; it's probably more like 50,000 km
to New York and 45,000 km to San Diego. And if you're in New York,
and your mail server is in New York, but the downlink was to San
Diego, you've got another 4,000 km across the country. Now you're up
closer to 100,000 km.
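For a rough sense of what those path lengths mean in delay terms, using the
speed-of-light figure from the quoted message (the distances are only the
estimates above, not measurements):

    # One-way propagation delay is just path length divided by the speed of light.
    speed_of_light_km_s = 300_000
    paths_km = {"idealized straight-down path (2 x 36,000 km)": 72_000,
                "longer real-world path (~100,000 km)": 100_000}
    for label, km in paths_km.items():
        print(f"{label}: {km / speed_of_light_km_s * 1000:.0f} ms one-way")
    # ~240 ms vs ~333 ms, before any terrestrial hops, encoding, or queueing delay.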
Add to this the inefficient encoding done on some satellites, and the fact
that most (consumer) systems use a broadcast medium that can buffer packets,
and you see why people report 1-second RTTs with services like StarBand.
It's better than nothing, but it's a rough primary connection.
In my experience, the "normal" latency is somewhere around 600-800 ms
(ping times).
However, you should also be aware of issues related to TCP window sizes.
Due to the windowing mechanism between the sending system (say a web server
in a farm connected with multiple OC192 connections) and the receiving
system (say a PC connected to a broadband infrastructure like @Home), an
individual TCP session (like SMTP) will hit a throughput ceiling, which
means that even with a DS3 satellite connection, an individual TCP session
won't see more than something like 600 kbps of throughput (I forget the
actual number) if there is a satellite connection somewhere in the middle.
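To make that ceiling concrete, here is a back-of-the-envelope sketch; the
64 KB window (the pre-RFC1323 maximum) and 600 ms RTT are assumed,
illustrative numbers rather than measurements:

    # A single TCP session can only have one window of data in flight per RTT,
    # so its throughput is capped at window / RTT no matter how fat the link is.
    window_bytes = 64 * 1024   # assumed maximum window without window scaling
    rtt_seconds = 0.6          # assumed satellite round-trip time
    ceiling_bps = window_bytes * 8 / rtt_seconds
    print(f"per-session ceiling: {ceiling_bps / 1000:.0f} kbps")   # roughly 870 kbps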
One time a client was complaining that they couldn't do 2 Mbit FTPs on their
E1 satellite connection. They were under the impression that the link itself
was being throttled.
In order to demonstrate that the link itself could do 2 Mbit, I set up 10 or
20 concurrent scp sessions over ssh, and then showed that the interface was
doing an aggregate of 2 Mbit (well, a bit shy of that).
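A minimal sketch of that kind of test, assuming passwordless ssh and a
hypothetical remote host and file (the names below are placeholders, not the
client's actual setup):

    import subprocess

    # Start many concurrent scp copies; each one is window/RTT limited on its own,
    # but in aggregate they can fill the link, showing the link isn't throttled.
    HOST = "remote.example.net"          # placeholder host
    procs = [subprocess.Popen(["scp", f"{HOST}:/tmp/testfile", f"/tmp/copy{i}"])
             for i in range(15)]
    for p in procs:
        p.wait()
    # Watch the interface counters separately to see the aggregate rate.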
If you adjust the window size on the sending and receiving systems, you can
improve this, but that solution is impractical, as you would need to get
everyone on the Internet (or at least all of the web servers and web surfers
you are servicing) to make adjustments to their local TCP stacks.
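The window you would need follows from the bandwidth-delay product; a rough
sketch, with assumed numbers matching the E1 case above:

    # Window needed ~= bandwidth x RTT (the bandwidth-delay product).
    link_bps = 2_048_000    # assumed E1 line rate
    rtt_seconds = 0.6       # assumed satellite round-trip time
    bdp_bytes = link_bps / 8 * rtt_seconds
    print(f"window needed: ~{bdp_bytes / 1024:.0f} KB")   # ~150 KB, well past the classic 64 KB limit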
There are 3rd-party solutions which can improve the throughput, but even
with those, there are still speed-of-light issues which will limit
per-session throughput.
Is RFC1323 support such a 3rd-party solution, or does it not solve the whole
problem here?
It's been a while since I looked at it, but I seem to recall there was a lack
of implementation of/adherence to that RFC in Windows TCP stacks.
I think for RFC1323 to be effective, it needs to be working on the sending
and receiving systems, not just the intermediary routers.
It's been a while since I looked at it, but I seem to recall there was a lack
of implementation of/adherence to that RFC in Windows TCP stacks.
I don't think that has been the case for a while, now.
I think for RFC1323 to be effective, it needs to be working on the sending
and receiving systems, not just the intermediary routers.
RFC1323 can only be supported on TCP endpoints, so there's nothing you can or should do on intermediary routers.
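As an illustration that it is purely an end-host setting, on a Linux endpoint
(the path below is Linux-specific; other stacks expose the knob differently)
you can check whether window scaling is turned on:

    # RFC1323 window scaling is negotiated between the two TCP endpoints during
    # the handshake; routers along the path never see or touch it.
    with open("/proc/sys/net/ipv4/tcp_window_scaling") as f:
        print("window scaling enabled:", f.read().strip() == "1")
    # Both the sender and the receiver need it enabled before windows larger
    # than 64 KB can actually be used.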
RFC2488 gives a good description of general satellite transmission characteristics for IP, together with a recipe book of mechanisms which can improve TCP performance. RFC2760 may also be interesting.
Joe
>>Remember that a geosynchronous satellite must orbit over the equator.
Let's say for the sake of argument it's over Mexico, you're in New
York, and the downlink station is in San Diego. The 36,000 km is the
distance straight "down" to Mexico; it's probably more like 50,000 km
to New York and 45,000 km to San Diego. And if you're in New York,
and your mail server is in New York, but the downlink was to San
Diego, you've got another 4,000 km across the country. Now you're up
closer to 100,000 km.<<
Not wanting to get picky about ~20,000 km, but the maximum -usable- slant
path is ~41,000 km.
--Michael
Jim Mercer wrote:
If you adjust the window size on the sending and receiving systems, you can
improve this, but that solution is impractical, as you would need to get
everyone on the Internet (or at least all of the web servers and web surfers
you are servicing) to make adjustments to their local TCP stacks.
The receiver is the one that informs the sender how large a window it
can accept, so it can be practical for a subscriber installation. It
wouldn't be a good idea to park a bunch of servers behind one of these
links, but any receiving node that set its TCP receive window to 2x the
byte/sec capacity of the link should see decent throughput.
Tony
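A sketch of that receiver-side tuning, assuming an illustrative 500 kbit/s
downlink and a placeholder server; the receive buffer has to be set before
connecting so the larger window can be advertised (and, above 64 KB, window
scaling negotiated):

    import socket

    link_bps = 500_000                    # assumed downlink capacity
    rcv_window = 2 * (link_bps // 8)      # the "2x the byte/sec capacity" rule of thumb
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # The receive buffer size caps the window this host advertises to the sender.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcv_window)
    s.connect(("www.example.com", 80))    # placeholder server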