Satellite latency

I think this question may have been asked before, but what is the minimum
latency and delay I can expect from a satellite connection? What kind of
delay have others seen in a working situation?

As others have pointed out, GEO altitude is about 22,000 miles, plus extra
path length for the offset of the sending and receiving stations in both
latitude and longitude. There are a lot of satellites up there, on the
equator, spaced every 0.5 degrees or so.
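
For a rough sanity check of the propagation delay alone, here is a minimal
sketch (assumptions: the nominal GEO altitude, and a worst-case slant range
for a ground station seeing the satellite right on the horizon):

# One-way propagation delay for a geostationary hop (up + down).
C_KM_PER_S = 299_792.458      # speed of light
GEO_ALTITUDE_KM = 35_786      # ~22,000 miles above the equator
MAX_SLANT_RANGE_KM = 41_679   # station at 0 degrees elevation

best_case_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000
worst_case_ms = 2 * MAX_SLANT_RANGE_KM / C_KM_PER_S * 1000
print(f"{best_case_ms:.0f} to {worst_case_ms:.0f} ms")   # ~239 to ~278 ms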

Another latency factor is something not normally factored in land-based
systems. Doppler shift! Satellites are seldom in perfect orbits - they drift
up and down, returning to zero (thus in stable orbit), but the relative
motion results in clocking issues.

Satellite modems usually have 8ms receive buffers to accommodate drift, and
when local clocking is not available, clocking transmit from receive requires
at least double that. On the down-swing the buffer drains; on the up-swing it
fills (both rates are slow, but last a long time). Regardless, the buffer
itself adds 8ms to the delay in each direction.

A good working value is 280ms each way, *plus* any router serialization
delay and terrestrial backhaul at both ends (unless you live at an earth
station).
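
Serialization delay can be non-trivial on the slow links often found at the
edges of such a path. A minimal sketch, using illustrative packet sizes and
link speeds rather than figures from this thread:

# Time to clock one packet onto the wire at a given link speed.
def serialization_ms(packet_bytes: int, link_bps: int) -> float:
    return packet_bytes * 8 / link_bps * 1000

print(serialization_ms(1500, 64_000))     # ~187 ms at 64 kb/s
print(serialization_ms(1500, 2_000_000))  # ~6 ms at 2 Mb/s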

What factors should be considered in end to end connectivity architecture
when utilizing a satellite link?

If you have end users, put in a huge web cache, as big as you can afford.
Not only will it reduce bandwidth usage, it lowers average latency
considerably. It doesn't help non-cacheable content or other applications
directly, but lowering usage at least minimizes latency.

Another approach is asymmetric - buy a low-speed terrestrial link for the
return path. Round-trip time is the sum of transmit and receive paths, so
reducing even one of them helps most applications.
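
As a rough illustration of how much that helps (assumed round numbers: the
280ms satellite figure from above and ~30ms for the terrestrial return path):

# Round-trip time is the sum of the two one-way paths.
SAT_ONE_WAY_MS = 280          # working value from above
TERRESTRIAL_ONE_WAY_MS = 30   # assumed low-speed terrestrial return path

rtt_all_satellite = 2 * SAT_ONE_WAY_MS                      # ~560 ms
rtt_asymmetric = SAT_ONE_WAY_MS + TERRESTRIAL_ONE_WAY_MS    # ~310 ms
print(rtt_all_satellite, rtt_asymmetric)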

Brian Dickson

> Another latency factor is something not normally factored in land-based
> systems. Doppler shift! Satellites are seldom in perfect orbits - they
> drift up and down, returning to zero (thus in stable orbit), but the
> relative motion results in clocking issues.
>
> Satellite modems usually have 8ms receive buffers to accommodate drift,

8 ms is enough to accommodate a 2 ms * c = 600 km drift in both directions.
Unless Kepler is letting me down, this would mean a 30 minute difference
in the time it takes the satellite to orbit the earth. Somehow I'm fairly
sure they manage to keep this much closer to 23 hrs, 56 min.
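
The back-of-the-envelope Kepler check, for anyone who wants to reproduce it
(a sketch; the semi-major axis and sidereal day are the standard GEO values):

# Kepler's third law: T ~ a**1.5, so a small change da in the semi-major
# axis changes the period by roughly 1.5 * da/a.
GEO_SEMI_MAJOR_AXIS_KM = 42_164
SIDEREAL_DAY_MIN = 23 * 60 + 56   # ~1436 minutes

delta_a_km = 600
delta_t_min = 1.5 * delta_a_km / GEO_SEMI_MAJOR_AXIS_KM * SIDEREAL_DAY_MIN
print(f"period change: ~{delta_t_min:.0f} minutes")   # ~31 minutes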

All of this stuff is only of interest to applications with very strict
bit rate requirements; for IP we can easily do without it.

> What factors should be considered in end to end connectivity architecture
> when utilizing a satellite link?

> If you have end users, put in a huge web cache, as big as you can afford.

Hmmm, maybe put one on board the satellite?

> Not only will it reduce bandwidth usage, it lowers average latency
> considerably. It doesn't help non-cacheable content or other applications
> directly, but lowering usage at least minimizes latency.

It also helps with the bandwidth-delay-product problem (throughput is
capped at window size / RTT), because you get to terminate the TCP
connection in a place which is also under your control, so you can forgo
slow start and use the TCP high-performance extensions and the large window
sizes they make possible between the client and the proxy.
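
To put numbers on that (a sketch; the 560ms round trip and the link speeds
are illustrative, not figures from the posts above):

# Bandwidth-delay product: the TCP window needed to keep a path full.
def window_needed_bytes(link_bps: int, rtt_s: float) -> float:
    return link_bps * rtt_s / 8

print(window_needed_bytes(2_000_000, 0.56))   # ~140 kB: needs window scaling
print(window_needed_bytes(45_000_000, 0.56))  # ~3.2 MB for a 45 Mb/s path
# A plain 64 kB window over a 560ms RTT caps throughput at about
# 65_535 * 8 / 0.56 = ~936 kb/s, no matter how fast the link is.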

> Another approach is asymmetric - buy a low-speed terrestrial link for the
> return path. Round-trip time is the sum of transmit and receive paths, so
> reducing even one of them helps most applications.

And you can tunnel interactive traffic over the low bandwidth terrestrial
connection and only use the satellite path for bulk.

About half our bandwidth is currently satellite and has been for years. We
use it directly for home customers, small (64kb-2Mb) wholesale customers,
and for our own POPs (over 45Mb).

Latency - 95% of customers never do anything interactive; satellite is
useless for gamers, but for ssh and the like it just feels like a dialup
(which I'm using now, and which "pauses" for a couple of seconds every now
and then).

TCP Windows - We show home customers how to adjust these but don't bother
for any of our hardware.
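
For what it's worth, the per-application equivalent of that adjustment looks
something like the sketch below; the 140 kB figure is the illustrative
bandwidth-delay product from earlier, and end users would normally change
the OS-wide defaults once rather than patch applications.

import socket

# Ask for socket buffers large enough to cover the bandwidth-delay product,
# so the advertised TCP window can actually fill the satellite path.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 140_000)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 140_000)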

That's about it; the whole thing isn't rocket science - packets go from A to
B and take a bit longer to get there. You have asymmetric routing, but the
only problem that happens is that US NOC admins run around screaming when
you tell them about it.

Here is how we used to do it:

interface Hssi1/1/1
description Newtec modulator: PAS8:20K, XXXMbps, 12.575GHz, xxxMSym, FECx/x
no ip address
no ip directed-broadcast
encapsulation frame-relay
ip route-cache flow
no ip route-cache distributed
no ip mroute-cache
no keepalive
!
interface Hssi1/1/1.1 point-to-point
description PVC to TIG-AUS-SYD-1.IHUG.NET
ip address 203.109.xx.xx 255.255.255.252
no ip directed-broadcast
no ip mroute-cache
frame-relay interface-dlci 20 CISCO
!

and the same sort of thing at the other end.

We have changed everything to PC-based transmitters and receivers however.

A live sat-routed IP you can ping is 203.109.203.107. Please don't flood
it or anything, though. The return is via ground bandwidth; a comparison
ground-routed IP on the same network is 203.98.23.114. Please don't abuse
this either.

Simon Lyall wrote (on Feb 27):

> That's about it; the whole thing isn't rocket science - packets go from A
> to B and take a bit longer to get there. You have asymmetric routing, but
> the only problem that happens is that US NOC admins run around screaming
> when you tell them about it.

I remember trying to explain to some US NOC that yes, our satellite uplink
is in London, with a UK company, yes, hosted in London, and that yes, the
customer is in Israel, dialling up to an Israeli ISP with an IP address
in Israel that we happen to advertise transit for in London.

And yes, it does work.

This particular bastardisation used a multihop BGP session for the Israeli
ISP to advertise a netblock to us, which we transited. While this session
was in place, traffic towards clients went over the satellite. Without it,
it ended up going to Israel via the client's dialup. I love simple
solutions for resilience.

Honestly. Some people just needed a bit of imagination.

Chris.