# 60 ms cross-continent

Howdy,

Why is latency between the east and west coasts so bad? Speed of light
accounts for about 15ms each direction for a 30ms round trip. Where
does the other 30ms come from and why haven't we gotten rid of it?

c = 186,282 miles/second
2742 miles from Seattle to Washington DC mainly driving I-90

2742/186282 ~= 0.015 seconds

Thanks,
Bill Herrin

Speed of light in a fiber is more like 124K miles per second. It
depends on the refractive index. And of course amplifiers and stuff.

... JG

Speed of light in glass ~200,000 km/s

100 km of fiber ~= 1 ms RTT

Coast-to-coast ~6,000 km ~= 60 ms RTT

Tim:>

And of course in your more realistic example:

2742 miles = 4412 km ~= 44 ms optical RTT with no OEO in the path

The speed of light in fiber is only about 2/3 the speed of light in a vacuum, so that 15 ms is really about 22.5 ms. That brings the total to about 45 ms.
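That arithmetic is easy to sanity-check. A minimal Python sketch, assuming the ~2/3-of-c rule of thumb for light in fiber used above:

```python
# Back-of-envelope fiber propagation delay, assuming light in glass
# travels at roughly 2/3 the vacuum speed of light.
C_KM_S = 299_792.458           # speed of light in a vacuum, km/s
V_FIBER_KM_S = C_KM_S * 2 / 3  # ~200,000 km/s in fiber

def fiber_rtt_ms(path_km: float) -> float:
    """Round-trip propagation delay in ms over path_km of fiber."""
    return 2 * path_km / V_FIBER_KM_S * 1000

# 2,742 miles ~= 4,412 km, as in the example above
print(f"{fiber_rtt_ms(4412):.1f} ms")  # ~44 ms RTT, propagation only
```

This is propagation only; slack loops, indirect routing, and OEO gear (discussed below in the thread) all add on top.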

Some of it would come from the extra miles of glass hiding in that 2,742 miles in the form of slack loops.

Some would come from fiber routes not being straight lines. Allied Fiber’s formerly planned route from the Westin Building to Equinix Ashburn was about 4,464 miles. That’s about 63% longer than your 2,742 miles. Add that 63% to the previous 45 ms and you’re at about 73 ms.

Besides the refractive index of glass, which makes light travel at about 2/3 the speed it can in a vacuum, "stuff" also includes many other things like modulation/demodulation, buffers, etc. I did a Quora answer on this you can find at:

> c = 186,282 miles/second

This is c in a vacuum. Light transmission through a medium is slower; in the case of an optical fiber, about 31% slower.

My lowest-latency transit paths from Palo Alto to the Ashburn area are around 58 ms. The great circle route for the two DCs involved is a distance of 2,408 miles, which gives you a 39.6 ms lower bound.

The path isn’t quite as straight as that, but if you eliminate the 6 routers in the path and count up the OEO regens, I’m sure you can account for most of the extra in the form of distance.
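The great-circle figure above can be reproduced with the haversine formula. A sketch using assumed approximate coordinates for Palo Alto and Ashburn (the exact DC locations would shift it slightly) and the ~124,000 mi/s in-fiber speed mentioned earlier in the thread:

```python
from math import radians, sin, cos, asin, sqrt

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in statute miles."""
    R = 3958.8  # mean Earth radius in miles
    p1, p2 = radians(lat1), radians(lat2)
    a = (sin(radians(lat2 - lat1) / 2) ** 2
         + cos(p1) * cos(p2) * sin(radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * asin(sqrt(a))

# Assumed approximate coordinates: Palo Alto, CA and Ashburn, VA
miles = great_circle_miles(37.44, -122.14, 39.04, -77.49)
rtt_ms = 2 * miles / 124_000 * 1000  # ~124,000 mi/s in fiber
print(f"{miles:.0f} miles, {rtt_ms:.1f} ms RTT lower bound")
```

This lands within a couple of percent of the 2,408 mi / ~39 ms figures quoted, with the residual down to the assumed endpoints and assumed fiber speed.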

Doing some rough back-of-the-napkin math, an ultra-low-latency path from, say, the Westin in Seattle to 1275 K in DC will be in the 59 ms range. This is considerably longer than the I-90 driving distance would suggest because:

• Best case optical distance is more like 5500 km, in part because the path actually will go Chicago-NJ-WDC and in part because a distance of 5000 km by right-of-way will be more like 5500 km when you account for things like maintenance coils, in-building wiring, etc.
• You’ll need (at least) three OEO regens at that distance, since there’s no value in spending 5x to deploy an optical system that wouldn’t need them (like the ones that manage that distance subsea). This is in addition to ~60 in-line amplification nodes, although those add significantly less latency even in aggregate.
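A rough budget for the estimate above can be sketched like this; the per-regen and per-amplifier delays are illustrative assumptions, not measured or vendor figures:

```python
# Rough latency budget for a ~5,500 km optical path with 3 OEO regens
# and ~60 in-line amps. Per-element delays below are assumptions.
V_FIBER_KM_S = 200_000  # ~2/3 c in glass

propagation_ms = 2 * 5500 / V_FIBER_KM_S * 1000  # 55 ms RTT
regen_ms = 3 * 2 * 0.010    # assume ~10 us per regen pass, each direction
amp_ms = 60 * 2 * 0.0005    # assume ~0.5 us per amplifier node

total_ms = propagation_ms + regen_ms + amp_ms
print(f"{total_ms:.1f} ms")  # propagation utterly dominates the budget
```

Whatever the exact per-element numbers, the point stands: nearly all of the ~59 ms is distance, which is why shortening the route matters far more than shaving equipment latency.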

Some of that is simply due to cost savings. In theory, you could probably spend a boatload of money to build a route that cuts off some of the distance inefficiency and gets you closer to 4500 km optical distance with minimal slack coil, and maybe no regens, so you get a real-world performance of 46 ms. But there are no algo trading sites of importance in DC, and for everybody else there’s not enough money in the difference between 46 and 59 ms for someone to go invest in that type of deployment.

Dave Cohen
craetdave@gmail.com

An intriguing development in fiber optic media is hollow core optical fiber, which achieves 99.7% of the speed of light in a vacuum.

https://www.extremetech.com/computing/151498-researchers-create-fiber-network-that-operates-at-99-7-speed-of-light-smashes-speed-and-latency-records

-mel

And thus far, no one has mentioned switching speed and other
electronic overhead such as the transceivers (that's the big one,
IIRC.)

I also don't recall if anyone mentioned that the 30ms is as the
photon flies, not fiber distance.

-Wayne

This will be something from tens of meters (low-latency switch), to a few
hundred meters (typical pipeline), to 2 km of delay (NPU+FAB+NPU) per
active IP device. Whether that is a big one depends: cross-Atlantic, no;
inside a rack, maybe.

Regards, Carsten
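Those figures express device delay as equivalent meters of fiber; at ~200,000 km/s in glass, each meter is about 5 ns. A small conversion sketch (the meter values are illustrative picks from the ranges given):

```python
# Convert "delay expressed as meters of fiber" into time, assuming
# ~200,000 km/s in glass, i.e. about 5 ns per meter of fiber.
NS_PER_METER = 1e9 / 200_000_000

for label, meters in [("low-latency switch", 30),
                      ("typical pipeline", 300),
                      ("NPU+FAB+NPU", 2000)]:
    print(f"{label}: {meters * NS_PER_METER / 1000:.2f} us")
```

So even a full NPU+FAB+NPU traversal is only ~10 µs, three orders of magnitude below the coast-to-coast propagation delay.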

Hello,

Taking advantage of this thread, may I ask something? I have heard of “wireless fiber optic”, something like an antenna with a laser pointing from one building to another. Can I assume this link will have a lower RTT than a laser through a fiber optic made of glass?

Thanks,

Alejandro,

This was also pitched as one of the killer-apps for the SpaceX
Starlink satellite array, particularly for cross-Atlantic and

https://blogs.cfainstitute.org/marketintegrity/2019/06/25/fspacex-is-opening-up-the-next-frontier-for-hft/

"Several commentators quickly caught onto the fact that an extremely
expensive network whose main selling point is long-distance,
low-latency coverage has a unique chance to fund its growth by
addressing the needs of a wealthy market that has a high willingness …"

Regards
Marshall

I think he might be referring to the newer modulation types (QAM) on long-haul
transport. There's quite a bit of time, in µs, that the encoding into QAM
and the added FEC take. You typically won't see this at the pluggable level
between switches and stuff.

60 ms is nothing really, and I'm happy I don't need to play in the HFT space
anymore. I do wish my home connection weren't 60 ms across town, as Spectrum
takes TPA-ATL-DCA-DEN-NY to get to my rack.

> I have heard of “wireless fiber optic”, something like an antenna with
> a laser pointing from one building to another. Can I assume this link
> will have a lower RTT than a laser through a fiber optic made of glass?

See: Terrabeam from about the year 2000.

> And thus far, no one has mentioned switching speed and other
> electronic overhead such as the transceivers (that’s the big one, IIRC.)

> This will be something from tens of meters (low-latency switch), to a
> few hundred meters (typical pipeline), to 2 km of delay (NPU+FAB+NPU)
> per active IP device.

> I think he might be referring to the newer modulation types (QAM) on
> long-haul transport. There’s quite a bit of time, in µs, that the
> encoding into QAM and the added FEC take.

> I do wish my home connection weren’t 60 ms across town, as Spectrum
> takes TPA-ATL-DCA-DEN-NY to get to my rack.

working on that …

Did you not read my posting on Quora?

Tim

FEC is low tens of meters (i.e. low tens of nanoseconds), QAM is less.
Won't impact the pipeline or NPU scenarios meaningfully, will impact
the low latency scenario.

Here's an update from 7 years after that article which hints at the
downside of hollow core fibre:

https://phys.org/news/2020-03-hollow-core-fiber-technology-mainstream-optical.html

It sounds like attenuation was a big problem: "in the space of 18 months
the attenuation in data-transmitting hollow-core fibers has been reduced
by over a factor of 10, from 3.5dB/km to only 0.28 dB/km within a factor
of two of the attenuation of conventional all-glass fiber technology."

Tony.