Why is latency between the east and west coasts so bad? Speed of light
accounts for about 15ms each direction for a 30ms round trip. Where
does the other 30ms come from and why haven't we gotten rid of it?
c = 186,282 miles/second
2742 miles from Seattle to Washington DC mainly driving I-90
The speed of light in fiber is only about 2/3 the speed of light in a vacuum, so that 15 ms is really about 22.5 ms. That brings the total to about 45 ms.
Some of it would come from the extra miles of glass sitting in that 2,742-mile route in the form of slack loops.
Some would come from fiber routes not being straight lines. Allied Fiber’s formerly planned route from the Westin Building to Equinix Ashburn was about 4,464 miles. That’s about 63% longer than your 2,742 miles. Add that 63% to the previous 45 ms and you’re at roughly 72 ms.
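A quick sanity check of that arithmetic in Python (the 2/3-of-c figure is the usual rule of thumb for light in glass, not a measured value):

# Inputs are the figures from this thread.
C_MILES_PER_SEC = 186_282        # speed of light in a vacuum
FIBER_FRACTION = 2 / 3           # light in glass fiber travels at roughly 2/3 c

def rtt_ms(miles, fraction_of_c=FIBER_FRACTION):
    """Round-trip propagation time in milliseconds for a given path length."""
    one_way_sec = miles / (C_MILES_PER_SEC * fraction_of_c)
    return 2 * one_way_sec * 1000

print(rtt_ms(2_742, 1.0))   # ~29.4 ms: vacuum, straight-shot I-90 distance
print(rtt_ms(2_742))        # ~44.2 ms: same distance, but in glass
print(rtt_ms(4_464))        # ~71.9 ms: Allied Fiber's planned route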
Besides the refractive index of glass, which makes light go at about 2/3 the speed it can in a vacuum, "stuff" also includes many other things like modulation/demodulation, buffers, etc. I did a Quora answer on this you can find at:
c = 186,282 miles/second
This is c in a vacuum. Light transmission through a medium is slower; in the case of an optical fiber, about 31% slower.
My lowest-latency transit paths from Palo Alto to the Ashburn area are around 58 ms. The great-circle route for the two DCs involved is a distance of 2,408 miles, which gives you a 39.6 ms lower bound.
The path isn’t quite as straight as that, but if you eliminate the six routers in the path and count up the OEO regens, I’m sure you can account for most of the extra in the form of distance.
Doing some rough back-of-the-napkin math, an ultra-low-latency path from, say, the Westin in Seattle to 1275 K in DC will be in the 59 ms range. This is considerably longer than the I-90 driving distance would suggest because:
Best-case optical distance is more like 5,500 km, in part because the path will actually go Chicago-NJ-WDC, and in part because a distance of 5,000 km by right-of-way will be more like 5,500 km when you account for things like maintenance coils, in-building wiring, etc.
You’ll need (at least) three OEO regens over that distance, since there’s no value in spending 5x to deploy an optical system that wouldn’t need them (like the ones that manage that distance subsea). This is in addition to ~60 in-line amplification nodes, although those add significantly less latency, even in aggregate.
Some of that is simply due to cost savings. In theory, you could probably spend a boatload of money to build a route that cuts off some of the distance inefficiency and gets you closer to 4500 km optical distance with minimal slack coil, and maybe no regens, so you get a real-world performance of 46 ms. But there are no algo trading sites of importance in DC, and for everybody else there’s not enough money in the difference between 46 and 59 ms for someone to go invest in that type of deployment.
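To make that napkin math reproducible, here’s a parameterized sketch. The per-regen and per-amplifier delays are placeholder assumptions (the post only says amps add far less than regens, even in aggregate); the distances and equipment counts are from the post:

C_KM_PER_SEC = 299_792
FIBER_INDEX = 1.47               # assumed group index of standard fiber

def path_rtt_ms(optical_km, regens=0, amps=0,
                regen_us=50.0,   # assumed per-OEO-regen delay, microseconds
                amp_us=0.5):     # assumed per-inline-amp delay, microseconds
    propagation_ms = 2 * optical_km / (C_KM_PER_SEC / FIBER_INDEX) * 1000
    equipment_ms = 2 * (regens * regen_us + amps * amp_us) / 1000
    return propagation_ms + equipment_ms

print(path_rtt_ms(5_500, regens=3, amps=60))   # ~54 ms; the quoted 59 ms implies
                                               # extra distance or gear not modeled here
print(path_rtt_ms(4_500))                      # ~44 ms; near the hypothetical 46 ms build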
Taking advantage of this thread, may I ask something? I have heard of “wireless fiber optics”, something like an antenna with a laser pointing from one building to the other. Having said this, can I assume this link will have a lower RTT than a laser through a fiber optic made of glass?
"Several commentators quickly caught onto the fact that an extremely
expensive network whose main selling point is long-distance,
low-latency coverage has a unique chance to fund its growth by
addressing the needs of a wealthy market that has a high willingness
to pay — high-frequency traders."
And thus far, no one has mentioned switching speed and other
electronic overhead such as the transceivers (that’s the big one,
IIRC).
This will be something from tens of meters (low-latency switch), to a
few hundred meters (typical pipeline), to a 2 km delay (NPU+FAB+NPU)
per active IP device. Whether that is a big one, I guess, depends:
cross-Atlantic, no; inside a rack, maybe.
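Those figures are latency expressed as equivalent meters of fiber. Converting at roughly 5 ns per meter (light at ~2e8 m/s in glass), with representative values picked from the ranges above:

NS_PER_FIBER_METER = 5           # light covers ~0.2 m/ns in glass

for label, meters in [("low-latency switch", 30),
                      ("typical pipeline", 300),
                      ("NPU+FAB+NPU chassis", 2_000)]:
    print(f"{label}: ~{meters * NS_PER_FIBER_METER:,} ns")
# low-latency switch: ~150 ns, typical pipeline: ~1,500 ns,
# chassis: ~10,000 ns (10 us)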
I think he might be referring to the newer modulation types (QAM) on long-haul
transport. There’s quite a bit of time, in microseconds, that the encoding into
QAM and adding FEC takes. You typically won’t see this at the pluggable level
between switches and stuff.
60 ms is nothing really, and I’m happy I don’t need to play in the HFT space
anymore. I do wish my home connection wasn’t 60 ms across town, as Spectrum
takes TPA-ATL-DCA-DEN-NY to get to my rack.
FEC is low tens of meters (i.e., low tens of nanoseconds); QAM is less.
It won’t impact the pipeline or NPU scenarios meaningfully, but it will
impact the low-latency scenario.
It sounds like attenuation was a big problem: "in the space of 18 months
the attenuation in data-transmitting hollow-core fibers has been reduced
by over a factor of 10, from 3.5 dB/km to only 0.28 dB/km, within a factor
of two of the attenuation of conventional all-glass fiber technology."
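To put those attenuation numbers in context: attenuation sets how far a span can run before it needs amplification. A sketch with an assumed 20 dB span budget (a round number, not from the article), using a typical ~0.18 dB/km for conventional fiber:

SPAN_BUDGET_DB = 20              # assumed optical budget per span

for label, db_per_km in [("hollow-core, 18 months prior", 3.5),
                         ("hollow-core, current", 0.28),
                         ("conventional all-glass (typical)", 0.18)]:
    print(f"{label}: ~{SPAN_BUDGET_DB / db_per_km:.0f} km between amps")
# 3.5 dB/km allows only ~6 km spans; 0.28 dB/km allows ~71 km,
# approaching conventional fiber's ~111 km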