Identifying submarine links via traceroute

Hello,

I am a researcher at the University of Wisconsin. My colleagues at Northwestern University and I are studying submarine cable infrastructure.

Our interest is in identifying submarine links in traceroute measurements. Specifically, for a given end-to-end traceroute measurement, we would like to be able to identify when two hops are separated by a submarine cable. Our initial focus has been on inter-hop latency, which can expose long links. The challenge is that terrestrial long-haul links may have latencies equal to or longer than those of short submarine links. So, we’re interested in whether there may be other features (e.g., persistent congestion, naming conventions in router interfaces, peering details, etc.) or techniques that would indicate submarine links.
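For concreteness, a minimal sketch of the inter-hop latency heuristic described above. The hop data and the 20 ms threshold are illustrative assumptions, not validated values; min RTTs are used to damp queueing noise:

```python
# Sketch: flag candidate long-haul links from inter-hop min-RTT deltas.
# All hop data below is made up for illustration; the 20 ms threshold
# is an arbitrary assumption, not a validated cutoff.

# (hop number, hostname, min RTT in ms across probes)
hops = [
    (1, "gw.example.net", 0.4),
    (2, "core1.example.net", 1.1),
    (3, "ix-peer.example.net", 2.0),
    (4, "bb1.far-side.example.net", 68.5),  # big jump: long link?
    (5, "edge.far-side.example.net", 69.2),
]

THRESHOLD_MS = 20.0  # assumed cutoff for a "long" link

def long_links(hops, threshold=THRESHOLD_MS):
    """Return (near_hop, far_hop, delta_ms) triples where the min-RTT
    jump between consecutive hops exceeds the threshold. Negative
    deltas (reordering, asymmetric return paths) are simply skipped."""
    out = []
    for (_, near, r1), (_, far, r2) in zip(hops, hops[1:]):
        delta = r2 - r1
        if delta >= threshold:
            out.append((near, far, delta))
    return out

for near, far, delta in long_links(hops):
    print(f"{near} -> {far}: +{delta:.1f} ms (candidate long-haul link)")
```

As the thread notes, this alone cannot distinguish a submarine hop from a terrestrial long-haul hop; it only narrows the candidate set.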

Any thoughts or insights you might have would be greatly appreciated - off-list responses are welcome.

Thank you.

Regards, PB

Paul Barford
University of Wisconsin - Madison

Nice challenge.

Check out infrapedia.com, where you can see the length of cables; this may help you “guess” latency, but given that so many cables are within 5-10 ms of one another on some paths, there may be false positives.
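A back-of-the-envelope sketch of turning a published cable length into an expected RTT. The propagation speed assumes light in fiber at roughly c/1.47 (~204 km/ms), and the example length is hypothetical; real RTTs will run higher because of terrestrial backhaul, cable slack, and equipment delay:

```python
# Sketch: rough RTT estimate from a cable's published length.
# Assumes ~204 km/ms propagation in fiber (about 2/3 the vacuum
# speed of light); ignores backhaul, slack, and equipment delay.

SPEED_IN_FIBER_KM_PER_MS = 204.0

def rtt_estimate_ms(cable_length_km):
    """Round-trip propagation delay over the cable alone, in ms."""
    return 2 * cable_length_km / SPEED_IN_FIBER_KM_PER_MS

# Hypothetical 6,600 km transoceanic span:
print(f"~{rtt_estimate_ms(6600):.1f} ms RTT")
```

This gives a lower bound to compare against measured inter-hop deltas; as noted above, several cables on a path can land within a few ms of each other, so the estimate disambiguates only coarsely.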

A very good topic to work on.

Back in the day, when submarine cables were not as widespread, it was not uncommon to see things like “FLAG” or “APCN-2” or “SMW-3” in traceroutes. I haven’t seen that in a very long time, but some operators likely still do it. For traceroutes that visibly cross oceans, e.g., lhr-jfk, mrs-mba, hnd-lax, mru-cdg, etc., you could glean it from there. But many operators do not follow any “common norm” to annotate things like this, so YMMV.

You also find some countries that will use a submarine festoon as either a primary or backup route for a terrestrial link. In such cases, the distances may be the same, or even shorter, across the festoon; e.g., consider a festoon cable between Cape Town and Durban vs. a land-based run between the same two points.

Considering how widespread submarine links are for both short and long spans, I think folk are simply treating them as any other link, from an operational perspective. You may be able to come up with a semi-automatic mechanism to measure this, but I fear that without deliberate and consistent human intervention, the data could get stale very quickly.

Mark.
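The naming-convention idea above can be sketched mechanically. The token lists here are tiny illustrative samples, not a catalogue of real conventions, and, as the thread stresses, many operators annotate nothing at all:

```python
# Sketch: scan a traceroute hostname for cable-system names or
# coastal airport/city codes that hint at a submarine crossing.
# Both token sets are small, made-up samples for illustration only.
import re

CABLE_TOKENS = {"flag", "apcn", "smw", "seamewe", "aae"}
# airport/city codes on opposite sides of an ocean (tiny sample,
# drawn from the pairs mentioned in the thread)
COASTAL_CODES = {"lhr", "jfk", "mrs", "mba", "hnd", "lax", "mru", "cdg"}

def submarine_hints(hostname):
    """Split a hostname on separators and digits, and intersect the
    resulting tokens with the known cable names and coastal codes."""
    tokens = set(re.split(r"[.\-_0-9]+", hostname.lower())) - {""}
    return {"cables": tokens & CABLE_TOKENS,
            "codes": tokens & COASTAL_CODES}

print(submarine_hints("xe-0-1-0.smw3.lhr2.example.net"))
```

Matches would only ever be suggestive; without a maintained token list (the staleness problem Mark raises), false negatives dominate quickly.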

As Mark says, YMMV, as different providers will have markedly different conventions. One additional challenge that will be widespread, however, is that most carriers are not placing their L2/L3 hardware in the cable landing stations, preferring instead to extend from the CLS to more centralized POP locations via Layer 1. So what you will see between a city pair like Tokyo-Seattle, which very obviously requires some wet capacity, will actually be some combination of wet and terrestrial links. Between the terrestrial extensions and the L2/L3 overhead, it would be difficult to determine exactly what the underlying cable(s) are even if you had a good idea of the CLS-to-CLS latency.

At a previous $dayjob, for example, we had both 100% terrestrial and partially wet links in use to connect our core POPs in Seattle and Vancouver directly. While the wet link had about a 20% longer optical distance at Layer 1, the distance was short enough that a trace would generally return 3 or 4 ms between core nodes pretty much irrespective of the situation (and the trunks terminated into the same routers in the core anyway, which is a whole other story). So it would have been impossible to tell which path was used, even though I knew exactly what the backbone architecture looked like.

Again, YMMV as different providers will have different standards and different city pairs will be easier to determine than others, but there is no “use this one weird trick” rule here.

And with new cables being built (largely by the content folk), CLS-CLS termination is no longer in favour. New cables are now being extended into city data centres as an "informal" standard, because the content folk are not interested in dealing with CLS politics, especially for cables where they collaborate, to some extent, with regular network operators.

Even though the C2C cable's name stood for "city-to-city", it wasn't a true city-to-city cable. Some of the most recent cable builds (I'm talking about the last 1.5-2 years) have been the ones mandating that SLTEs (submarine line terminal equipment) be deployed at carrier-neutral data centres, and not at the CLS. The CLS is just there to house the PFE (power feeding equipment).

Mark.