Fast backbone to NA from Asia

I apologize if this is off-topic, but I am looking to purchase some VPSes in Asia on a network that has really fast transit to NA and is not affected by the latest peering disputes. The network should have really good connectivity to India / Indonesia / Thailand, and ideally Australia as well.

Please reply off-list.

What do you mean by “really fast transit”?
Are you referring to round-trip latency? If so, what sort of latency target are you looking to hit?

Where in North America are you trying to reach, using which providers?
If the networks in North America and Asia are multihomed, that provides some level of protection from peering disputes.

Thank you
jms

Hi Jim,

Right now I’m trying to reach 250ms from Singapore to MTL (69.90.179.5) and back.

These networks can do it, for example:

https://lg.sin.psychz.net/
http://network.sg.gs/lg/

However, Linode and DigitalOcean both get 330ms from their networks in Singapore. They use Telstra/GTT/Arelion, which are very good networks, so I don’t understand what the deal is. I used to get 230-250ms to that region from our DC, but something changed recently and it’s all over the place.
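For what it's worth, here is the kind of check I mean - a minimal sketch (Python 3) that uses TCP connect time as a stand-in for ICMP RTT, and assumes something is actually listening on port 80 on that box, which may not be the case:

#!/usr/bin/env python3
# Minimal RTT check (Python 3).  Uses TCP connect time as a stand-in for
# ICMP round-trip time, which avoids needing raw sockets on a VPS.
# Assumption: something is actually listening on TARGET:PORT.
import socket
import time

TARGET = "69.90.179.5"   # the MTL host mentioned above
PORT = 80                # assumption -- adjust to an open port
THRESHOLD_MS = 250.0     # the round-trip target discussed in this thread
SAMPLES = 10

def tcp_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Time taken to complete a TCP handshake, in milliseconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.monotonic() - start) * 1000.0

rtts = []
for _ in range(SAMPLES):
    try:
        rtts.append(tcp_rtt_ms(TARGET, PORT))
    except OSError:
        pass  # timed out or refused -- skip this sample
    time.sleep(1)

if rtts:
    best, avg = min(rtts), sum(rtts) / len(rtts)
    verdict = "OK" if best <= THRESHOLD_MS else "over target"
    print(f"{TARGET}: best {best:.1f} ms, avg {avg:.1f} ms over {len(rtts)} samples ({verdict})")
else:
    print(f"{TARGET}: no successful samples")

TCP connect time slightly overstates a single RTT on lossy paths, but it gets you in the right ballpark without raw-socket privileges.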

Thanks!

Could it be related to the fiber cut in the Red Sea?

Asia-Pac <=> North America is typically faster via the Pacific, not the Indian Ocean.

The Red Sea cuts would impact Asia-Pac <=> Europe traffic.
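Rough numbers, for what they're worth - a back-of-the-envelope sketch (Python), using great-circle distance and ~200,000 km/s propagation in fibre; real cable routes are longer than great circles, so treat these as floors:

#!/usr/bin/env python3
# Back-of-the-envelope RTT floor: great-circle distance plus ~200,000 km/s
# propagation in fibre.  Real cable routes are longer than great circles,
# so observed RTTs sit well above this floor.
from math import radians, sin, cos, asin, sqrt

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

KM_PER_MS = 200.0  # roughly 2/3 of c in fibre

SIN = (1.35, 103.82)    # Singapore (approximate coordinates)
MTL = (45.50, -73.57)   # Montreal (approximate coordinates)

dist = great_circle_km(*SIN, *MTL)
print(f"SIN-MTL great circle: ~{dist:,.0f} km, RTT floor ~{2 * dist / KM_PER_MS:.0f} ms")
# Roughly 14,800 km and a ~150 ms floor; a sane trans-Pacific path lands
# around 220-250 ms, so 330 ms implies a sizeable detour somewhere.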

SMW-5 had an outage on the 19th of April around the Straits of Malacca between Malaysia and Indonesia. The suspected cause is a shunt fault. A shunt fault occurs when the cable insulation is damaged and exposes the electrical wire in the cable, causing a short. This shifts the virtual ground of the electrical circuit toward the shunt fault location.

In many cases, the PFE (Power Feeding Equipment) farthest from the shunt is able to re-balance and pump enough power down the cable from the CLS (cable landing station) to maintain the required voltage. However, in some cases - such as this particular SMW-5 fault - the short can become significant enough that there is a total loss of current to drive the segment, leading to an outage.
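To put toy numbers on that - a simplified sketch only, where the cable length, resistance, feed current and PFE limit are all invented for illustration, the shunt is treated as a dead short to sea ground, and repeater drops and fault resistance are ignored:

#!/usr/bin/env python3
# Toy model of a shunt fault on a double-end-fed submarine cable segment.
# Every figure below is invented for illustration, not SMW-5's real
# parameters.  Simplifications: the shunt is treated as a dead short to
# sea ground, and repeater voltage drops / fault resistance are ignored.

CABLE_KM = 8000          # segment length (made up)
R_OHM_PER_KM = 0.8       # conductor resistance (made up)
FEED_CURRENT_A = 1.0     # constant-current feed (made up)
PFE_MAX_KV = 6.0         # maximum voltage each PFE can supply (made up)

def required_kv(length_km: float) -> float:
    """Voltage a PFE must supply to drive the feed current over length_km."""
    return FEED_CURRENT_A * R_OHM_PER_KM * length_km / 1000.0

# Healthy cable: both PFEs share the load, virtual ground sits near mid-span.
print(f"healthy: each PFE supplies ~{required_kv(CABLE_KM / 2):.1f} kV")

# Shunt fault: sea ground at the fault pins the virtual ground there, so each
# PFE only powers its own side -- but the PFE far from the fault now has to
# cover almost the entire span on its own.
for fault_km in (4000, 2000, 200):
    near = required_kv(fault_km)
    far = required_kv(CABLE_KM - fault_km)
    status = "OK" if far <= PFE_MAX_KV else "far-end PFE over its limit -> outage"
    print(f"fault at {fault_km:>4} km: near PFE {near:.1f} kV, far PFE {far:.1f} kV ({status})")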

At this time, repairs are delayed until around the end of May. But given the location of the fault, I don't see how it would impact traffic toward North America from Singapore; the impact seems to mainly be on the Bangladesh <=> Singapore path.

This is Telstra:

 1  * *
    i-92.sgcn-core01.telstraglobal.net (202.84.219.174) 8 msec
 2  i-93.istt04.telstraglobal.net (202.84.224.190) 4 msec
    i-92.sgcn-core01.telstraglobal.net (202.84.219.174) [MPLS: Label 24210 Exp 0] 2 msec 2 msec
 3  ae10.cr4-sin1.ip4.gtt.net (67.199.139.109) 22 msec
    i-91.istt04.telstraglobal.net (202.84.224.197) 3 msec
    ae10.cr4-sin1.ip4.gtt.net (67.199.139.109) 3 msec
 4  ae16.cr0-mtl1.ip4.gtt.net (89.149.186.134) 239 msec
    ae10.cr4-sin1.ip4.gtt.net (67.199.139.109) 4 msec
    ae16.cr0-mtl1.ip4.gtt.net (89.149.186.134) 241 msec
 5  ae16.cr0-mtl1.ip4.gtt.net (89.149.186.134) 241 msec
    ip4.gtt.net (72.29.198.6) 325 msec
    ae16.cr0-mtl1.ip4.gtt.net (89.149.186.134) 240 msec
 6  10ge.mtl-bvh-xe3-1.peer1.net (216.187.113.107) 316 msec
    ip4.gtt.net (72.29.198.6) 330 msec 312 msec
 7  managed5.top-consulting.net (69.90.179.5) 329 msec

This is Arelion:

Tracing the route to 69.90.179.5

 1  sjo-b23-link.ip.twelve99.net (62.115.141.126) 183 msec
    sng-b4-link.ip.twelve99.net (62.115.137.243) 1 msec 10 msec
 2  sjo-b23-link.ip.twelve99.net (62.115.136.166) 164 msec 164 msec 163 msec
 3  nyk-bb1-link.ip.twelve99.net (62.115.137.168) 250 msec *
    motl-b2-link.ip.twelve99.net (62.115.137.143) 256 msec
 4  motl-b1-link.ip.twelve99.net (62.115.126.220) 244 msec 245 msec 242 msec
 5  aptummanaged-ic-367443.ip.twelve99-cust.net (62.115.174.15) 257 msec 256 msec 263 msec
 6  10ge.mtl-bvh-xe3-1.peer1.net (216.187.113.107) 274 msec 268 msec 264 msec
 7  managed5.top-consulting.net (69.90.179.5) 258 msec 259 msec 261 msec

This is GTT:

traceroute to 69.90.179.5 (69.90.179.5), 12 hops max, 52 byte packets
1 ae4.lr4-sin1.ip4.gtt.net (89.149.131.98) 1.015 ms 4.696 ms 0.741 ms
MPLS Label=185733 CoS=0 TTL=1 S=1
2 et-0-0-4.lr3-lax2.ip4.gtt.net (89.149.131.233) 162.343 ms 163.306 ms 210.101 ms
MPLS Label=983708 CoS=0 TTL=1 S=1
3 ae0.lr4-lax2.ip4.gtt.net (89.149.184.30) 161.869 ms 163.878 ms 163.078 ms
MPLS Label=417505 CoS=0 TTL=1 S=1
4 ae14.lr7-chi1.ip4.gtt.net (89.149.143.161) 217.467 ms 202.565 ms 203.520 ms
MPLS Label=188676 CoS=0 TTL=1 S=1
5 ae18.lr5-chi1.ip4.gtt.net (89.149.136.81) 206.519 ms 202.270 ms 203.047 ms
MPLS Label=913278 CoS=0 TTL=1 S=1
6 ae19.lr6-chi1.ip4.gtt.net (89.149.141.194) 202.559 ms 202.853 ms 202.225 ms
MPLS Label=422136 CoS=0 TTL=1 S=1
7 ae7.lr3-tor1.ip4.gtt.net (89.149.143.242) 212.461 ms 212.858 ms 230.237 ms
MPLS Label=692593 CoS=0 TTL=1 S=1
8 ae6.cr9-mtl1.ip4.gtt.net (89.149.187.242) 221.465 ms 230.552 ms 218.920 ms
MPLS Label=284571 CoS=0 TTL=1 S=1
9 ae16.cr0-mtl1.ip4.gtt.net (89.149.186.134) 217.961 ms 219.497 ms 219.392 ms
10 ip4.gtt.net (72.29.198.6) 217.164 ms 218.776 ms 217.011 ms
11 10ge.mtl-bvh-xe3-1.peer1.net (216.187.113.107) 220.991 ms 218.415 ms 220.009 ms
12 managed5.top-consulting.net (69.90.179.5) 217.273 ms 217.417 ms 217.079 ms

Looks like Telstra are handing off to GTT. The trans-Pacific jump shows up at hop 4, so the further latency increase at hop 6, on routers that are already in Montreal, suggests an asymmetric return path.
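If you want to eyeball where those jumps sit across a pile of pasted traces, a rough sketch (Python) - it only takes the first latency figure on each hop line, so multi-response hops like Telstra's above are only partly captured, and the 50ms threshold is arbitrary:

#!/usr/bin/env python3
# Rough helper: read traceroute output on stdin and flag large hop-to-hop
# RTT increases.  It only uses the first "<n> ms"/"<n> msec" figure per
# hop line, so multi-response hops are only partly captured.
import re
import sys

HOP_RE = re.compile(r"^\s*(\d+)\s")           # lines that start with a hop number
RTT_RE = re.compile(r"([\d.]+)\s*ms(?:ec)?")  # first latency figure on the line

hops = []  # (hop number, first RTT seen for that hop)
for line in sys.stdin:
    hop_m, rtt_m = HOP_RE.match(line), RTT_RE.search(line)
    if hop_m and rtt_m:
        hops.append((int(hop_m.group(1)), float(rtt_m.group(1))))

for (prev_hop, prev_rtt), (hop, rtt) in zip(hops, hops[1:]):
    jump = rtt - prev_rtt
    note = "  <-- big jump (long-haul span, or asymmetric return path?)" if jump > 50 else ""
    print(f"hop {prev_hop} -> hop {hop}: {prev_rtt:.0f} -> {rtt:.0f} ms ({jump:+.0f} ms){note}")

Usage would be something like piping one of the traces above into it on stdin.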

GTT is within your 250ms spec and Arelion is just over it at ~260ms; GTT looks to have the more efficient U.S. routing to get to MTL.

I'm not aware of any major subsea cut in Asia-Pac bar the SMW-5 one, so my guess is Linode and DigitalOcean might need to look into their routing with their upstreams out of Singapore.

Mark.

Hello,

> Asia-Pac <=> North America is typically faster via the Pacific, not the Indian Ocean.
>
> The Red Sea cuts would impact Asia-Pac <=> Europe traffic.

Yep, it hurts :(

                                 Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. Gi0-3.rtr-01.PAR.witbe.net 0.0% 179 0.3 0.3 0.2 10.4 0.7
2. 193.251.248.21 0.0% 179 3.3 1.3 0.8 19.1 2.1
3. bundle-ether305.partr2.saint-den 6.7% 179 87.1 4.5 1.1 156.7 17.4
4. prs-b1-link.ip.twelve99.net 22.3% 179 9.9 10.4 9.6 48.3 4.4
5. prs-bb2-link.ip.twelve99.net 2.2% 179 10.4 10.3 9.8 27.3 1.3
6. mei-b5-link.ip.twelve99.net 1.1% 178 17.3 18.1 17.2 115.1 7.4
7. prs-bb1-link.ip.twelve99.net 27.5% 178 370.6 365.9 334.7 381.8 8.3
8. ldn-bb1-link.ip.twelve99.net 68.5% 178 366.8 363.0 340.8 379.2 8.3
9. nyk-bb2-link.ip.twelve99.net 11.8% 178 377.6 362.4 322.2 451.0 12.3
10. palo-b24-link.ip.twelve99.net 50.8% 178 359.1 364.1 342.8 397.1 8.6
11. port-b3-link.ip.twelve99.net 0.0% 178 177.4 178.0 177.1 188.6 1.9
12. tky-b3-link.ip.twelve99.net 75.7% 178 355.2 364.0 339.8 377.5 8.0
13. tky-b2-link.ip.twelve99.net 50.0% 178 338.7 350.5 321.8 370.9 11.1
14. sng-b7-link.ip.twelve99.net 87.6% 178 307.8 318.6 306.8 332.0 6.7
15. sng-b5-link.ip.twelve99.net 86.4% 178 314.4 315.3 293.6 330.1 10.2
16. epsilon-ic-382489.ip.twelve99-cu 55.7% 177 364.8 362.9 346.5 391.8 9.1
17. 180.178.74.221 59.9% 177 357.6 366.9 343.4 562.7 25.8
18. swi-01-sin.noc.witbe.net 62.9% 176 374.7 366.4 346.6 381.3 8.3

AS1299 (Arelion) is now routing Paris to Singapore via the US and the Pacific...
Not sure the transition from hop 6 to hop 7 is what was expected, with a ~350ms increase...
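Back-of-the-envelope, the detour alone accounts for most of that - rough great-circle leg estimates (my own approximations) and the usual ~200 km/ms fibre figure:

#!/usr/bin/env python3
# Rough sanity check on the detour.  The per-leg great-circle distances are
# my own approximations, and ~200 km/ms is the usual fibre propagation
# figure; real cable routes are longer than great circles.
KM_PER_MS = 200.0

direct_km = 10_700                  # Paris -> Singapore, great circle (approx.)
via_us_km = 9_000 + 8_300 + 5_300   # Paris -> SJC -> Tokyo -> Singapore (approx.)

for label, km in (("direct (Suez/Red Sea cables)", direct_km),
                  ("via US + Pacific", via_us_km)):
    print(f"{label}: ~{km:,} km one way, RTT floor ~{2 * km / KM_PER_MS:.0f} ms")
# ~107 ms floor direct vs ~226 ms via the US -- and that's before real-route
# slack, which is how you get from the usual ~150-180 ms to 350+ ms.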

HE did/does that too, preferring to avoid any direct route from EU to Asia.

Paul

The good news is that the Yemeni government have approved repairs for EIG and SEACOM. The bad news is that those approvals don’t yet extend to AAE-1, whose cut is the one causing you that pain. It’s unclear when, or if, Yemen will give permission to repair AAE-1. The market is speculating mid-June, but there is no hard data to support that.

Well, on Arelion’s network, PAO-SIN = 260ms:

Tracing the route to 180.178.74.221

 1  sjo-b23-link.ip.twelve99.net (62.115.115.217) 2 msec 2 msec 2 msec
 2  * tky-b2-link.ip.twelve99.net (62.115.123.141) 187 msec 162 msec
 3  * * *
 4  * * 62.115.115.62 251 msec
 5  * hnk-b3-link.ip.twelve99.net (62.115.143.241) 257 msec *
 6  sng-b4-link.ip.twelve99.net (62.115.116.146) 280 msec * 222 msec
 7  * * *
 8  * * *
 9  180.178.74.221 265 msec 262 msec *

For the moment, it looks like you’ve switched to Zayo for transit in Paris, so it’s unclear what Arelion’s own network would do PAO-CDG:

Tracing the route to 81.88.96.250

 1  sjo-b23-link.ip.twelve99.net (62.115.115.217) 2 msec 2 msec 2 msec
 2  ae71.zayo.ter1.sjc7.us.zip.zayo.com (64.125.15.150) 2 msec * *
 3  * * *
 4  * * *
 5  * * *
 6  * * *
 7  * * *
 8  * * *
 9  ae1.mcs1.cdg12.fr.eth.zayo.com (64.125.29.87) 148 msec * 158 msec
10  v3.ae10.ter3.eqx2.par.as8218.eu (64.125.30.183) 150 msec 151 msec 151 msec
11  ae6.ter4.eqx2.par.core.as8218.eu (83.167.55.43) 151 msec 152 msec 152 msec
12  ae0.ter3.itx5.par.core.as8218.eu (83.167.55.10) 148 msec 148 msec 148 msec
13  witbe-gw1.ter1.itx5.par.cust.as8218.eu (158.255.117.19) 151 msec 153 msec 153 msec
14  Gi0-3.rtr-01.PAR.witbe.net (81.88.96.250) 152 msec * 151 msec

Well, right now, of the modern cables that had capacity and reasonable pricing, only SMW-5 remains up… and SMW-5 is just about out of capacity as well. SMW-6 is currently under construction, so that is not yet an option (the Red Sea debacle notwithstanding).

Subsea systems that need to cross the Middle East and Egypt to connect Europe and Africa to South (East) Asia are generally problematic because of the complexities of having to deal with Egypt and, now, the Red Sea. That translates into capacity availability (or the lack thereof, in times like these) and cost. This creates an incentive for operators to route South (East) Asia through the U.S. to get to Europe, until the situation resolves itself or new cables with new/cheaper capacity pop up.

Mark.

Yeah - the best thing to do would be to reach out to a problematic provider and ask them for an explanation. Usually, if they have bought directly from a subsea provider, then restoring from a subsea outage may be complex depending on how well they secured themselves, both from a diversity and a capacity standpoint. If they are a customer of a major transit provider, then their provider’s subsea inventory situation is similar to my point above.

It is very hard to tell unless you ask someone on the inside, but as these things go, when cables get cut, latency and packet loss increases are not unexpected, especially for small/local/regional ISPs that can’t afford to have direct access to multiple subsea systems. And in cases where alternative options are either too costly or non-existent, the latency and packet loss penalty would be sustained longer than necessary.

Depending on where you are in the food chain, subsea restoration efforts following a major cut can increase normal pricing by 3X - 5X, particularly if the restoration capacity is taken on a short-term basis, e.g., 3 months.

Mark.