Hi,
Is anyone aware of different traffic behavior depending on whether the path to the target goes through normal peering rather than through an exchange Google is present on?
We're facing a weird issue where the same GCLD instance can upload up to 200 Mbps (Ref 1) if the path to the target goes through, let's say, TorIX, but cannot get more than 20 Mbps to similar hosts (8 of them) sitting on our peering links.
PS: Those same hosts get up to their link limit (1 Gbps) between each other and the other test points we have.
PS: Wireshark captures show nothing abnormal.
PS: Links aren't congested, and so on...
Ref 1 - 200 Mbps is on a link rate-limited to 300 Mbps. It's my only test point with TorIX access.
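For what it's worth, a minimal sketch of the kind of single-stream bulk-upload test that produces numbers like the above, assuming a plain TCP transfer to a listener you control; the host, port, and duration below are placeholders, not the actual endpoints or method used here:

    # Minimal single-stream TCP upload test (illustrative only; the target
    # address and duration are placeholders, not the endpoints in this thread).
    import socket
    import time

    TARGET = ("192.0.2.10", 5001)   # hypothetical listener on the far side
    DURATION = 10                   # seconds of data to push
    CHUNK = b"\0" * 65536           # 64 KiB write size

    def bulk_upload():
        sock = socket.create_connection(TARGET)
        sent = 0
        start = time.monotonic()
        try:
            while time.monotonic() - start < DURATION:
                sock.sendall(CHUNK)
                sent += len(CHUNK)
        finally:
            sock.close()
        elapsed = time.monotonic() - start
        mbps = sent * 8 / elapsed / 1e6
        print(f"sent {sent} bytes in {elapsed:.1f}s -> {mbps:.1f} Mbps")

    if __name__ == "__main__":
        bulk_upload()

Running the same script against one target behind the IX path and one behind the peering links should make the gap easy to reproduce and compare.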
Alain,
When you refer to "normal peering", do you mean Internet transit? Or are these PNIs with Google? Do the GCLD instances you reach through "normal peering" have higher latency than the ones reached through TorIX?
-- Stephen
i also see now that you are a guru rinpoche as well
but with valerie home any minute i must stop
will come back though
please accept my apologies this response was totally out of context
Hi,
Yes Stephen, we're talking the usual, like GTT...
And no, latency-wise they're about the same, in the 35 ms range.
But I still can't figure out the 10x drop; that level of latency alone cannot be the factor.
( And Gordy... what?!? )
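One thing worth ruling out (purely a back-of-the-envelope illustration, not a diagnosis): at ~35 ms RTT a single TCP stream is capped at roughly window/RTT regardless of link speed, and a small effective window lands surprisingly close to the 20 Mbps figure above:

    # Bandwidth-delay-product check: single-stream TCP throughput is
    # bounded by (effective window size) / RTT, independent of link rate.
    RTT = 0.035  # seconds, ~35 ms as reported in the thread

    for window_kib in (64, 128, 1024, 4096):
        window_bytes = window_kib * 1024
        mbps = window_bytes * 8 / RTT / 1e6
        print(f"{window_kib:5d} KiB window -> {mbps:7.1f} Mbps ceiling")

    # A 64 KiB effective window at 35 ms tops out near ~15 Mbps; sustaining
    # ~200 Mbps at the same RTT needs roughly 1 MiB in flight.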
Hi Alain and all the rest
I get it now
no offense and alain no harm done and all of nanog thank you and i will continue observing as i have since 1995
thank you all again
PS and BTW i am interested in CLOUD

thanks once more
Just did a quick test from a personal VM; no throughput difference over
direct peering, public IX, or transit. GCP might have a bottleneck in your
case, though; it might be a good idea to ask them.
Also, I'll have what Gordon is having.
Thanks Tom,
Yeah, my test-point VMs were all FreeBSD 10.3/11.0, so I decided to randomize some of them and added a CentOS box on my Telia peer (10 Gbps), and started getting normal performance from GCLD (in the 500 Mbps/500 Mbps range).
PS: Even weirderer(tm): the *BSD VMs have always performed correctly in the past, but not in this particular case.
I have some work left normalizing those peers; the same tactic didn't work on the other ones.
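In case it helps with that normalization, here's a small sketch that dumps the TCP knobs most likely to differ between the FreeBSD and CentOS test points (congestion control, window scaling, buffer ceilings). The sysctl/procfs names are the standard FreeBSD and Linux ones; this is only a guess at where the difference might live, not a diagnosis:

    # Dump the TCP knobs most likely to explain a per-OS throughput gap
    # (congestion control, window scaling, socket buffer ceilings). This is
    # a guess at where to look, not a diagnosis of the issue in this thread.
    import platform
    import subprocess

    FREEBSD_OIDS = [
        "net.inet.tcp.cc.algorithm",    # congestion control in use
        "net.inet.tcp.rfc1323",         # window scaling / timestamps enabled
        "net.inet.tcp.sendbuf_max",     # send-side autotuning ceiling
        "net.inet.tcp.recvbuf_max",     # receive-side autotuning ceiling
    ]

    LINUX_PROCFS = [
        "/proc/sys/net/ipv4/tcp_congestion_control",
        "/proc/sys/net/ipv4/tcp_window_scaling",
        "/proc/sys/net/ipv4/tcp_wmem",  # min/default/max send buffer
        "/proc/sys/net/ipv4/tcp_rmem",  # min/default/max receive buffer
    ]

    def dump_tcp_settings():
        if platform.system() == "FreeBSD":
            for oid in FREEBSD_OIDS:
                out = subprocess.run(["sysctl", "-n", oid],
                                     capture_output=True, text=True)
                print(f"{oid} = {out.stdout.strip()}")
        elif platform.system() == "Linux":
            for path in LINUX_PROCFS:
                with open(path) as fh:
                    print(f"{path} = {fh.read().strip()}")
        else:
            print("unhandled platform:", platform.system())

    if __name__ == "__main__":
        dump_tcp_settings()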
If anyone has a lead on a TCP stream analyzer, that would cut down on the number of naps I need to take looking for clues in Wireshark captures.
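Not a polished analyzer, but in case it saves a nap or two, a rough sketch that walks a capture with scapy and counts suspected retransmissions per flow; the pcap filename is a placeholder, and "same sequence number carrying payload twice" is a crude heuristic rather than full stream reassembly:

    # Crude retransmission counter over a pcap: flags a segment as a likely
    # retransmission when the same (flow, sequence number) carries payload
    # twice. Illustrative only; "capture.pcap" is a placeholder filename.
    from collections import Counter, defaultdict
    from scapy.all import rdpcap, IP, TCP

    def count_retransmissions(pcap_path):
        seen = defaultdict(set)    # flow -> sequence numbers seen with data
        retrans = Counter()        # flow -> suspected retransmission count
        for pkt in rdpcap(pcap_path):
            if IP not in pkt or TCP not in pkt:
                continue
            if len(bytes(pkt[TCP].payload)) == 0:
                continue           # ignore pure ACKs
            flow = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
            seq = pkt[TCP].seq
            if seq in seen[flow]:
                retrans[flow] += 1
            else:
                seen[flow].add(seq)
        for (src, sport, dst, dport), count in retrans.most_common():
            print(f"{src}:{sport} -> {dst}:{dport}  ~{count} retransmitted segments")

    if __name__ == "__main__":
        count_retransmissions("capture.pcap")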