Quality of the internet

Hi,

My 9-5 is working for a VoIP provider. When we started in 2006 we had a lot of issues with the quality of the internet in Eastern Europe and Central Asia. It was not rare for us to have to play around with routing to get the quality that we needed. In a review of tickets for the last two years, it seems as if we barely do any of that these days. Rarely do we get a quality complaint that comes back to a carrier or ISP dropping or mangling packets. Has anyone else observed this as well?

Now that you mention it, is Verizon FiOS having issues in NY?

Yes. We have gotten a lot of complaints today. Can't seem to nail it down. Random packet loss.

I think all the eyeball networks moving to work with CDNs a bit better helped alleviate the congestion on the transit / peering links. DOCSIS 3.1 helped tremendously with jitter issues as well as fiber xPON being deployed by the telcos.

Transit costs have dropped significantly, so it doesn't seem like the eyeball networks are running links as hot as they were before. We do still ask our transit account reps or NOCs where they see chronic congestion and usually get a straightforward response.

We have seen blips with customers on smaller rural ISPs in both the US and Canada every now and then, but it usually clears up in a day or so, which probably means it was a backhaul transport issue.

I think, on the whole, as current-production routers have migrated away
from software-based forwarding in recent years into hardware planes, as
more submarine cables have been laid to all continents, as more exchange
points have been built, as mobile networks have moved from being voice
to becoming data transport networks, and as the cloud and content
providers have shifted the local/regional Internet ecosystem upon their
arrival, it's not unreasonable to conclude that the overall quality of
the Internet has made a marked improvement.

It feels like I operated a satellite-based IP/MPLS network for a whole
country millions of years ago, and yet it was as recently as 2007.
It's impressive how much we have moved forward, as a community, in that
space of time.

Mark.

I think, on the whole, as current-production routers have migrated away
from software-based forwarding in recent years into hardware planes, ...

ACK. Good Internet is almost an emergent feature, not something we
really designed for. The main remaining problems are congested
peerings, which is a silly political problem which ends up hurting
customers and not helping anyone.
No one needs strict priority queues anymore, which was absolutely
needed at one point in time.

We are not in a market which cares about QoS, yet our BE is globally
<200us max jitter on a typical day and AF is <50us, with average jitter
under 10us. So if I had a HW-timestamping NTP server and client, I
could synchronise clocks over IP transit across continents to
ten-microsecond accuracy. I think this is pretty crazy. And I'm sure
anyone who measures sees similar numbers; this would have sounded like
sci-fi 20 years ago.
For context, Zoom recommends jitter of 40ms or better, i.e. 40,000us.
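For the curious, jitter figures like these are typically computed with RFC 3550's interarrival-jitter estimator. A minimal sketch in Python (the sample transit times are made up for illustration, not measurements from this thread):

```python
def rfc3550_jitter(transit_us):
    """RFC 3550 interarrival jitter: a running estimate updated as
    J += (|D(i-1, i)| - J) / 16, where D is the change in one-way
    transit time between consecutive packets, here in microseconds."""
    j = 0.0
    for prev, cur in zip(transit_us, transit_us[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

# Perfectly steady transit times mean zero jitter:
print(rfc3550_jitter([100.0] * 10))  # 0.0
```

The 1/16 gain makes the estimate a smoothed average, so a single delayed packet moves it only slightly.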

ACK. Good Internet is almost an emergent feature, not something we
really designed for. The main remaining problems are congested
peerings, which is a silly political problem which ends up hurting
customers and not helping anyone.

It's easier to keep selling bandwidth to people than to find another
model from which to make money. That, as network operators, is our fate :-).

No one needs strict priority queues anymore, which was absolutely
needed at one point in time.

We are not in a market which cares about QoS,...

I was just thinking about this 2 years ago, when we were selling more
and more IP services than anything else, despite a market with
aggressive price points, etc. While we still build all our nodes with
PHB-DSCP/EXP, with all the usual EF, AF and BE queues, 33% policing on
EF queues, LLQ forwarding on EF queues, and all the rest, in practice
we don't really need them anymore.
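On a typical IOS-style box, the kind of PHB setup described above would look something like this sketch (class names, DSCP matches and percentages here are illustrative, not anyone's actual config):

```
class-map match-any VOICE
 match dscp ef
class-map match-any BUSINESS
 match dscp af31 af41
!
policy-map CORE-EGRESS
 class VOICE
  ! LLQ: strict priority, implicitly policed to 33% of link rate
  priority percent 33
 class BUSINESS
  bandwidth remaining percent 40
 class class-default
  ! BE: everything else
  fair-queue
```

The `priority percent` statement is what gives EF both the low-latency queue and the policer in one line, which is exactly the combination that over-provisioned links rarely exercise anymore.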

We either over-provision capacity to the point where those QoS policies
never kick in, or (and more likely), all traffic is public Internet,
which lives in BE.

Basic policing/shaping/queueing of customer traffic at the edge is
pretty stable; we haven't needed a new QoS feature in that space for
over 8 years. So when vendors are trying to sell new line cards with
enhanced QoS scale, it makes me wonder. Unless it's for BNG deployments
where millions of customers need to be dumped in specific queues, which
isn't for us, and which I doubt many of the up-and-coming mom & pop FTTH
service providers can afford anyway.

yet our BE is globally
<200us max jitter on a typical day and AF is <50us, with average jitter
under 10us. So if I had a HW-timestamping NTP server and client, I
could synchronise clocks over IP transit across continents to
ten-microsecond accuracy. I think this is pretty crazy. And I'm sure
anyone who measures sees similar numbers; this would have sounded like
sci-fi 20 years ago.
For context, Zoom recommends jitter of 40ms or better, i.e. 40,000us.

Which is a good point.

Sitting at my house in Jo'burg, my Zoom calls are typically served out
of some data centre in Paris (161ms from my house) or Amsterdam (176ms
from my house), and yet the jitter on those calls is 1ms - 2ms, steady.

The case for EF queues to deliver VoIP calls between a customer and PABX
sitting 1ms apart simply doesn't track anymore. Either the network
already does it due to all the over-engineering, or the traffic goes
over the Public Internet anyway as folk migrate for cost, convenience
and value reasons.

Mark.

What time was that?

                                -Bill

Hi,

In our region (CIS, Eastern Europe) we still have issues with
overloaded international transport and poor quality on international channels from time to time (especially at the beginning of COVID-19).

While the Internet merely looks slow but is still usable in such cases, VoIP gets really bad.

A regional peculiarity of ours is strong and very cheap internal (in-country) connectivity, so one solution can be to join local IXes via dedicated L2 (DWDM) channels.

Ask me off-list if you want some help/solutions ;-)

On 17.06.20 23:47, Dovid Bender wrote:

Somewhere between 2000..2005 I personally still delivered customer
connections that needed that. But we were still providing 64kbps to
some odd locations, like a paper mill in the middle of nowhere. I also
needed to do MLPPP over 2*64kbps so that serialising a single 1500B
packet doesn't take too long (PPP could fragment it in two and send
the halves in parallel, improving UX).
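The serialisation arithmetic that made MLPPP worthwhile there is easy to reproduce; a quick sketch using the 2*64kbps example above:

```python
def serialization_delay_ms(packet_bytes: int, link_bps: int) -> float:
    """Time to clock a packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# A single 1500-byte packet on one 64 kbps channel:
single = serialization_delay_ms(1500, 64_000)
print(single)   # 187.5 ms

# MLPPP fragments the packet in two and sends the halves in
# parallel over the 2 x 64 kbps bundle:
bundled = serialization_delay_ms(1500 // 2, 64_000)
print(bundled)  # 93.75 ms
```

Halving a ~190ms head-of-line blocking delay matters a lot when a voice packet is stuck behind that 1500-byte packet.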

Back when a 12000 GSR chassis had one line card in slot 0 for the public
Internet, and another in slot 5 for the MPLS backbone. They had to be
that far apart, for safety :-)...

Mark.

VoIP was legalized in South Africa in 2005.

The moment that happened, VoIP operators sprung up, and businesses began
dumping POTS services and moving over to VoIP. In those days, a 64Kbps
leased line was the gold standard; major props if you had anything more
than that; bow-downs if you had 256Kbps or 512Kbps.

We're talking +/- US$1,500/month for a 64Kbps leased line at the time, when the
US$-ZAR exchange rate was 1:6.65.

Customers were willing to pay all that cash back then, because all these
shiny new TDP-based (Tag Distribution Protocol, for the ones who
remember, before it became the LDP standard) MPLS networks were the
guarantors of QoS, and to ensure your VoIP service always received a
steady 16Kbps to deliver two simultaneous phone calls between Jo'burg
and Durban, cash left wallets.

It was still cheaper than paying the telco for an E1. And of course,
there was an eerie eagerness to stick it to the telco :-).

Oh, how far we've come.

Mark.

For safety!

Reminds me of bonding channels in an ISDN line. We had to keep them all apart.

For their own protection.

-Ben