Sunday traffic curiosity

Fellow NANOGers,

Not a big deal by any means, but for those of you who have traffic data, I’m curious what Sunday morning looked like as compared to other Sundays. Sure, Netflix and similar companies have no doubt seen traffic increase, but I’m wondering if an influx of church service streaming was substantial enough to cause a noticeable traffic increase.

We’ve been livestreaming our services for about a year, but we normally average just a handful of viewers. Today, we had around 150 watching live.

Maybe it’s time to revisit inter-domain multicast?

Owen

Uhmm... no thank you. :-)

John

We are still too far away from the apocalypse to realistically think about
inter-domain multicast.
And even if we were ...

As someone who 1) wasn't around during the last Internet-scale foray into multicast and 2) works with multicast only in a closed environment, I'm curious:

What was wrong with Internet-scale multicast? Why did it get abandoned?

It is flow-based routing; we do not have a solution for storing and
looking up large amounts of flow state.

We didn't really see a noticeable inbound or outbound traffic change.

But we also streamed and had 80+ people watching online, so there was absolutely a traffic shift.

Still, Sunday mornings are normally low-traffic periods anyway, so the overall traffic "dent" was minimal.

Our Sunday morning today was not our highest peak since the 17th; that
came yesterday (Saturday morning), at around 0900 UTC.

The peak increase since the 17th had been 15%; Saturday morning was at 17.5%.

Mark.

There are about 20 years of archives to weed through, and some of our
friends are still trying to make this happen. I expect someone (Hi
Lenny) to appear any moment and mention AMT. So my take isn't
universally accepted, but it won't be too far from what you'll hear
from many. Brief summary off the top of my head:

1. Complexity. Both in protocol mechanisms and in the requirements
placed on network devices (e.g. snooping, state, troubleshooting).

2. Security. Driven in part by #1, threats abound. SSM can eliminate
some of this, and you can design a receiver-only model that removes most
of the remaining problems; congratulations, you've just reinvented
over-the-air broadcast TV (see the receive-side sketch after this list).
Even if you don't do interdomain IP multicast, you may still be at risk:

  <https://ccronline.sigcomm.org/wp-content/uploads/2017/01/p27-sargent.pdf>

3. Need and business drivers. Outside a few niche environments, the
case for building and supporting all of this is still far from
compelling.
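
For concreteness, here is a minimal receive-side sketch of that SSM,
receiver-only model (Python, assuming Linux; the 232.1.2.3 group, the
source address, and the port are made-up example values, and the
constant 39 plus the struct layout are Linux-specific). The receiver
names the one permitted source up front, which is what makes the
broadcast-TV comparison apt:

    import socket
    import struct

    # Linux value; Python exposes the constant only on some platforms.
    IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

    GROUP, SOURCE, PORT = "232.1.2.3", "192.0.2.10", 5004  # example values

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # struct ip_mreq_source, Linux field order:
    # imr_multiaddr, imr_interface, imr_sourceaddr
    mreq = struct.pack(
        "=4s4s4s",
        socket.inet_aton(GROUP),      # the (S,G) group
        socket.inet_aton("0.0.0.0"),  # any interface
        socket.inet_aton(SOURCE),     # the one source we will accept
    )
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

    data, addr = sock.recvfrom(2048)  # only SOURCE-to-GROUP traffic arrives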

Support and expertise in this area are also very thin. Your inquiry
demonstrates this. I stopped teaching it to students. What remains is
becoming even less well supported than it has been. There is almost no
interdomain IP multicast monitoring being done anymore, and there is
scant actual content being delivered; all the once-popular stuff is gone.
The number of engineers who know this stuff is dwindling, and some who
do know something about it are removing at least some parts of it:

  <https://tools.ietf.org/html/draft-ietf-mboned-deprecate-interdomain-asm-07>

John

There wasn't any problem with inter-domain multicast that couldn't be resolved by handing it over to level 3 engineering and the vendor's support escalation team.

But then again, there weren't many problems with inter-domain multicast that could be resolved without handing it over to level 3 engineering and the vendor's support escalation team.

Nick

For my part, I speculate multicast did not take off at any level (inter-domain, intra-domain) because pipes grew larger (more bandwidth) faster than applications ever needed. Even now, I don't hear complaints about bandwidth from end users, like friends using Netflix. I do hear in the media that there _might_ be a capacity issue, but I have not heard that from end users.

On the other hand, link-local multicast does seem to work OK, at least with IPv6. The problem it solves there is not related to the width of the pipe, but more to resistance against the kind of 'storms' that were witnessed with ARP. I could guess that Ethernet pipes are now so large they could accommodate many forms of ARP storm, but for one reason or another IPv6 ND uses multicast and no broadcast. There might even be a problem in the name: it is called 'IPv6 multicast ND', but underneath it is often implemented as pure broadcast with local filtering.

If capacity is reached and end users need more, then there are two alternative solutions: grow unicast capacity (e.g. 1 Tb/s Ethernet) or do multicast; it's useless to do both. If we can't do 1 Tb/s Ethernet (the 'apocalypse' someone mentioned?), then we'll do multicast.

I think,

Alex, LF/HF 3

This is a case where the cure is far worse than the poison. People do
not run IPv6 ND like this, because you can't scale it. It would be
trivial for anyone on the LAN to exhaust the multicast state on the L2
switch. It is entirely uneconomical to build an L2 switch that could
support all the mcast groups ND could need, so those do not exist
today; the defensive configuration floods the ND frames, just the same
as ARP.
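
To put a number on that: every IPv6 address joins a solicited-node
group derived from its low 24 bits, so the group count a snooping
switch would have to track grows with the number of addresses on the
LAN (and with SLAAC/privacy addresses those bits are effectively
random). A quick sketch in Python; the 2001:db8:: addresses are just
placeholders:

    import ipaddress

    def solicited_node_group(addr: str) -> ipaddress.IPv6Address:
        """ff02::1:ffXX:XXXX, where XX:XXXX are the low 24 bits of addr."""
        low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
        base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
        return ipaddress.IPv6Address(base | low24)

    # 1,000 addresses on a LAN -> ~1,000 distinct groups an MLD-snooping
    # switch would have to hold state for, per VLAN.
    hosts = [f"2001:db8::{i:x}" for i in range(1, 1001)]
    groups = {solicited_node_group(h) for h in hosts}
    print(f"{len(hosts)} addresses -> {len(groups)} solicited-node groups")

Hence the "defensive configuration floods ND" outcome described above.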

You also cannot scale interdomain multicast (BIER is trying to solve
this), because every (S,G) flow needs to be programmed in hardware with
a list of egress entries. That is very expensive to store and very
expensive to look up; it is flow routing. Already today, lookup speeds
are limited not by silicon but by memory access, and the scale of that
problem is much, much smaller (and bounded) for unicast.
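
A toy model of that state difference (Python; the interface names and
counts are made up): unicast forwarding needs one entry per prefix no
matter how many flows cross the box, while multicast needs a per-(S,G)
entry whose egress list changes with every downstream join and prune:

    from collections import defaultdict

    # Unicast FIB: bounded by the number of prefixes, independent of flows.
    unicast_fib = {
        "203.0.113.0/24": "eth1",
        "198.51.100.0/24": "eth2",
    }

    # Multicast forwarding state: one entry per active (S,G), each carrying
    # an outgoing-interface list that must be reprogrammed as receivers
    # come and go.
    mcast_fib: dict[tuple[str, str], set[str]] = defaultdict(set)

    def join(source: str, group: str, egress_if: str) -> None:
        mcast_fib[(source, group)].add(egress_if)

    def prune(source: str, group: str, egress_if: str) -> None:
        oil = mcast_fib.get((source, group))
        if oil is None:
            return
        oil.discard(egress_if)
        if not oil:
            del mcast_fib[(source, group)]  # last receiver gone, drop the flow

    # 1,000 sources each sending 10 groups is already 10,000 (S,G) entries
    # to store and to match per packet, before any receiver churn.
    for s in range(1000):
        for g in range(10):
            join(f"10.0.{s // 256}.{s % 256}", f"232.0.{g}.1", "eth3")
    print(len(mcast_fib), "(S,G) entries")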

Most of the challenges, in particular the incentive aspects, were
nicely discussed in "Deployment issues for the IP multicast service and
architecture," IEEE Network, 2000:
https://www.cl.cam.ac.uk/teaching/1314/R02/papers/multicastdeploymentissues.pdf

Cheers
  matthias

It failed to scale for some of the exact same reasons QoS failed to scale -
what works inside one administrative domain doesn't work once it crosses domain
boundaries.

Plus, there's a lot more state to keep - if you think spanning tree gets ugly
if the tree gets too big, think about what happens when the multicast covers
3,000 people in 117 ASN's, with people from multiple ASN's joining and leaving
every few seconds.
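
A back-of-the-envelope sketch of what that churn means (Python; the
receiver and ASN figures are from the post, the toggle interval is an
assumption of mine):

    receivers, asns = 3000, 117      # figures from the post
    toggle_interval_s = 5            # assumed: each receiver joins/leaves every ~5 s

    events_per_s = receivers / toggle_interval_s   # ~600 join/prune events per second
    print(f"~{events_per_s:.0f} membership changes per second")
    print(f"~{events_per_s / asns:.1f} per second per ASN, on average")

    # Every one of those events has to update (S,G) egress state on each
    # router between the receiver and the point where it grafts onto, or
    # prunes from, the existing tree; and it has to work consistently
    # across 117 independently operated domains.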

Add to that: it is the TV model in a VOD world. Works for sports, maybe;
not for Netflix.

randy

It failed to scale for some of the exact same reasons QoS failed to scale -
what works inside one administrative domain doesn't work once it crosses domain
boundaries.

This, for me, is one of the biggest reasons I feel inter-AS Multicast
does not work. Can you imagine trying to troubleshoot issues between two
or more separate networks?

At $previous_job, we carried and delivered IPTV streams from a head-end
that was under the domain of the broadcasting company. Co-ordination of
feed ingestion, etc., got so complicated that we ended up agreeing to
take full management of the CE router. That isn't something you can
always expect; it worked for us because this was the first time it was
being done in the country.

Plus, there's a lot more state to keep - if you think spanning tree gets ugly
if the tree gets too big, think about what happens when the multicast covers
3,000 people in 117 ASN's, with people from multiple ASN's joining and leaving
every few seconds.

We ran NG-MVPN, which created plenty of RSVP-TE state in the core.

The next move was to migrate to mLDP, just to simplify state management.
I'm not sure if the company ever did, as I had to leave.

Mark.

Agreed - on-demand is the new economy, and sport is the single thing
still propping up the old economy.

When sport eventually makes it into the new world, linear TV will have
lost its last legs.

Mark.

I think that’s the thing:
Drop cache boxes inside eyeball networks; fill the caches during off-peak; unicast from the cache boxes inside the eyeball provider’s network to subscribers. Do a single stream from the source to each “replication point” (cache box), rather than a stream per ultimate receiver from the source, and then a unicast stream per ultimate receiver from their selected “replication point”. You solve the administrative control problem, since the “replication point” is an appliance just getting power and connectivity from the connectivity service provider, while remaining under the administrative control of the content provider.
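
A back-of-the-envelope sketch of the economics (Python; the subscriber,
cache, and bitrate numbers are made-up illustrative values):

    # Hypothetical numbers for illustration only.
    subscribers = 1_000_000   # viewers inside one eyeball network
    caches = 50               # content-provider appliances in that network
    stream_mbps = 5           # one HD unicast stream

    # Pure unicast from the origin: one stream per viewer crosses the
    # interconnect between content provider and eyeball network.
    origin_only_gbps = subscribers * stream_mbps / 1000

    # Cache-box model: the origin feeds each replication point once (or
    # fills it off-peak); the per-viewer unicast fan-out stays inside the
    # eyeball network.
    with_caches_gbps = caches * stream_mbps / 1000

    print(f"across the interconnect: {origin_only_gbps:,.0f} Gb/s vs {with_caches_gbps:,.2f} Gb/s")
    print(f"inside the eyeball network (either way): {subscribers * stream_mbps / 1000:,.0f} Gb/s of unicast")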

It seems to be good enough to support business models pulling in billions of dollars a year.

This does require the consumption of the media to be decoupled from the original distribution of the content to the cache, obviously, hence the live-sports mismatch. But it seems this catches enough of the use cases and bandwidth demands, and has won the “good enough” battle vs. inter-domain multicast.

I would venture there are large percentage increases now in realtime use cases as Zoom and friends take off, but the bulk of the anecdotal evidence thus far seems to indicate that absolute traffic levels largely remain below historical peaks from exceptional events (large international content distribution events).

Hugo,

I can see it now... the business driver that finally moved the world towards multicast: the 2020 coronavirus.

Also, I wonder how much money the big-pipe providers would lose if multicast worked everywhere.

-Aaron