The scale of streaming video on the Internet.

Hidden in the Comcast and Level 3 press release war are some
fascinating details about the scale of streaming video.

In http://blog.comcast.com/2010/11/comcasts-letter-to-fcc-on-level-3.html,
Comcast says that Level 3 "demanded 27 to 30 new interconnection ports".

I have to make a few assumptions, all of which I think are quite
reasonable, but I want to lay them out:

- "ports" means 10 Gigabit ports. 1GE's seems too small, 100GE's seems
  too large. I suppose there is a small chance they were thinking OC-48
  (2.5Gbps) ports, but those seem to be falling out of favor for cost.
- They were provisioning for double the anticipated traffic. That is,
  if there was 10G of traffic total they would ask for 20G of ports.
  This both provides room for growth and allows for the fact that you
  can't perfectly balance traffic across that many ports.
- That substantially all of that new traffic was for Netflix, or more
  accurately "streaming video" from their CDN.

Thus in round numbers they were asking for 300Gbps of additional
capacity across the US, to move around 150Gbps of actual traffic.
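
Or, as a quick sketch in Python, where every input is just one of the
assumptions above:

    # Back-of-envelope capacity math; the inputs are my guesses above
    ports = 30                    # "27 to 30 new interconnection ports"
    port_gbps = 10                # assuming 10 Gigabit Ethernet ports
    provisioning_factor = 2       # ask for double the expected traffic
    requested_gbps = ports * port_gbps                   # 300 Gbps of ports
    carried_gbps = requested_gbps / provisioning_factor  # ~150 Gbps of traffic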

But how many video streams is 150Gbps? Google found me this article:
http://blog.streamingmedia.com/the_business_of_online_vi/2009/03/estimates-on-what-it-costs-netflixs-to-stream-movies.html

It suggests that low def is 2000Kbps and high def is 3200Kbps. Doing
the math, 150Gbps could support 75,000 low def streams or 46,875 high
def streams. Call it 50,000 users, for some mix of streams.
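
The stream arithmetic, spelled out the same way:

    # Streams supported by 150 Gbps at the bitrates from the article above
    traffic_kbps = 150 * 1000 * 1000      # 150 Gbps expressed in Kbps
    low_def_kbps, high_def_kbps = 2000, 3200
    print(traffic_kbps / low_def_kbps)    # 75,000 low def streams
    print(traffic_kbps / high_def_kbps)   # 46,875 high def streams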

Comcast has ~15 million high speed Internet subscribers (based on
year-old data; I'm sure it is higher now), which means at peak usage
around 0.3% of all Comcast high speed users would be watching.
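
Again in round numbers:

    # Share of Comcast's high speed base watching at once
    subscribers = 15_000_000
    concurrent_streams = 50_000
    print(concurrent_streams / subscribers)   # ~0.0033, roughly 0.3%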

That's an interesting number, but let's run back the other way.
Consider what happens if folks cut the cord, and watch Internet
only TV. I went and found some TV ratings:

http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784

Sunday Night Football was at the top last week, with 7.1% of US homes
watching. That's over 23 times as many folks as the 0.3% in our
previous math! OK, 23 times 150Gbps.

3.45Tb/s.

Yowzer. That's a lot of data. 345 10GE ports for a SINGLE TV show.

But that's 7.1% of homes, so scale up to 100% of homes and you get
48Tb/sec; that's right, 4,830 simultaneous 10GEs if all of Comcast's
existing high speed subs dropped cable and watched the same shows over
the Internet.
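
Carrying the sketch through, with the same rounding:

    # Scale the 150 Gbps figure up to a Sunday Night Football-sized audience
    snf_gbps = 150 * 23               # 7.1% / 0.3% rounded to 23x: 3,450 Gbps, 3.45 Tb/s
    snf_ports = snf_gbps / 10         # 345 ten-gig ports for a single show
    all_homes_ports = snf_ports * 14  # 100% / 7.1% is ~14x: ~4,830 ports, ~48 Tb/s
    print(snf_ports, all_homes_ports)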

I think we all know that streaming video is large. Putting real
numbers to it shows the engineering challenges on both sides,
generating and sinking the content, and why companies are fighting so
much over it.

You are assuming the absence of any of the following optimizations:

1. Multicast
2. Overlay networks using P2P services (get parts of your stream
  from some of your neighbors).

These are not entirely safe assumptions.

Owen


Anything that is "live" & likely to be watched by lots of people at the same time like sports can handled via multicast. The IPTV guys have had a number of years to get that work fairly well in telco environments. The content that can't be handled with multicast, like on demand programming, is where you lose your economy of scale.

Multicast is great for simulating old school broadcasting, but I don't
see how it can apply to Netflix/Amazon style demand streaming where
everyone can potentially watch a different stream at different points in
time with different bitrates.

~Seth

It also proves, though I doubt anyone important is listening, *why the
network broadcast architecture is shaped the way it is*, and it implies,
*to* anyone important who is listening, just how bad a fit that is for
a point- or even multi-point server to viewers environment.

Oh: and all the extra servers and switches necessary to set that up?

*Way* more power than the equivalent transmitters and TV sets. Even if
you add in the cable headends, I suspect.

In other news: viewers will tolerate Buffering... to watch last night's
daily show. They will *not* tolerate it while they're waiting to see if
the winning hit in Game 7 is fair or foul -- which means that it will
not be possible to replace that architecture until you can do it at
technical parity... and that's not to mention the emergency communications
uses of "real" broadcasting, which will become untenable if enough
critical mass is drained off of said "real broadcasting" by other
services which are only Good Enough.

The Law of Unexpected Consequences is a *bitch*. Just ask the NCS people;
I'm sure they have some interesting 40,000ft stories to tell about the
changes in the telco networks since 1983.

Cheers,
-- jra

*Way* more power than the equivalent transmitters and TV sets. Even if
you add in the cable headends, I suspect.

Yeah, but...

This is really not comparable.

Transmitters and TV sets require that everyone watch what is being transmitted. People (myself included) don't like, or don't want, this method anymore. I want to watch what I want, when I want to.

This is the new age of media. Out with the old.

From: "Leo Bicknell" <bicknell@ufp.org>
[...]
That's an interesting number, but let's run back the other way.
Consider what happens if folks cut the cord, and watch Internet
only TV. I went and found some TV ratings:

http://tvbythenumbers.zap2it.com/2010/11/30/tv-ratings-broadcast-top-25-sunday-night-football-dancing-with-the-stars-finale-two-and-a-half-men-ncis-top-week-10-viewing/73784

Sunday Night Football at the top last week, with 7.1% of US homes
watching. That's over 23 times as many folks watching as the 0.3% in
our previous math! Ok, 23 times 150Gbps.

3.45Tb/s.

Yowzer. That's a lot of data. 345 10GE ports for a SINGLE TV show.

But that's 7.1% of homes, so scale up to 100% of homes and you get
48Tb/sec, that's right 4830 simultaneous 10GE's if all of Comcast's
existing high speed subs dropped cable and watched the same shows over
the Internet.

I think we all know that streaming video is large. Putting the real
numbers to it shows the real engineering challenges on both sides,
generating and sinking the content, and why companies are fighting so
much over it.

It also proves, though I doubt anyone important is listening, *why the
network broadcast architecture is shaped the way it is*, and it implies,
*to* anyone important who is listening, just how bad a fit that is for
a point- or even multi-point server to viewers environment.

Yes and no... The existing system is a multi-point (transmission towers)
to viewers (multicast) environment. No reason that isn't feasible on the
Internet as well.

Oh: and all the extra servers and switches necessary to set that up?

For equivalent service (linear programming), no need. For VOD, turns
out to be basically identical anyway.

*Way* more power than the equivalent transmitters and TV sets. Even if
you add in the cable headends, I suspect.

Not if you allow for multicast.

In other news: viewers will tolerate Buffering... to watch last night's
daily show. They will *not* tolerate it while they're waiting to see if
the winning hit in Game 7 is fair or foul -- which means that it will
not be possible to replace that architecture until you can do it at
technical parity... and that's not to mention the emergency communications
uses of "real" broadcasting, which will become untenable if enough
critical mass is drained off of said "real broadcasting" by other
services which are only Good Enough.

Viewers already tolerate a fair amount of buffering for exactly that.
The bleepability delay and other technical requirements, the bouncing
of things off satellites, etc. all create delays in the current system.

If you keep the delay under 5s, most viewers won't actually know
the difference.

As to the emergency broadcast system, yeah, that's going to lose.

However, the reality is that things are changing and people are tending
to move towards wanting VOD based services more than linear programming.

Owen

I do. Let's assume that there is a multicast future where it's being
legitimately used for live television, and whatever else.

The same mcast infrastructure will be utilized by Amazon.com to stream
popular titles (can you say New Releases) onto users' devices. You
may be unicast for the first few minutes of the movie (if you really
want to start watching immediately) and change over to a
multicast-distributed stream once you have "caught up" to an
in-progress stream.
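
A minimal sketch of that "catch up, then join" idea in Python, with a
made-up group address; content protection, FEC, and player buffering
are all hand-waved away:

    import socket
    import struct

    # Hypothetical multicast group/port for a popular title (illustrative only)
    MCAST_GRP = '233.252.0.1'
    MCAST_PORT = 5004

    def join_popular_title_stream():
        """Open a UDP socket and join the shared multicast stream."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', MCAST_PORT))
        mreq = struct.pack('4sl', socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return sock

    # The client would pull the first few minutes over ordinary unicast and,
    # once its play point reaches the shared stream's position, start reading
    # packets from the group instead:
    #
    #   sock = join_popular_title_stream()
    #   while True:
    #       packet = sock.recv(1500)   # hand packets to the player from here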

If Netflix had licensing agreements which made it possible for their
users to store movies on their local device, this would work even
better for Netflix, because of the "queue and watch later" nature of
their site and users. I have a couple dozen movies in my instant
queue. It may be weeks before I watch them all. The most popular
movies can be multicast, and my DVR can listen to the stream when it
comes on, store it, and wait for me to watch it.

I am sure Amazon and Netflix have both thought of this already (if
not, they need to hire new people who still remember how pay-per-view
worked on C-band satellite) and are hoping multicast will one day come
along and massively reduce their bandwidth consumption on the most
popular titles. I am also certain the cable companies have thought of
it, and added it to the long list of reasons they will never offer
Internet multicast, or at least, not until a competitor pops up and
does it in such a way that customers understand it's a feature they
aren't getting.

want? You going to pay for it? then go ahead!

So what's the cost then - if people paid for their bandwidth instead of freeloading
off the asymmetric usage patterns? I.e. when that 0.3% becomes 80%. Has anyone analysed
this yet? I think the cost metrics will indicate that any network with
video is going to have to set up its own distribution and caching POP mesh
(i.e. a CDN!) to do it anywhere near economically.

Additionally, while you may think you want to watch what you want to watch and
that's it, it seems likely there'll be a limited amount of material available,
or the caching metrics go out the window, i.e. if everyone is watching something
different at any one time.

/kc

This isn't a take-it-or-leave-it deal. To start with, most streaming is VOD, which even within a cable network eats up huge amounts of bandwidth. In the end, it's expected that there will be a mix of multicast and VOD.

Watch the game live via multicast. Missed the game? Watch it on demand. As things progress, we'll probably see more edge content delivery systems (like Akamai) caching and storing huge amounts of video for the local populace. It won't be every movie, but it will be the ones with a high repeat rate, easing traffic off critical infrastructure, saving everyone money, and making everyone happy.

What would be really awesome (unless I've missed it) is Internet access to the emergency broadcast system and local weather services; all easily handled with multicast.

Jack

Have you ever actually been involved with really large scale multicast implementations? I take it that's a no.

The -only- way that would work internet wide, and it defeats the purpose, is if your client side created a tunnel back to your multicast source network. Which would mean you're carrying your multicast data over anycast.

If you, the multicast broadcaster, don't have extensive control of the -entire- end-to-end IP network, it will be significantly broken significant amounts of the time.

...david (former member of a team of engineers who built and maintained a 220,000 seat multicast video network)

Have you ever actually been involved with really large scale multicast
implementations? I take it that's a no.

Nope. I prefer small scale. :)

The -only- way that would work internet wide, and it defeats the
purpose, is if your client side created a tunnel back to your multicast
source network. Which would mean you're carrying your multicast data
over anycast.

So we don't have multicast-with-unicast-fallback deployments on the Internet today for various events/streams?

If you, the multicast broadcaster, don't have extensive control of the
-entire- end-to-end IP network, it will be significantly broken
significant amounts of the time.

Clients can't fall back to unicast when multicast isn't functional? I'd expect multicast to save some bandwidth, not all of it.

...david (former member of a team of engineers who built and maintained
a 220,000 seat multicast video network)

Cool. I did a 3 seat multicast video network, and honestly am largely ignorant of multicast over the Internet (on my list!) but do listen to people discuss it. :P

Jack

Have you heard of multicast? :)

Antonio Querubin
808-545-5282 x3003
e-mail/xmpp: tony@lava.net

Ah, something I know something about for a change. :)

In fact, there's some work in progress on this topic, Jack; FEMA is working
on replacing the EAS -- which itself replaced EBS, and earlier, Conelrad --
with a new system called iPAWS: The Integrated Public Alert and Warning
System.

At the moment, they're working on the "replace the EAS backbone" part of it,
which work is about a year behind schedule, and everyone wants an extension,
but there are other useful places to apply some effort. I'm a designer, not
a coder, so I've been piddling around in the part I'm good at; thinking about
design.

Some of the results are here:

http://www.incident.com/cookbook/index.php/Rough_consensus_and_running_code

and

http://www.incident.com/cookbook/index.php/Alerting_And_Readiness_Framework

and I invite off-list email from anyone who has suggestions to toss in the
pot.

Cheers,
-- jra
(I would like to subject-unthread this, but my mailer is too stupid. Sorry)

Which points to the need for service providers to deploy robust multicast routing.

Antonio Querubin
808-545-5282 x3003
e-mail/xmpp: tony@lava.net

NWS transmits their NOAAPORT data as a multicast stream from geostationary satellites. All someone has to do (actually it would make more sense if NOAA/NWS did this themselves and bypass the satellites) is to gateway that stuff onto the Internet MBONE. NOAAPORT already has globally-assigned multicast addresses and port numbers reserved for it.

Antonio Querubin
808-545-5282 x3003
e-mail/xmpp: tony@lava.net

No doubt - it also points to multicast itself needing a bit more sanity and flexibility in implementation. When you have to tune -every- L3 device along the path for each stream, well....

As Owen pointed out, perhaps carriers will eventually be motivated to make this happen in order to reduce their own bandwidth costs. Eventually.

In the meantime, speaking with my content hat on, we stick with unicast. :)

Yes, Tony, but they can't *count the connected users that way*, you see.

For my part, as someone who used to run a small edge network, what I wonder
is this: is there a multicast repeater daemon of some sort, where I can put
it on my edge, and have it catch any source requested by an inside user and
re-multicast it to my LAN, so that my uplink isn't loaded by multiple
connections?

Or do I need to take the Multicast class again? :)

Cheers,
-- jra

Yes, Tony, but they can't *count the connected users that way*, you see.

Actually, given content protection, I fully expect any device receiving multicast video to also have a session open to handle various things, possibly even getting keys for decrypting streams. I doubt they want anyone hijacking a video stream. I also expect to see video shift to region-specific commercials. After all, why charge just one person for a commercial timeslot when you can charge hundreds or thousands, each for their own local audience; more if they want national.

For my part, as someone who used to run a small edge network, what I wonder
is this: is there a multicast repeater daemon of some sort, where I can put
it on my edge, and have it catch any source requested by an inside user and
re-multicast it to my LAN, so that my uplink isn't loaded by multiple
connections?

If it's actual multicast, it should be there already. I've seen a few interesting daemons for taking unicast and splitting it out, though. A buddy had a little Perl script set up with a ReplayTV which allowed one master connection that could control the ReplayTV, while all other connections were view-only. It was simple and cute.
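
For jra's repeater idea, a rough sketch in Python with made-up group
addresses (with working IGMP/PIM on the edge you wouldn't need this at
all):

    import socket
    import struct

    # Hypothetical addresses, purely illustrative
    UPSTREAM_GRP, UPSTREAM_PORT = '233.252.0.2', 5004  # group an inside user asked for
    LAN_GRP, LAN_PORT = '239.255.0.2', 5004            # administratively scoped LAN group

    # Receive side: join the upstream group once, on the uplink
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    rx.bind(('', UPSTREAM_PORT))
    mreq = struct.pack('4sl', socket.inet_aton(UPSTREAM_GRP), socket.INADDR_ANY)
    rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    # Transmit side: re-multicast onto the LAN with TTL 1 so it stays local
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    tx.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    while True:
        data = rx.recv(65535)
        tx.sendto(data, (LAN_GRP, LAN_PORT))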

Or do I need to take the Multicast class again? :)

I sure as hell need to read up again. I keep getting sidetracked with other things. Perhaps after I wrap up the IPv6 rollout, I can get back to multicast support. I believe most of my NSPs support it; I just never have time to iron out the details to the point where I'm comfortable risking my production routers.

Jack

Yes, Tony, but they can't *count the connected users that way*, you see.

There are various ways to do that. E.g. Windows Media Server can log
multicast Windows Media clients.

For my part, as someone who used to run a small edge network, what I wonder
is this: is there a multicast repeater daemon of some sort, where I can put
it on my edge, and have it catch any source requested by an inside user and
re-multicast it to my LAN, so that my uplink isn't loaded by multiple
connections?

You might want to take a look at AMT:

http://tools.ietf.org/html/draft-ietf-mboned-auto-multicast-10

Antonio Querubin
808-545-5282 x3003
e-mail/xmpp: tony@lava.net