I would argue that other than sports (and some news) events, there is
pretty much no content that needs to be real time.
I'm not sure I agree. I've noticed that almost any form of live TV, with the exception of news and sports programming, exploits real-time transmission to allow audience interaction. For instance:
- Phone in discussion and quiz shows
- Any show with voting
- Video request shows
Not only does this type of programming require real-time distribution; because these shows are quite often cheaper to produce than pre-recorded entertainment or documentaries, they tend to fill a large portion of the schedule. In some cases the show producers share revenue from the phone calls, too. That makes them more attractive to commissioning editors, I suspect.
And since there are so many of these reality shows in
existence and the existing broadcast technology seems to
perfectly meet the needs of the show producers,
what is the point of trying to shift these shows
to the Internet?
If it ain't broke, don't fix it!
I do believe that the amount of video content on
the Internet will increase dramatically over the next
few years, just as it has in the past. But I don't
believe that existing video businesses, such as
TV channels, are going to shift to Internet distribution
other than through specialized services.
The real driver behind the future increase in
video on the Internet is the falling cost of video
production and the widespread knowledge of how to
create watchable video. Five years ago, my son was taking a video
production course in high school. Where do you think YouTube gets its
content?
In the past, it was broadband to the home, webcams
and P2P that drove the increase in video content, but
the future is not just more of the same. YouTube has
leveraged the increased level of video production skills
in the population but only in a crude way.
Let's put it this way. How much traffic on the net was
a result of dead-tree newspapers converting to Internet
delivery, and how much was due to the brand-new concept
of blogging?
There are some ISPs in North America who tell me that something like 80% of their traffic *today* is BitTorrent. I don't know how accurate their numbers are, or whether those ISPs form a representative sample, but it certainly seems possible that the traffic exists regardless of the legality of the distribution.
If the traffic is real, and growing, the question is neither academic nor diversionary.
However, if we close our eyes and accept for a minute that P2P video isn't happening, and all growth in video over the Internet will be in real-time streaming, then I think the future looks a lot more scary. When TSN.CA streamed the World Junior Hockey Championship final via Akamai last Friday, there were several ISPs in Toronto who saw their transit traffic *double* during the game.
We have looked at Amazon's S3 solution for storage, since it is relatively cheap. But the transit costs from Amazon get quite expensive when you move media files at large scale: at $0.20 per GB of data transferred, the bill adds up fast. At Pando we move roughly 60 TB a day just from our super nodes. Amazon is cheap storage but expensive delivery at large scale.
Keith O'Neill
Sr. Network Engineer
*Pando Networks*
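As a rough sanity check on the figures Keith quotes (the 60 TB/day volume and $0.20/GB rate are from his post; the 30-day extrapolation is mine):

```python
# Back-of-the-envelope transit cost at the quoted $0.20/GB rate.
# Volume (60 TB/day) and price are from the post above; the monthly
# figure is a simple 30-day extrapolation.
TB_PER_DAY = 60
GB_PER_TB = 1024          # using binary terabytes
PRICE_PER_GB = 0.20

daily_cost = TB_PER_DAY * GB_PER_TB * PRICE_PER_GB
monthly_cost = daily_cost * 30

print(f"daily:   ${daily_cost:,.0f}")    # $12,288
print(f"monthly: ${monthly_cost:,.0f}")  # $368,640
```

So delivery alone would run well over $350K a month at that volume, which is what makes "cheap storage, expensive delivery" the right summary.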
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably.
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
Office: 404-748-6207
Cell: 404-808-4651
Internal Ext: x22007 constantinegi@corp.earthlink.net
Those numbers are reasonably accurate for some networks at certain times. There is often a back and forth between BitTorrent and NNTP traffic. Many ISPs regulate BitTorrent traffic for this very reason. Massive increases in this type of traffic would not be looked upon favorably.
The act of regulating p2p traffic is a bit like playing whack-a-mole. At what point does it cost more to play that game than it costs to build out to carry the traffic?
If you considered my previous posts, you would know I agree streaming is scary on a large scale, but unicast streaming is what I reference. Multicast streaming is the real solution. Ultimately, a global multicast network is the only way to deliver these services to a large market.
The trouble with IP multicast is that it doesn't exist, in a wide-scale, deployed, inter-provider sense.
Which is why ISPs will see all of the above. There will be store-and-forward video, streaming video, on demand video, real-time interactive video, and probably 10 other types I can't think of.
The concern for university or ISP networks isn't that some traffic uses
70% of their network; it's that 5% of the users are using 70%, 80%, 90%,
100% of their network, regardless of what that traffic is. It isn't
"background" traffic using "excess" capacity: it peaks at the same times
as other peak traffic. P2P congestion isn't constrained to a single
transit bottleneck; it causes bottlenecks in every path, local and
transit. Local congestion is often more of a concern than transit.
The big question is whether the 5% of the users will continue to pay
for 5% of the network, or, if they use 70% of the network, will they pay
for 70% of the network? Will 95% of the users see their prices fall and 5% of the users see their prices rise?
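To make the pricing question concrete (a sketch using the 5%/70% split from the paragraph above; the arithmetic is mine):

```python
# If 5% of users generate 70% of traffic, how much heavier is each of
# them than an average user among the other 95%? Illustrative only.
heavy_users, heavy_traffic = 0.05, 0.70
light_users, light_traffic = 0.95, 0.30

per_heavy = heavy_traffic / heavy_users    # 14.0 "units" of traffic each
per_light = light_traffic / light_users    # ~0.32 units each
ratio = per_heavy / per_light

print(f"heavy user = about {ratio:.0f}x a light user")  # about 44x
```

A flat-rate price that covers the average user undercharges the heavy user by more than a factor of forty under these assumptions, which is exactly the tension the question raises.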
I don't think they have a choice, really. The state of the art in
application-aware QoS/rate shaping is so far behind the times that by
the time it caught up, the application would have changed.
You are correct. Today, IP multicast is limited to a few small closed networks. If we ever migrate to IPv6, this would instantly change. One of my previous assertions was the possibility of streaming video as the major motivator of IPv6 migration. Without it, video streaming to a large market, outside of multicasting in a closed network, is not scalable, and therefore, not feasible. Unicast streaming is a short-term bandwidth-hogging solution without a future at high take rates.
Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
The available address space for multicast in IPv4 is limited. IPv6 vastly expands this space. And here, I may have been guilty of putting the cart before the horse. Inter-AS multicast does not exist today because the motivators are not there. It is absolutely possible, but providers have to want to do it. Consumers need to see some benefit from it. Again, the benefit needs to be seen by a large market. Providers make decisions in the interest of their bottom line. A niche service is not a motivator for inter-AS multicast. If demand for variety in service provider selection grows with the proliferation of IPTV, we may see the required motivation for inter-AS multicast, which would position us to move to the large multicast space available in IPv6.
Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
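For scale on the address-space point above (the prefixes are the standard allocations: IPv4 multicast is 224.0.0.0/4, IPv6 multicast is ff00::/8; the counts below are just powers of two):

```python
# Raw multicast group address space, IPv4 vs IPv6.
ipv4_groups = 2 ** (32 - 4)    # 224.0.0.0/4 leaves 28 bits of group ID
ipv6_groups = 2 ** (128 - 8)   # ff00::/8 leaves 120 bits of group ID

print(f"IPv4 multicast groups: {ipv4_groups:,}")        # ~268 million
print(f"IPv6 multicast groups: about {ipv6_groups:.2e}")
```

Whether ~268 million IPv4 groups is actually "limited" for broadcast-style TV is debatable, as a later post in this thread points out in the context of SSM.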
This is a little presumptuous on my part, but what other reason would motivate a migration to IPv6? I fail to see us running out of unicast addresses any time soon. I have been hearing that IPv6 is coming for many years now. I think video service is really the only motivation for migrating.
I am wrong on plenty of things. This may very well be one of them.
Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.
I could have said the same thing, but with the opposite meaning.
You take one 10+ year technology with minimal deployment and put it
on top of another 10+ year technology also far from being widely
deployed and you end up with something quickly approaching zero
deployment, instantly.
In my experience, content providers want to use P2P because it "reduces"
their distribution costs (in quotes, because I'm not convinced it does, in
the real world). Content providers don't care whether access providers like
P2P or not, just whether it works or not.
On one hand, access providers are putting in place rate limiting or blocking
of P2P (subject to discussions of how effective those are), but on the other
hand, content providers are saying that P2P is the future...
Multicast streaming may be a big win when you're only streaming the top
5 or 10 networks (for some value of 5 or 10). What are the performance
characteristics if you have 300K customers and, at any given time, 10%
are watching something from the "long tail"? What's the difference between
handling 30K unicast streams and 30K multicast streams that each have only
one, or at most 2-3, viewers?
1/2, 1/3, etc. of the bandwidth for each additional viewer of the same stream?
The worst case for a multicast stream is the same as the unicast stream, but the unicast stream is always the worst case.
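The worst-case point above can be sketched with a toy model (the channel counts are hypothetical, the stream rate is an assumed 2 Mbps, and this models a single shared link only):

```python
# Bandwidth on one shared link: unicast scales with viewers,
# multicast scales with distinct channels actually being watched.
def unicast_mbps(viewers_per_channel, rate=2):
    return sum(viewers_per_channel.values()) * rate

def multicast_mbps(viewers_per_channel, rate=2):
    # one copy per channel with at least one downstream viewer
    return sum(1 for v in viewers_per_channel.values() if v > 0) * rate

popular = {"ch1": 15000, "ch2": 10000, "ch3": 5000}   # 30K viewers, 3 channels
long_tail = {f"tail{i}": 1 for i in range(30000)}     # 30K viewers, 30K channels

print(unicast_mbps(popular), multicast_mbps(popular))      # 60000 vs 6
print(unicast_mbps(long_tail), multicast_mbps(long_tail))  # 60000 vs 60000
```

With everyone on unique tail channels, multicast degenerates to the unicast cost; with concentrated viewing, it collapses to one copy per channel. Unicast pays the worst case in both scenarios.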
Multicast doesn't have to be real-time. If you collect interested subscribers over a longer time period, e.g. scheduled downloads over the next hour, day, week, month, you can aggregate more multicast receivers through the same stream. TiVo collects its content using a broadcast
schedule.
A "long tail" distribution includes not only the tail, but also the head. 30K unicast streams may be the same as 30K multicast streams, but
30K multicast streams is a lot better than 300,000 unicast streams.
Although the long-tail streams may have 1, 2, or 3 receivers each, the Pareto curve also has 1, 2, or 3 streams with 50K, 25K, 12K receivers.
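Putting rough numbers on the head-plus-tail point (the 50K/25K/12K head channels and the 30K single-viewer tail are the illustrative figures from the paragraph above):

```python
# Streams crossing a link: unicast needs one per viewer, multicast one
# per channel, even when the tail gains nothing from sharing.
head = {"ch1": 50000, "ch2": 25000, "ch3": 12000}   # popular channels
tail_channels = 30000                               # one viewer each

unicast_streams = sum(head.values()) + tail_channels   # one per viewer
multicast_streams = len(head) + tail_channels          # one per channel

saving = 1 - multicast_streams / unicast_streams
print(f"{unicast_streams:,} unicast vs {multicast_streams:,} multicast "
      f"({saving:.0%} fewer streams)")
```

Even if multicast does nothing for the tail, the three head channels alone cut the stream count by roughly three quarters in this sketch, which is the sense in which the head dominates the savings.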
With Source-Specific Multicast addressing there isn't a shortage of multicast addresses for the typical broadcast usage. At least not until
we also run out of IPv4 unicast addresses.
There is rarely only one way to solve a problem. There will be multiple
ways to distribute data, video, voice, etc.
There you go. SSM would be a great solution. Who the hell supports it, though?
We still get back to the issue of large scale market acceptance. High take rate will be limited to the more popular channels, which are run by large media conglomerates, who are reluctant to let streams out of a closed network.
Gian Anthony Constantine
Senior Network Design Engineer
Earthlink, Inc.