[NANOG] would IPv6 help us save energy?

hello

i have a question :

" IF we would use multicast" streaming ONLY, for appropriet
content , would `nt this " decrease " the overall internet traffic ?

Isn't this an argument for IPv6 / green IPv6 :wink: as well?

just my 2 cents

marc

Some people make more money shipping more bits. They may not have
any motivation or desire to decrease traffic.

Adrian


Some people make more money shipping more bits. They may not have
any motivation or desire to decrease traffic.

hello adrian, yes i know

but i would like to know if there is some material / links, case studies
or papers / statistics around to visualise it, for a presentation
that i am planning to do.

greetings

Marc

" IF we would use multicast" streaming ONLY, for appropriet
content , would `nt this " decrease " the overall internet traffic ?

On one hand, the amount of content that is 'live' or 'continuous' and suitable for multicast streaming isn't a large percentage of overall internet traffic to begin with. So moving most live content to multicast would have little overall effect.

However, for some live content where the audience is either very large or concentrated on various networks, moving to multicast certainly has significant advantages in reducing traffic on the networks closest to the source or where the viewer concentration is high (particularly where the viewer numbers infrequently spike significantly higher than the average).

But network providers make their money in part by selling bandwidth. The folks who would need to push for multicast are the live/perishable content providers as they're the ones who'd benefit the most. But if bandwidth is cheap they're not really gonna care.

Isn't this an argument for IPv6 / green IPv6 :wink: as well?

It's an argument for decreasing traffic and improving network efficiency and scalability to handle 'flash crowd events'. IPv6 has nothing to do with it.

Antonio Querubin
whois: AQ7-ARIN

On one hand, the amount of content that is 'live' or 'continuous' and
suitable for multicast streaming isn't a large percentage of overall
internet traffic to begin with. So moving most live content to
multicast would have little overall effect.

I'm wondering how much content is used TiVo style, not in real time,
but fairly soon thereafter. It might make sense to multicast feeds to
local caches so when people actually want stuff, it doesn't come all
the way across the net.

R's,
John

" IF we would use multicast" streaming ONLY, for appropriet
content , would `nt this " decrease " the overall internet
traffic ?

On one hand, the amount of content that is 'live' or 'continuous'
and suitable for multicast streaming isn't a large percentage of
overall internet traffic to begin with. So moving most live
content to multicast would have little overall effect.

right, i am aware of that, and it was meant as a hypothetical rant :wink:

However, for some live content where the audience is either very
large or concentrated on various networks, moving to multicast
certainly has significant advantages in reducing traffic on the
networks closest to the source or where the viewer concentration is
high (particularly where the viewer numbers infrequently spike
significantly higher than the average).

i am not a math genius, and i am talking about, for example, serving

10,000 unicast streams and
10,000 multicast streams

would the multicast streams be more efficient, or let's say, would you
need more machines to serve 10,000 unicast streams?

But network providers make their money in part by selling
bandwidth. The folks who would need to push for multicast are the
live/perishable content providers as they're the ones who'd benefit
the most. But if bandwidth is cheap they're not really gonna care.

well, cheap is relative. i bet it's cheap where google hosts its
NOCs, but it's not cheap in Brazil, Argentina or Indonesia.

Isn't this an argument for IPv6 / green IPv6 :wink: as well?

It's an argument for decreasing traffic and improving network
efficiency and scalability to handle 'flash crowd events'. IPv6 has
nothing to do with it.

thanks for your opinion.

Marc

John Levine wrote:

I'm wondering how much content is used TiVo style, not in real time,
but fairly soon thereafter. It might make sense to multicast feeds to
local caches so when people actually want stuff, it doesn't come all
the way across the net.

I think the good folks at Akamai may have already thought of this. :slight_smile:

teh

http://research.microsoft.com/~ratul/akamai.html

http://www.akamai.com/html/about/management_dl.html

multicast ?

i have another theory, but i don't talk about it :wink:

BUT ..... someone mentioned akamai had 13,000 servers; imagine they
just needed 100. would this hurt? :wink:

cheers

Marc

> I'm wondering how much content is used TiVo style, not in real time,
> but fairly soon thereafter. It might make sense to multicast feeds to
> local caches so when people actually want stuff, it doesn't come all
> the way across the net.

I think the good folks at Akamai may have already thought of this. :slight_smile:

Akamai has built a Content Delivery Network (CDN) because they do not
have to rely on any specific ISP or any specific IP network
functionality.
If you go with IP Multicast, or MPLS P2MP(Point to MultiPoint) then you
are limited to only using ISPs who have implemented the right protocols
and who peer using those protocols. P2P is a lot like CDN because it
does not rely on any specific ISP implementation, but as a result of
being 100% free of the ISP, P2P also lacks the knowledge of the network
topology that it needs to be efficient. Of course, a content provider
could leverage P2P by predelivering its content to strategically located
sites in the network, just like they do with a CDN.

IP multicast and P2MP have routing protocols which tell them where to
send content. CDNs are either set up manually or use their own
proprietary methods to figure out where to send content. P2P currently
doesn't care about topology because it views the net as an amorphous
cloud.

NNTP, the historical firehose protocol, just floods it out
to everyone who hasn't seen it yet, but in fact the consumers of
an NNTP feed have been set up statically in advance. And this static
setup does include knowledge of the ISP's network topology, and knowledge
of the ISP's economic realities. I'd like to see a P2P protocol that
sets up paths dynamically, but allows for inputs as varied as those
old NNTP setups. There was also a time when LANs had some form of
economic reality configured in, i.e. some users were only allowed
to log into the LAN during certain time periods on certain days.
Is there any ISP that wouldn't want some way to signal P2P clients
how to use spare bandwidth without ruining the network for other
paying customers?

--Michael Dillon
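The kind of ISP signal Michael asks for can be sketched roughly as follows. The preference map format and the peer addresses are entirely hypothetical (no such standard existed; later efforts in the P4P/ALTO direction pursued this idea) — the point is just that a P2P client could sort candidate peers by an ISP-published path cost so on-net traffic is preferred over transit.

```python
# Sketch of an ISP-published preference map (hypothetical format):
# the ISP assigns each path type a cost, and the P2P client sorts
# candidate peers so that on-net and cheap-path peers are tried first.

ISP_PREFERENCE = {          # lower cost = preferred by the ISP
    "on-net":   0,          # same ISP, stays off transit links
    "peering":  1,          # settlement-free exchange
    "transit":  2,          # paid upstream, avoid when possible
}

def rank_peers(peers):
    """Sort (peer, path_type) candidates by the ISP's published cost."""
    return sorted(peers, key=lambda p: ISP_PREFERENCE[p[1]])

candidates = [("198.51.100.7", "transit"),
              ("192.0.2.12", "on-net"),
              ("203.0.113.5", "peering")]

print(rank_peers(candidates)[0][0])   # 192.0.2.12 -- the on-net peer wins
```

The interesting design question is the one Michael raises: whether the hint channel could also carry time-of-day or spare-capacity inputs, the way old NNTP feed configs encoded economic reality.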


I think it's safe to assume that isps are steering p2p traffic for the
purposes of adjusting their ratios on peering and transit links...

while it lacks the intentionality of playing with the usenet
spam/warez/porn firehose, a little TE to shift it from one exit to
another when you have lots of choices is presumably a useful knob to have.

Layer violations to tell applications that they should care about some
peers in their overlay network vs others seem like something with a lot
of potential unintended consequences.

For 10000 concurrent unicast streams you'd need not just more servers.
You'd need a significantly different network infrastructure than something
that would have to handle only a single multicast stream.

But supporting multicast isn't without its own problems either. Even the
destination networks would have to consider implementing IGMP and/or MLD
snooping in their layer 2 devices to obtain maximum benefit from
multicast.

Antonio Querubin
whois: AQ7-ARIN
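Antonio's snooping point can be illustrated with a toy model (the port counts are made up, not any real switch): without IGMP/MLD snooping, a layer-2 switch treats a multicast frame like broadcast and floods it out every port, so every attached host carries the stream whether it asked for it or not; with snooping, the switch forwards only to ports that sent a join.

```python
# Toy model of why IGMP/MLD snooping matters on the destination LAN:
# without snooping, a layer-2 switch floods multicast frames to every
# port; with snooping, it forwards only to ports that joined the group.

def ports_receiving(total_ports, joined_ports, snooping):
    """Return the set of ports that receive a multicast frame."""
    if snooping:
        return set(joined_ports)         # only subscribed ports
    return set(range(total_ports))       # flooded like broadcast

# Hypothetical 48-port switch with 3 hosts joined to the group
flooded = ports_receiving(48, {3, 17, 42}, snooping=False)
pruned  = ports_receiving(48, {3, 17, 42}, snooping=True)

print(len(flooded))  # 48 -- every port carries the stream
print(len(pruned))   # 3  -- only the subscribers
```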


hello all ,

For 10000 concurrent unicast streams you'd need not just more servers.

  thanks for the participation on this topic; i was speaking
"theoretically", and this was actually what i wanted to hear :wink:

You'd need a significantly different network infrastructure than
something that would have to handle only a single multicast stream.
But supporting multicast isn't without its own problems either.
Even the destination networks would have to consider implementing
IGMP and/or MLD snooping in their layer 2 devices to obtain maximum
benefit from multicast.

i was reading some papers about multicast activity on 9/11, and it was
interesting to read that it just worked even when most
of the "big player" sites went offline, so this gives me another
approach for emergency scenarios.

<http://www.nanog.org/mtg-0110/ppt/eubanks.ppt>

<http://multicast.internet2.edu/workshops/illinois/internet2-multicast-workshop-31-july-2-august-2006-1-overview.ppt>

Akamai has built a Content Delivery Network (CDN) because they do not
have to rely on any specific ISP or any specific IP network
functionality. If you go with IP Multicast, or MPLS P2MP (Point to
MultiPoint), then you are limited to only using ISPs who have
implemented the right protocols and who peer using those protocols.

so this is similar to a "walled garden" and not what we really want,
but i was clear that this is actually the only way to implement
a "new" technology into an existing infrastructure.

regards, and sorry for being a bit offtopic

Marc

<www.lettv.de>

Marc Manthey wrote:

i am not a math genius, and i am talking about, for example, serving
10,000 unicast streams and 10,000 multicast streams. would the
multicast streams be more efficient, or would you need more machines
to serve 10,000 unicast streams?

Your delivery needs to be sized against demand. 12 years ago, when I
started playing around with streaming on a university campus, boxes like
the following were science fiction:

As, for that matter, were n x 10 Gb/s Ethernet trunks.

To make this scale in either dimension, audience or bandwidth, the
interests of the service providers and the content creators need to be
aligned. Traditionally this has been something of a challenge for
multicast deployments. Not that it hasn't happened but it's not an
automatic win either.


i was reading some papers about multicast activity on 9/11, and it was
interesting to read that it just worked even when most
of the "big player" sites went offline, so this gives me another
approach for emergency scenarios.

The big player news sites were not taken offline due to network capacity
issues but rather because their dynamic content delivery platforms
couldn't cope with the flash crowds...

Once they got rid of the dynamically generated content (per-viewer page
rendering, advertising) they were back.
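A minimal Python sketch of that failure mode, using a memoized render function as a stand-in for a page cache (the page and crowd size are hypothetical): with per-viewer rendering the backend does work for every single request, while a cached static copy absorbs the whole flash crowd at the cost of one render.

```python
import functools

@functools.lru_cache(maxsize=None)
def render_page(path):
    """Expensive per-page render; cached so a flash crowd pays it once."""
    render_page.calls += 1
    return f"<html>{path}</html>"

render_page.calls = 0

# 100,000 viewers all hitting the same front page during the event
for _ in range(100_000):
    render_page("/front-page")

print(render_page.calls)   # 1 -- the cache absorbed the flash crowd
```

With per-viewer rendering (no cache decorator) the same loop would cost 100,000 renders, which is the backend load that took those sites down.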


Akamai has built a Content Delivery Network (CDN) because they do not
have to rely on any specific ISP or any specific IP network
functionality. If you go with IP Multicast, or MPLS P2MP (Point to
MultiPoint), then you are limited to only using ISPs who have
implemented the right protocols and who peer using those protocols.

so this is similar to a "walled garden" and not what we really want,
but i was clear that this is actually the only way to implement
a "new" technology into an existing infrastructure.

A maturing internet platform may be quite successful at resisting
attempts to change it. It's entirely possible, for example, that evolving
the mbone would have been more successful than "going native". The mbone
was in many respects a proto-p2p overlay, just as ip was an overlay on the
circuit-switched pstn.

That's all behind us, however, and the approach that we should drop all
the unicast streaming or p2p in favor of multicast transport because
it's greener or lighter weight is just so much tilting at windmills,
something I've done altogether too much of.

Use the tool where it makes sense and can be delivered in a timely fashion.

I became aware of something called espn360 last fall. I just did a
google search so I could provide a URL, but one of the top search
responses was an Aug 9, 2007 posting saying "ESPN360 Dies an
Unneccessary Death: A Lesson in Network Neutrality ..." I don't
think it's dead, though, and maybe if you don't know about it, you
can do your own google search.

I think Disney/ABC thinks they can get individual ISPs to pay them
to carry sports audio/video streams. I suppose that would be yet
another multicast stream method, assuming an ISP location had multiple
customers viewing the same stream.

Are other content providers trying to do something similar? How are
operators dealing with this? What opinions are there in the operator
community?

  Mr. Dale

I'm not sure of the particulars, but Hulu (NBC/Universal and News
Corp) and FanCast (Comcast) seem to have an interesting relationship.
I would love to know more, but i detest reading financials. :wink:

-Jim P.

Dale:

ESPN360 used to be something that internet subscribers paid for themselves,
but now it's something that ISPs (most interesting to those who are also
video providers) can offer.

If you google around you can find a pretty good Wikipedia page on ESPN360.

I looked into this for our operations because we do both (internet and
video). The price was reasonable and you only pay on the number of internet
subs that meet their minimum performance standards. Since 50% of our user
base is at 128/128 kbps, that's a lot of subscribers we didn't need to pay
for. In the end, I didn't get buy-in from the rest of the management team
into adding this. I think they perceived (and probably correctly so) that
too few of our users would actually *use* it. If I could get even 2% of our
customer base seriously interested I think we would move on this.

BTW, there's no multicast (at least from Disney/ABC directly) involved.
It's just another unicast video stream like YouTube.

Frank

I looked into this for our operations because we do both
(internet and video). The price was reasonable

That's interesting. Under the commercial television broadcast model of
American networks such as ABC, CBS, FOX, NBC, The CW and MyNetworkTV,
affiliates give up portions of their local advertising airtime in
exchange for network programming.

Isn't this an argument for IPv6 / green IPv6 :wink: as well?

besides the multicast argument, ipv6 and the transition to it
with dual stacks, etc., afaik will require more horsepower
and memory to handle routing info/updates, so i don't think
it will reduce energy consumption, au contraire.

one place where major improvements can be made is to
increase the efficiency of switched-mode power supplies on servers
and other gear installed in large datacenters.

My .02

besides the multicast argument,

hi Jorge , all

ok, i was talking about a "campus" installation.

imagine you want to broadcast a live event, so
10,000 unicast streams versus 10,000 multicast streams, for example.

from what toni replied, you need less horsepower with the multicast
streams

For 10000 concurrent unicast streams you'd need not just more servers.

but would like to know how this could be calculated.

my 00.2 :wink:

marc
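For the back-of-envelope calculation Marc asks for, here is a sketch with assumed numbers (10,000 viewers and a 2 Mbit/s stream, both hypothetical): with unicast the source must emit one copy of the stream per viewer, while with multicast it emits a single copy and the routers replicate it toward the receivers.

```python
# Back-of-envelope comparison of source-side egress needed to reach
# N viewers with unicast vs. multicast delivery. The viewer count and
# stream bitrate are assumed example values, not measurements.

def unicast_egress_gbps(viewers, stream_mbps):
    """Unicast: one copy per viewer leaves the source."""
    return viewers * stream_mbps / 1000.0

def multicast_egress_gbps(stream_mbps):
    """Multicast: one copy leaves the source; routers replicate downstream."""
    return stream_mbps / 1000.0

viewers = 10_000
rate = 2.0  # Mbit/s per stream (assumed)

print(unicast_egress_gbps(viewers, rate))   # 20.0 Gbit/s of source egress
print(multicast_egress_gbps(rate))          # 0.002 Gbit/s -- a single stream
```

This is only the source side, which is why Antonio said you would need not just more servers but a different network: 20 Gbit/s of egress also implies proportionally bigger uplinks, load balancers and so on, whereas the multicast source could sit behind a single modest interface.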

evening all,

found a related article about the power consumption savings in IPv6.