"IP networks will feel traffic pain in 2009" (C|Net & Cisco)

"Cisco VNI projections indicate that IP traffic will increase at a combined
annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling
every two years. This will result in an annual bandwidth demand on the
world's IP networks of approximately 522 exabytes2, or more than half a
zettabyte."

http://news.cnet.com/8301-13846_3-10145480-62.html

i.e. about the same as it has been. deep shock.

randy
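
(The quoted arithmetic checks out; a quick sketch, taking the article's
522 EB figure as given and deriving only the growth math:)

    # Back-of-envelope check of the Cisco numbers quoted above.
    import math

    cagr = 0.46                                # 46% compound annual growth
    print("doubling time: %.2f years" % (math.log(2) / math.log(1 + cagr)))
    # ~1.83 years, i.e. "nearly doubling every two years"
    print("2007->2012 multiplier: %.1fx" % (1 + cagr) ** 5)   # ~6.6x

    eb_per_year = 522.0                        # projected annual demand
    print("in zettabytes: %.3f" % (eb_per_year / 1000))       # > half a ZB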

Two thoughts:

Why do some people think that bytes/month is a relevant measure of traffic? Peak bits/second is what your network needs to handle for it to perform well.
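
(A rough conversion shows how far apart the two measures can be; the 4:1
peak-to-mean ratio below is an illustrative assumption, not a measured
figure:)

    # Turn a monthly transfer volume into the average rate, then into the
    # (assumed) peak rate the network actually has to be engineered for.
    def monthly_tb_to_mbps(terabytes_per_month, peak_to_mean=4.0):
        seconds = 30 * 24 * 3600                  # ~one month
        avg = terabytes_per_month * 1e12 * 8 / seconds / 1e6
        return avg, avg * peak_to_mean            # capacity must cover peak

    avg, peak = monthly_tb_to_mbps(100)           # 100 TB/month
    print("average: %.0f Mbps, engineer for: %.0f Mbps" % (avg, peak))
    # average: ~309 Mbps, engineer for: ~1235 Mbps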

For me a CAGR of 46% is a slowdown. I'm used to 75-120% traffic growth per year; 46% is a relief. As markets mature (we're seeing a decline in the number of DSL lines in the country; the increase is in LAN and mobile), fewer new people are going online (the ones who want Internet access already have it), and the increase per year in traffic from existing users is slower than the increase seen during the rush of new users coming online.

This will of course vary by where you are in the world...

With no bump when v4 address runout happens.

We'll see.

Matthew Kaufman

It is a slowdown, but the underlying situation is not the same.

100 Mbps came out before most were doing 100 Mbps on a typical LAN in aggregate.
1000 Mbps came out before most were doing 1000 Mbps on a typical WAN in aggregate.
10000 Mbps came out before most were aggregating 10x[GigE|OC12] on their largest individual WAN links.
100000 Mbps should come out shortly after most are aggregating 32x10GE on a typical WAN link.

See a pattern forming here?
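
(The last step of the pattern, in numbers, taking the claims above at
face value:)

    # By the time 100GE ships, a "typical" large WAN link is already a
    # 32x10GE bundle, per the claim above.
    bundle = 32 * 10       # Gbps already being aggregated
    new_standard = 100     # Gbps offered by the next Ethernet generation
    print("bundle / new standard = %.1fx" % (bundle / float(new_standard)))
    # 3.2x: for the first time, the new standard arrives *behind* demand.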

Alex H. Ryu wrote:

IPv4 address runout probably won't affect traffic volume that much, since
people will use NAT to conserve addresses and IPv6 will slowly take over
some of the traffic.

It's more a question of the cost of bandwidth and the applications people use.

As I said, "we'll see"... the people using NATing are still going to
have to connect to something, somewhere, to get the data that makes up
this traffic.

The people buying boxes that move traffic around will be spending some
of that money on either v6 capability or NATs or both, too.

Matthew Kaufman

"Cisco VNI projections indicate that IP traffic will increase at a combined
annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling
every two years. This will result in an annual bandwidth demand on the
world's IP networks of approximately 522 exabytes2, or more than half a
zettabyte."

http://news.cnet.com/8301-13846_3-10145480-62.html

duh...

from a much earlier thread...

> that lesson is, the installed base is meaningless, and how we did it
> before is meaningless, all that matters is getting growth right.
>
> Mike O'Dell... Mo's Law. 1994

I believe the quote is "What installed base?"

/vijay

  to play devil's advocate, how much impact does caching have
  on the total traffic flow anyway?

--bill

"Cisco VNI projections indicate that IP traffic will increase at a combined
annual growth rate (CAGR) of 46 percent from 2007 to 2012, nearly doubling
every two years. This will result in an annual bandwidth demand on the
world's IP networks of approximately 522 exabytes2, or more than half a
zettabyte."

http://news.cnet.com/8301-13846_3-10145480-62.html

duh...

from a much earlier thread...

that lesson is, the installed base is meaningless, and how we did it
before is meaningless, all that matters is getting growth right.

      Mike O'dell... Mo's Law. 1994

I believe the quote is What installed base?

/vijay

  to play devils advocate, how much impact does caching have
  on the total traffic flow anyway?

Less and less would be my estimate. How much video is cached? How much P2P is cached?

Regards
Marshall

Define "cached".

For instance, most of the video today (which apparently had 12 zeros in the bits per second number) was "cached", if you ask the CDNs serving it.

Sounds to me like that is significant, no matter how big your network is.

If you asked Akamai, Limelight and friends, they might tell you that 100% of important video is cached. And viewed from some angles, every peer who receives a block of data and offers to serve it to others is caching that block of data for the benefit of other peers.

Joe

aha... so taking a peek at my nearby BT tracker & client, it seems
  that there is about 12% "duplicate" traffic.

  wild extrapolation -- poor caching design / flaky networks carry a
  10-15% extra traffic load, "just to make sure".

  i'd guess that 10% of a femto (or is it the other way) byte of traffic
  relates to real money.

--bill

Hmm... what's the wholesale cost of a 10G connection?
And what would a zettabyte-scale connection cost at today's rates?

methinks Pres. Obama's USD 825B package is way too small - or the cost
per GByte is going to drop a lot... if the traffic loads keep up.

--bill
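
(A rough stab at the question, assuming an illustrative 2009-ish
wholesale transit price of USD 10 per Mbps per month - not a quoted
rate:)

    # What would moving 522 EB/year cost at assumed wholesale transit rates?
    eb_per_year = 522.0
    avg_bps = eb_per_year * 1e18 * 8 / (365 * 24 * 3600)
    avg_mbps = avg_bps / 1e6
    print("average load: %.0f Tbps" % (avg_bps / 1e12))       # ~132 Tbps

    usd_per_mbps_month = 10.0                                 # assumption
    annual = avg_mbps * usd_per_mbps_month * 12
    print("transit bill: $%.1fB/year" % (annual / 1e9))       # ~$15.9B/year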

If, for example, Google's current generation of YouTube content serving
wasn't 100% uncachable by design, Squid caches would probably be
saving a stupid amount of bandwidth for those of you who are using it.

People rolling Squid + 'magic adrian rules to rewrite YouTube URLs
so they don't suck' report upwards of 80% byte hit rates on -just-
the YouTube content, because people view the same bloody popular
videos over and over again. That's 80% of a couple hundred megabits
for a couple of groups in Brazil, and that translates to mega dollars
for them.
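
(Conceptually such a rewrite helper is tiny. A minimal sketch against
Squid 2.7's storeurl_rewrite_program interface; the videoplayback URL
layout and the "id" parameter are illustrative stand-ins, since Google's
real scheme is the deliberately cache-hostile part:)

    #!/usr/bin/env python
    # Hypothetical store-URL rewrite helper for Squid 2.7.  squid.conf
    # would point at it with:  storeurl_rewrite_program /path/to/this
    import sys
    from urlparse import urlparse, parse_qs   # Python 2, as in 2009

    def canonical(url):
        parts = urlparse(url)
        if 'videoplayback' in parts.path:
            vid = parse_qs(parts.query).get('id', [''])[0]
            if vid:
                # Collapse per-server, per-session URLs for the same
                # video into a single internal cache key.
                return 'http://youtube-video.internal/id=%s' % vid
        return url   # no rewrite: store under the original URL

    # Squid feeds one request per line: "URL client/fqdn ident method"
    for line in sys.stdin:
        fields = line.split()
        if not fields:
            continue
        sys.stdout.write(canonical(fields[0]) + '\n')
        sys.stdout.flush()   # helper replies must be unbuffered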

There's no reason to think this wouldn't also be the case in Europe
and North America for forward caches put in exactly the right spot
to see exactly the right number of people.

I tried talking to Google about this. Those I spoke to went from
enthusiastic one month to "sorry, been told this won't happen!"
the next month. Which is sad, really; the people who keep coming
to me asking about caching all those things you funny CDNs are
pushing out are the ones on things like satellite links, or in
Eastern Europe / South America, where the -infrastructure- is
still lacking. They're the ones blocking Facebook, YouTube, etc.,
because of the amount of bandwidth used by just those sites. :-)

Adrian

(And I know about the various generations of Google content boxes out
there, and have heard stories from people who have trialled or are
trialling them. That's great if you're a service provider, and sucks
if you're not well connected to a service provider. Like, say, schools
in Australia trying to run a class with 30-40-odd computers hitting
Google Maps at once. tsk.)

Define "cached".

For instance, most of the video today (which apparently had 12 zeros
in the bits per second number) was "cached", if you ask the CDNs
serving it.

Sounds to me like that is significant, no matter how big your network
is.

If, for example, Google's current generation of YouTube content serving
wasn't 100% uncachable by design, Squid caches would probably be
saving a stupid amount of bandwidth for those of you who are using it.

People rolling Squid + 'magic adrian rules to rewrite Youtube URLs
so they don't suck' report upwards of 80% byte hit rates on -just-
the Youtube content, because people view the same bloody popular
videos over and over again. Thats 80% of a couple hundred megabits
for a couple groups in Brazil, and that translates to mega dollars
to them.

There's no reason to doubt this wouldn't be the case even in Europe
and North America for forward caches put in exactly the right spot
to see exactly the right number of people.

I tried talking to Google about this. Those I spoke to went from
enthusiastic one month to "sorry, been told this won't happen!"
the next month. Which is sad really; the people who keep coming
to me and asking about caching all those things you funny CDNs are
pushing out are those who are on things like satellite links, or
in eastern europe / south america, where the -infrastructure-
is still lacking. They're the ones blocking facebook, youtube,
etc, because of the amount of bandwidth used by just those sites. :slight_smile:

I do not work for GOOG or YouTube, and I do not know why they do what they do. However, it is trivial to think up perfectly valid reasons for Google to intentionally break caches on YouTube content (e.g. paid advertising per download).

Doesn't matter if you have small links or no infrastructure or whatever. Google has every right, moral & legal, to serve content as they please. They are providing the content for free, but they want to do it on their own terms. Seems perfectly reasonable to me. Do you disagree?

Sure, the situation sux, but life is not fair.

As for CDNs, most do not do anything to the content they serve. A content provider makes the content and hands it to the CDN, which serves it; the CDN does not own, create, or modify the content. (There might be edge cases, but we are talking generalities here.) If you see "funny" stuff, talk to the content owner, not the CDN.

> (And I know about the various generations of Google content boxes out
> there ... sucks if you're not well connected to a service provider.
> Like, say, schools in Australia trying to run a class with 30-40-odd
> computers hitting Google Maps at once.)

Google is not the only company that will put caches into any provider - or school (which is really just a special-case provider) - with enough traffic. A school with 30 machines probably would not qualify. This is not being mean, this is just being rational. No way those 30 machines save the company enough money to pay for the caches.

Again, sux, but that's life. I'd love to hear your solution - besides writing "magic" into Squid to intentionally break or alter (some would use much harsher language) content you do not own. Content others are providing for free.

Finding ways to force object revalidation by an intermediary cache (so
the origin server knows something has been fetched), and thus allowing
the cache to serve the content on behalf of the content originator,
under their full control, but without the bits being served from the
origin.
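
(In HTTP terms, one way that could look; a sketch, with illustrative
header values:)

    Origin response headers (object cacheable, but never served without
    a check):

        Cache-Control: public, max-age=0, must-revalidate
        ETag: "v1-abc123"

    On every later request the cache revalidates, so the origin gets its
    per-fetch accounting but ships no body:

        GET /video/12345 HTTP/1.1
        If-None-Match: "v1-abc123"

        HTTP/1.1 304 Not Modified

    The cache then serves the stored bytes locally.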

I'm happy to work with content providers if they'd like to point out
which bits of HTTP design and implementation fail them (e.g., issues
surrounding variant object caching and invalidation/revalidation) and
get them fixed in a public manner in Squid, so it -can- be deployed
by people to save on bandwidth in places where it still matters.

Adrian

Patrick W. Gilmore wrote:

> Doesn't matter if you have small links or no infrastructure or
> whatever. Google has every right, moral & legal, to serve content as
> they please. ... Seems perfectly reasonable to me. Do you disagree?

This brings me back to the peering problem: if network A's customer sends network B's server a small packet, and network B's server sends back a video, why should network A be forced to pay the lion's share of the bandwidth costs to deliver network B's video (and ads) to the viewer? Networks which send large amounts of content should do their best to reduce the bandwidth load on end-user networks whenever and wherever possible: by hot-potato routing, by allowing the content to be cached, etc.

They can't do otherwise and also claim they "do no harm".

Adrian, what did your contacts at Google say when you asked them how this policy was consistent with their Do No Harm motto? If you didn't ask, I suggest you go ask!

jc

> policy was consistent with their Do No Harm motto?

Google's motto is "Don't be evil", not "Do No Harm".

Excellent idea. It is a shame content owners do not see the utility in your idea.

To bring this back to an operational topic, just because a content owner does not want to work with someone on this, does the lack of external bandwidth / infrastructure / whatever make it "OK" to install a proxy which will intentionally re-write the content?

This doesn't provide feedback to the content distributors on partial downloads, etc. - which is useful information to content providers if you're into data-mining end-user browsing habits. In the specific case of YouTube, of course, I don't know that they do this, but I'd be surprised if they didn't.

Nick

> To bring this back to an operational topic, just because a content
> owner does not want to work with someone on this, does the lack of
> external bandwidth / infrastructure / whatever make it "OK" to install
> a proxy which will intentionally re-write the content?

This really boils down to "who is more important? The content, or the
content's eyeballs?"

(Or the people having to deliver said content to said eyeballs, who
aren't being paid by the content deliverer on their behalf.)

Adrian