Comcast blocking p2p uploads

Leo Bicknell wrote:
> I'm a bit confused by your statement. Are you saying it's more
> cost effective for ISP's to carry downloads thousands of miles
> across the US before giving them to the end user than it is to allow
> a local end user to "upload" them to other local end users?
  
Not to speak on Joe's behalf, but whether the content comes from
elsewhere on the Internet or from within the ISP's own network, the issue
is the same: the limitations of the transmission medium between the cable
modem and the CMTS/head-end. The problem cable companies have with P2P is
that, compared to an HTTP or FTP fetch of the same content, it uses more
network resources, particularly in the upstream direction, where
contention is a much bigger issue. On DOCSIS 1.x systems like Comcast's
plant, there's a limit of roughly 10 Mbps of capacity per upstream
channel. Get enough users on 384-768k upstream tiers all running P2P
apps and you're going to start having problems in a big hurry. It's to
relieve some of that strain on the upstream channels that Comcast has
started deploying Sandvine to close *outbound* connections from P2P apps.
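
(A quick back-of-the-envelope sketch of that contention, in Python; the
~10 Mbps channel and the 384/768k tiers are the figures above, the rest
is my own illustration:)

# Rough upstream contention math for one DOCSIS 1.x channel.
UPSTREAM_CHANNEL_KBPS = 10_000   # ~10 Mbps of upstream capacity
TIERS_KBPS = (384, 768)          # common upstream tiers

for tier in TIERS_KBPS:
    full_rate_uploaders = UPSTREAM_CHANNEL_KBPS // tier
    print(f"{tier} kbps tier: ~{full_rate_uploaders} subscribers "
          f"uploading flat-out fill the channel")

# i.e. ~26 subscribers at 384 kbps, or ~13 at 768 kbps, saturate a
# channel that is shared by far more modems than that.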

That's part of it, certainly. The other problem is that I really doubt
there's as much favoritism towards "local" clients as Leo seems to
believe. Without that, you're also looking at a transport issue as you
shove packets around, probably in ways that the network designers did
not anticipate.

Years ago, working with web caching services, it was found that there was
a benefit, albeit a limited one, to setting up caching proxies within a
major regional ISP's network. The theoretical benefit was to reduce the
need for internal backbone and external transit connectivity, while
improving user experience.

The interesting thing is that it wasn't really practical to cache on a
per-POP basis, so it was necessary to place the caches at strategic
locations within the network. This meant you wouldn't expect to see
bandwidth savings on the internal backbone from the POP to the
aggregation point.

The next interesting point is that you could actually improve the cache
hit rate by combining the caches at each aggregation point; the larger
userbase meant that any given bit of content out on the Internet was
more likely to be in cache. However, this could stress the network in
unexpected ways, as significant cache-site-to-cache-site data flows
appeared that network engineering hadn't always anticipated.
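
(As an illustration of that hit-rate effect, here's a tiny Zipf/LRU
simulation of my own; none of these numbers come from the actual
deployment:)

# Toy illustration of why combining POP caches at an aggregation point
# raises the hit rate: popular objects stop being cached twice, so the
# same total capacity covers more distinct content.
import random
from collections import OrderedDict

N_OBJECTS = 50_000            # distinct cacheable objects "out there"
REQUESTS_PER_POP = 100_000    # requests from each POP's user population
SLOTS_PER_POP_CACHE = 2_000   # capacity of each per-POP cache

# Zipf-ish popularity: the rank-1 object is requested most often.
weights = [1.0 / rank for rank in range(1, N_OBJECTS + 1)]

def request_stream(n, seed):
    return random.Random(seed).choices(range(N_OBJECTS),
                                       weights=weights, k=n)

def lru_hit_rate(stream, slots):
    """Replay a request stream through a simple LRU cache."""
    cache, hits = OrderedDict(), 0
    for obj in stream:
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)
        else:
            cache[obj] = True
            if len(cache) > slots:
                cache.popitem(last=False)   # evict least recently used
    return hits / len(stream)

pop_a = request_stream(REQUESTS_PER_POP, seed=1)
pop_b = request_stream(REQUESTS_PER_POP, seed=2)

# Two separate POP caches vs. one shared cache of the same total
# capacity fed the interleaved request streams from both POPs.
separate = (lru_hit_rate(pop_a, SLOTS_PER_POP_CACHE)
            + lru_hit_rate(pop_b, SLOTS_PER_POP_CACHE)) / 2
merged = [o for pair in zip(pop_a, pop_b) for o in pair]
combined = lru_hit_rate(merged, 2 * SLOTS_PER_POP_CACHE)

print(f"average hit rate, separate POP caches: {separate:.1%}")
print(f"hit rate, combined aggregation cache:  {combined:.1%}")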

A third interesting thing was noted. The Internet grows very fast.
While there's always someone visiting www.cnn.com, as the number of other
sites grew, there was a slow reduction in the overall cache hit rate over
the years as users tended towards more diverse web sites. This is the
result of the ever-growing quantity of information out there on the
Internet.

This doesn't map exactly to the current model with P2P, yet I suspect it
has a number of loose parallels.

Now, I have to believe that it's possible for a few BitTorrent users in
the same city to download the same Linux ISO. For that ISO, and for
any other spectacularly popular download, yes, I would imagine there are
some minor bandwidth savings. However, with 10M down and 384k up,
even if you have 10 other users in the city all sending at their full
384k to someone new, that's nowhere near full line speed, so the client
will still pull additional capacity from elsewhere to reach that full 10M.
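
(Working that arithmetic out, just to put numbers on it:)

# Ten fully cooperative local peers vs. one 10M downstream.
DOWNSTREAM_KBPS = 10_000      # the new user's 10M downstream
LOCAL_PEERS = 10              # on-net peers seeding the same ISO
PEER_UPSTREAM_KBPS = 384      # each local peer's full upstream rate

local_supply = LOCAL_PEERS * PEER_UPSTREAM_KBPS     # 3,840 kbps
shortfall = DOWNSTREAM_KBPS - local_supply          # 6,160 kbps
print(f"local peers supply ~{local_supply / 1000:.2f} Mbps; "
      f"~{shortfall / 1000:.2f} Mbps still comes from off-net peers")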

I've always seen P2P protocols as behaving in an opportunistic manner.
They're looking for who has some free upload capacity and the desired
object. I'm positive that a P2P application can tell that a user in
New York is closer to me (in Milwaukee) than a user in China, but I'd
quite frankly be shocked if it could do a reasonable job of
differentiating between a user in Chicago, Waukesha (a few miles away),
or Milwaukee.
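
(To put rough numbers on that, a small sketch with assumed RTTs, since I
haven't measured any of these:)

# Why RTT-based peer selection can tell continents apart but not nearby
# cities: assumed (not measured) round-trip times from a cable user in
# Milwaukee. If the RTT gap between two candidates is smaller than the
# last-mile jitter, the client can't reliably rank them.
ASSUMED_RTT_MS = {"Milwaukee": 2, "Waukesha": 3, "Chicago": 7,
                  "New York": 30, "China": 200}
JITTER_MS = 20   # assumed queueing delay/jitter on a busy cable upstream

baseline = min(ASSUMED_RTT_MS.values())
for place, rtt in ASSUMED_RTT_MS.items():
    gap = rtt - baseline
    verdict = "distinguishable" if gap > JITTER_MS else "indistinguishable"
    print(f"{place:9s} gap over closest peer: {gap:3d} ms -> {verdict}")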

In the end, it may actually be easier for an ISP to deal with the
deterministic behaviour of having data from "me" go out the local
upstream transit pipe than to have my data sourced from a bunch of
random "nearby" on-net sources.

I certainly think that P2P could be a PITA for network engineering.
I simultaneously think that P2P is a fantastic technology from a showing-
off-the-idea-behind-the-Internet viewpoint, and that in the end, the
Internet will need to be able to handle more applications like this, as
we see things like videophones etc. pop up.

... JG

> A third interesting thing was noted. The Internet grows very fast.
> While there's always someone visiting www.cnn.com, as the number of other
> sites grew, there was a slow reduction in the overall cache hit rate over
> the years as users tended towards more diverse web sites. This is the
> result of the ever-growing quantity of information out there on the
> Internet.

Then the content became very large and very static, and site owners now
try very hard to maximise their data flows rather than making it easier
for people to cache it locally.

That might work in America and Europe. Developing nations, where transit
is far more expensive, hate it.

> I certainly think that P2P could be a PITA for network engineering.
> I simultaneously think that P2P is a fantastic technology from a showing-
> off-the-idea-behind-the-Internet viewpoint, and that in the end, the
> Internet will need to be able to handle more applications like this, as
> we see things like videophones etc. pop up.

P2P doesn't have to be a pain in the ass for network engineers. It just
means you have to re-think how you deliver data to your customers.
QoS was a similar headache, and people adapted.

(QoS on cable networks? Not possible! Anyone remember that?)

Adrian