Comcast blocking p2p uploads

In a message written on Fri, Oct 19, 2007 at 03:21:09PM -0400, Joe Provo wrote:
> Content is irrelevant. BT is a protocol-person's dream and an ISP
> nightmare. The bulk of the slim profit margin exists in taking
> advantage of stat-mux oversubscription. BT blows that out of the
> water.

I'm a bit confused by your statement. Are you saying it's more
cost-effective for ISPs to carry downloads thousands of miles
across the US before giving them to the end user than it is to allow
a local end user to "upload" them to other local end users?

It's quite possible that I've completely missed it, but I hadn't seen many
examples of P2P protocols where any effort was made to locate "local"
users and prefer them. In some cases, this may happen due to the type of
content, but I'd guess it to be rare. Am I missing some new development?

If it isn't being transferred locally, then the ISP is being stuck with
the pain of carrying a download thousands of miles, probably from a
peering (or worse, transit) with another ISP that has also had to carry
it some distance.

... JG

In a message written on Sat, Oct 20, 2007 at 07:12:35PM -0500, Joe Greco wrote:

> In a message written on Fri, Oct 19, 2007 at 03:21:09PM -0400, Joe Provo wrote:
> > Content is irrelevant. BT is a protocol-person's dream and an ISP
> > nightmare. The bulk of the slim profit margin exists in taking
> > advantage of stat-mux oversubscription. BT blows that out of the
> > water.
>
> I'm a bit confused by your statement. Are you saying it's more
> cost-effective for ISPs to carry downloads thousands of miles
> across the US before giving them to the end user than it is to allow
> a local end user to "upload" them to other local end users?

> It's quite possible that I've completely missed it, but I hadn't seen many
> examples of P2P protocols where any effort was made to locate "local"
> users and prefer them. In some cases, this may happen due to the type of
> content, but I'd guess it to be rare. Am I missing some new development?

Most P2P clients favor the "faster" sources. "Faster" is some
combination of lower latency and/or higher bandwidth. This tends
to favor local clients, but the ranking can quickly be skewed by
other factors.
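
To make that concrete, here is a minimal sketch of that kind of
"prefer the fastest sources" selection. The Peer fields and the
scoring formula are illustrative assumptions, not any particular
client's actual algorithm:

    # Rank peers by observed rate, penalized by round-trip time.
    # Fields and weights are illustrative, not any real client's logic.
    from dataclasses import dataclass

    @dataclass
    class Peer:
        addr: str
        rtt_ms: float      # measured round-trip time
        rate_kbps: float   # observed download rate from this peer

    def score(p: Peer) -> float:
        # Low RTT usually means "nearby", so this indirectly favors
        # local peers -- until a distant seeder with a fat pipe
        # out-delivers them and skews the ranking.
        return p.rate_kbps / (1.0 + p.rtt_ms)

    def pick_sources(peers: list[Peer], n: int = 4) -> list[Peer]:
        return sorted(peers, key=score, reverse=True)[:n]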

> If it isn't being transferred locally, then the ISP is being stuck with
> the pain of carrying a download thousands of miles, probably from a
> peering (or worse, transit) with another ISP that has also had to carry
> it some distance.

But back to the original premise. Say Linux is being distributed
both from a central web site and via P2P:

1) Central web site. All but the one ISP hosting the web site will
   have the traffic arriving over peering or, worse, transit, and will
   often be carrying it thousands of miles from the central point.

2) P2P. There's a good chance at least some seeders will be on the
   same network, avoiding peering and transit for some fraction of
   the traffic, and a good chance the seeders are closer to the user
   than the web site, perhaps even on the same cable segment (a rough
   locality test is sketched below).
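
A rough locality test of the sort item 2 implies, in the same sketch
style; the prefix is an assumed example, since a real client has no
reliable way to learn the ISP's topology:

    # Treat peers inside an assumed "local" prefix as same-network.
    import ipaddress

    MY_PREFIX = ipaddress.ip_network("203.0.113.0/24")  # assumed, for illustration

    def is_local(peer_ip: str) -> bool:
        return ipaddress.ip_address(peer_ip) in MY_PREFIX

    print(is_local("203.0.113.42"))   # True: stays on-net, no transit
    print(is_local("198.51.100.7"))   # False: crosses peering/transit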

I think the more interesting thing here is the overall rate limit.
Let's compare a central web site with a 1Gbps connection serving
10,000 downloaders against a P2P model with 10,000 downloaders,
5,000 of which are willing to serve content (obviously starting
with 1-5 seeders and slowly growing as people download it).

Even if providers only offer 1Mbps of upload, those 5,000 content
providers can put an aggregate 5Gbps into the network, whereas the
central server can only put an aggregate 1Gbps into the network.

So, while the bit*mile cost may be lower in the P2P case, the peak
bit rate is higher (which users like: faster downloads); and since
ISPs are forced to size their networks for peak rate to ensure user
satisfaction, the "cost" of P2P is higher, even though the bit*mile
cost is lower.
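
Run through as arithmetic (the numbers are the ones above):

    # Per-downloader throughput: central 1Gbps server vs. the swarm.
    downloaders = 10_000
    server_bps = 1_000_000_000       # 1Gbps central web site
    seeders = 5_000
    seeder_up_bps = 1_000_000        # 1Mbps upload each

    central_per_user = server_bps / downloaders     # 100 kbps each
    swarm_aggregate = seeders * seeder_up_bps       # 5 Gbps total
    swarm_per_user = swarm_aggregate / downloaders  # 500 kbps each

    print(f"central: {central_per_user / 1e3:.0f} kbps per user")
    print(f"swarm:   {swarm_per_user / 1e3:.0f} kbps per user, "
          f"{swarm_aggregate / 1e9:.0f} Gbps aggregate")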

I think. At least, that's my guess from Joe's statement; I'd like
him to elaborate.

I remember from the early days that ISPs meant the web to be
just another kind of TV, with you as a consumer and them as
the provider.

They were happy to NAT and to change your IPv4 address so
you could not run servers for FTP or HTTP. Some of them
even handed out RFC 1918 addresses ...

They were worried about VoIP and tried to stop it.

They are worried about everything new.

They think all users are children and try to block everything
that is not meant for the kindergarten.

UUCP is still there. With telephone flat rates I guess some
people have already built their own little internets.

A 14.4 modem can be as fast as 57.2 with Big Brother listening
at both ends. You need only half the hardware, because you never
heard of CALEA, and it is the phone company's problem in the
first place.

I remember companies I worked for that first moved from Netware
to TCP/IP and then even started interconnecting. Only when
universities started connecting to us did we see the Internet,
and of course we had to renumber. I remember how our /etc/hosts
was suddenly growing - no, we did not know DNS, but some of
us used IEN116 clients and servers.

Ok - not all of them - only those who see all the money and
don't know how to provide.

The other side of the coin: a lot of people connected to us.
We never asked their names. They connected on weekends or
late at night. They rarely did big downloads, mostly UUCP
emails.

And software was free.

Enough ranting.
Cheers
Peter and Karin

In the UK at least, option 1) is financially more favourable for ISPs, since the data flow is
        vendor -> transit -> last mile -> end user,
rather than
        end user -> last mile -> last mile -> end user.

The last mile is where all the costs are.

Andy

Of course, bitstream and L2TP backhaul lend more complexity to the
whole thing; the efficiency-maximising behaviour for clients is
exactly the opposite, as p2p traffic between local peers gets both a)
tromboned up to the ISP's PoP and back down again, and b) charged for
per bit by BT/whoever. In fact, you want p2p content to come in from
the 'net, because it only transits BT's wires once...
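
A toy tally of how many times each flow crosses the wholesale
backhaul makes the incentive plain; the per-leg charge is an
arbitrary assumed unit, not real BT pricing:

    # Backhaul legs per byte under the three flows discussed above.
    COST_PER_LEG = 1.0  # assumed per-bit wholesale charge, arbitrary units

    flows = {
        "vendor -> transit -> last mile -> user": 1,  # one backhaul leg
        "local peer, tromboned via the ISP PoP":  2,  # up and back down
        "local peer, direct (no backhaul)":       0,  # the unreachable ideal
    }

    for flow, legs in flows.items():
        print(f"{flow}: {legs} leg(s), cost {legs * COST_PER_LEG}")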

I can't think of an obvious way for a p2p client to detect this.

> I can't think of an obvious way for a p2p client to detect this.

Work through middleboxes installed in the ISP's network and configured
by the ISP.
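
A purely hypothetical sketch of that: the client hands the ISP's box
a candidate list and gets back a cost-ranked ordering. The oracle URL
and response format are invented for illustration; this is roughly
the shape of the P4P work mentioned later in the thread:

    # Ask an assumed ISP-operated oracle to rank candidate peers by
    # network cost. Endpoint and JSON schema are hypothetical.
    import json
    import urllib.request

    ORACLE_URL = "http://oracle.example-isp.net/rank"  # hypothetical

    def rank_peers(candidates: list[str]) -> list[str]:
        req = urllib.request.Request(
            ORACLE_URL,
            data=json.dumps({"peers": candidates}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            # Assume the oracle returns peers ordered cheapest-first.
            return json.loads(resp.read())["ranked"]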

--Michael Dillon

Good idea, but there's a trust issue. If I were Comcast I might
configure the box to lie about our backhaul network in order to spork
the p2pers.

... as compared to what's going on now?

Hm, Azureus seems to have deprecated their JPC support due to "lack of
ISP support." Did anyone try the commercial JPC stuff out? What did you
think? Does anyone still have a copy lying about?

Adrian

More Comcast blocking: http://kkanarski.blogspot.com/2007/09/comcast-filtering-lotus-notes-update.html

Actually, Pando does try to localize traffic as much as possible. Not only that, we have also started the P4P Working Group.

Keith

Pando Networks

See links below.


Forgot the links =P

http://www.wired.com/software/webservices/news/2007/08/p2p

http://www.dcia.info/documents/P4P_Overview.pdf
