Can P2P applications learn to play fair on networks?

If it's not the content, why are network engineers at many university,
enterprise, and public networks concerned about the impact particular
P2P protocols have on network operations? If it were just a single
network, maybe that operator is evil. But when many different networks
all start responding, then maybe something else is the problem.

Uhm, what about civil liability? It's not necessarily a technical issue
that motivates them, I think.

If it was civil liability, why are they responding to the protocol being
used instead of the content?

So is Sun RPC. I don't think the original implementation performs
exponential back-off.

If lots of people were still using Sun RPC, causing other subscribers to complain, then I suspect you would see similar attempts to throttle it.

If there is a technical reason, it's mostly that the network as deployed
is not sufficient to meet user demands. Instead of providing more
resources, lack of funds may force some operators to discriminate
against certain traffic classes. In such a scenario, it doesn't even
matter much that the targeted traffic class transports content of
questionable legality. It's more important that the measures applied
to it have actual impact (Amdahl's law dictates that you target popular
traffic), and that you can get away with it (this is where the legality
comes into play).

Sandvine, Packeteer, etc. boxes aren't cheap either. The problem is that giving
P2P more resources just means P2P consumes more resources; it doesn't solve the problem of sharing those resources with other users. Increasing network resources only makes sense if P2P shares them well with other applications.

If your network cannot handle the traffic, don't offer the services.

It all boils down to the fact that the only thing end users really have to give us as ISPs is their source address (which we usually assign to them) and the destination address of the packet they want transported; we can also implicitly look at the size of the packet. That's the ONLY thing they have to give us. Forget looking at L4 or the like: that will be encrypted as soon as ISPs start to discriminate on it. Users have enough computing power available to encrypt everything.
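
To make that concrete, here is a minimal sketch (in Python, with a
made-up record format and prefix list) of per-subscriber accounting
that relies only on those three things: source address, destination
address, and packet size.

    # Hypothetical sketch: per-subscriber byte accounting that uses only the
    # header fields an ISP can always see -- source address, destination
    # address, and packet size. Record format and prefixes are illustrative.
    from collections import defaultdict

    def account(records, subscriber_prefixes):
        """Sum bytes sent per subscriber address; never looks at L4 or payload."""
        usage = defaultdict(int)
        for src_ip, dst_ip, nbytes in records:
            if any(src_ip.startswith(p) for p in subscriber_prefixes):
                usage[src_ip] += nbytes
        return usage

    records = [("10.0.0.5", "198.51.100.7", 1500),
               ("10.0.0.5", "203.0.113.9", 1500),
               ("10.0.0.8", "198.51.100.7", 400)]
    print(dict(account(records, ["10.0."])))  # {'10.0.0.5': 3000, '10.0.0.8': 400}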

So any device that looks inside packets to decide what to do with them is going to fail in the long run and is thus a stop-gap measure before you can figure out anything better.

The next step for these devices is to do statistical analysis of traffic to find patterns, such as "you're sending traffic to hundreds of different IPs simultaneously, you must be filesharing" or the like. A lot of the box manufacturers are already looking into this. So, trench warfare again; I can see countermeasures to this as well.
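
A rough sketch of what that kind of behavioural detection looks like;
the threshold, window and flow-record format here are illustrative
assumptions, not what any particular vendor box actually does.

    # Illustrative "fan-out" heuristic: flag any source that talks to a large
    # number of distinct destinations within one time window. Threshold and
    # input format are made up for illustration only.
    from collections import defaultdict

    def fanout_suspects(flows, threshold=200):
        """flows: iterable of (src_ip, dst_ip) pairs seen in one window."""
        peers = defaultdict(set)
        for src, dst in flows:
            peers[src].add(dst)
        return {src for src, dsts in peers.items() if len(dsts) >= threshold}

And the equally obvious countermeasures (tunnelling everything through a
handful of relays, pacing connection setup) are exactly why this stays a
trench war.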

The long-term solution is of course to make sure that you can handle the traffic that the customer wants to send (because that's what they can control), perhaps by charging for it under some scheme that doesn't involve a flat fee.

Saying "p2p doesn't play nice with the rest of the network" and blaming p2p, only means you're congesting due to insufficient resources, and the fact that p2p uses a lot of simultaneous TCP sessions and individually they're playing nice, but together they're not when compared to web surfing.

The solution is not to try to change p2p, the solution is to fix the network or the business model so your network is not congesting.

So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application
protocols are badly behaved?

A better idea might be for the application protocol designers to improve those particular applications. In the meantime, universities, enterprises and ISPs have a lot of other users to serve.

So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?

They should stop offering flat-rate ones anyway. Or do general per-user rate-limiting that is protocol/application agnostic.

There are many ways to solve the problem generally instead of per application, ways that will also work 10 years from now when the next couple of killer apps have arrived and passed away again.
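
As an illustration of how simple a protocol-agnostic approach can be,
here is a minimal per-user token-bucket sketch keyed only on the
subscriber's address. The rate and burst values are placeholders, and a
real deployment would live in the forwarding path rather than in Python.

    # Minimal sketch of a protocol-agnostic per-user token bucket: every
    # subscriber gets the same rate regardless of which application the
    # packets belong to. Parameters are illustrative.
    import time
    from collections import defaultdict

    class Bucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0   # refill rate in bytes per second
            self.burst = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            self.tokens = min(self.burst,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False                 # drop or queue the packet

    buckets = defaultdict(lambda: Bucket(rate_bps=1_000_000, burst_bytes=64_000))

    def forward(src_ip, nbytes):
        """Admit the packet only if the subscriber's bucket has tokens left."""
        return buckets[src_ip].allow(nbytes)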

A better idea might be for the application protocol designers to improve those particular applications.

Good luck with that.

So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?

They should stop offering flat-rate ones anyway.

Comcast's management has publicly stated that anyone who doesn't like the network management controls on its flat-rate service can upgrade to Comcast's business-class service.

Problem solved?

Or would some P2P folks complain about having to pay more money?

Or do general per-user rate-limiting that is protocol/application agnostic.

As I mentioned previously regarding the issues with additional in-line devices in networks, imposing per-user network management and billing is a much more complicated task.

If only a few protocol/applications are causing a problem, why do you need an overly complex response? Why not target the few things that are causing problems?

A better idea might be for the application protocol designers to improve those particular applications.

Good luck with that.

It took a while, but it worked with the UDP audio/video protocol folks who used to stress networks. Eventually those protocol designers learned to control their applications and make them play nicely on the network.

Mikael Abrahamsson wrote:

If your network cannot handle the traffic, don't offer the services.

In network access for the masses, downstream bandwidth has always been easier to deliver than upstream. It's been that way since modem manufacturers found they could leverage a single digital/analog conversion in the PSTN to deliver 56 kbps downstream data rates over phone lines. This is still true today in nearly every residential access technology: DSL, cable, wireless (mobile 3G / EVDO), and satellite all have asymmetrical upstream/downstream data rates, with downstream being favored in some cases by a ratio of 20:1. Of that group, only DSL doesn't have a common upstream bottleneck between the subscriber and head-end. For each of the other broadband technologies, the overall user experience will continue to diminish as the number of subscribers saturating their upstream network path grows.
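
To put rough numbers on why the shared upstream is the weak point (all
of these figures are hypothetical, not any particular plant):

    # Hypothetical arithmetic: a shared upstream channel divided among
    # subscribers saturates once a handful of them upload continuously.
    upstream_mbps = 10.0   # assumed shared upstream capacity on one node
    subscribers = 200      # assumed subscribers sharing that channel
    seeders = 10           # subscribers uploading around the clock

    print("fair share if all burst:", upstream_mbps / subscribers, "Mbps")  # 0.05
    print("each 24x7 seeder can take:", upstream_mbps / seeders, "Mbps")    # 1.0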

Transmission technology issues aside, how do you create enough network capacity for a technology that is designed to use every last bit of transport capacity available? P2P more closely resembles denial of service traffic patterns than "standard" Internet traffic.

The long-term solution is of course to make sure that you can handle the traffic that the customer wants to send (because that's what they can control), perhaps by charging for it under some scheme that doesn't involve a flat fee.

I agree with the differential billing proposal. There are definitely two sides to the coin when it comes to the Internet access available to most of the US: on one side, open and unrestricted access allows for the growth of new ideas and services no matter how unrealistic (i.e., unicast IP TV for the masses), but on the other side it sets up a "tragedy of the commons" situation where there is no incentive _not_ to abuse the "unlimited" network resources. Even as insanely cheap as web hosting has become, people are still electing to use P2P for content distribution over $4/mo hosting accounts because it's "cheaper"; the higher network costs of P2P distribution go ignored because the end user never sees them. The problem in converting to a usage-based billing system is that there's a huge potential to simultaneously lose both market share and public perception of your brand. I'm sure every broadband provider would love to go to a system of usage-based billing, but none of them wants to be the first.

-Eric

* Sean Donelan:

If your network cannot handle the traffic, don't offer the services.

So your recommendation is that universities, enterprises and ISPs
simply stop offering all Internet service because a few particular
application protocols are badly behaved?

I think a lot of companies implement OOB controls to curb P2P traffic,
and those controls remain in place even without congestion on the
network. It's like making sure that nobody uses the company plotter to
print posters.

In my experience, a permanently congested network isn't fun to work
with, even if most of the flows are long-living and TCP-compatible. The
lack of proper congestion control is kind of a red herring, IMHO.

Why do you think so many network operators of all types are implementing controls on that traffic?

http://www.azureuswiki.com/index.php/Bad_ISPs

It's not just the greedy commercial ISPs, it's also universities, non-profits, government, co-op, etc. networks. It doesn't seem to matter whether the network has 100 Mbps user connections or 128 Kbps user connections; they all seem to be having problems with these particular applications.

* Eric Spaeth:

Of that group, only DSL doesn't have a common upstream bottleneck
between the subscriber and head-end.

DSL has got that, too, but it's much more statically allocated and
oversubscription results in different symptoms.

If you've got a cable with 50 wire pairs, and you can run ADSL2+ at 16
Mbps downstream on one pair, you can't expect to get full 800 Mbps
across the whole cable, at least not with run-of-the-mill ADSL2+.
(Actual numbers may be different, but there's a significant problem with
interference when you get closer to theoretical channel limits.)

* Sean Donelan:

If it's not the content, why are network engineers at many university,
enterprise, and public networks concerned about the impact particular
P2P protocols have on network operations? If it were just a single
network, maybe that operator is evil. But when many different networks
all start responding, then maybe something else is the problem.

Uhm, what about civil liability? It's not necessarily a technical issue
that motivates them, I think.

If it was civil liability, why are they responding to the protocol being
used instead of the content?

Because the protocol is detectable, and correlates (read: is perceived
to correlate) well enough with the content?

If there is a technical reason, it's mostly that the network as deployed
is not sufficient to meet user demands. Instead of providing more
resources, lack of funds may force some operators to discriminate
against certain traffic classes. In such a scenario, it doesn't even
matter much that the targeted traffic class transports content of
questionable legality. It's more important that the measures applied
to it have actual impact (Amdahl's law dictates that you target popular
traffic), and that you can get away with it (this is where the legality
comes into play).

Sandvine, Packeteer, etc. boxes aren't cheap either.

But they try to make things better for end users. If your goal is to
save money, you'll use different products (even ngrep-with-tcpkill will
do in some cases).

The problem is that giving P2P more resources just means P2P consumes
more resources; it doesn't solve the problem of sharing those resources
with other users.

I don't see the problem. Obviously, there's demand for that kind of
traffic. ISPs should consider themselves lucky because they're selling
bandwidth; it's just more business for them.

I can see two different problems with resource sharing: You've got
congestion not in the access network, but in your core or on some
uplinks. This is just poor capacity planning. Tough luck, you need to
figure that one out or you'll have trouble staying in business (if you
strike the wrong balance, your network will cost much more to maintain
than what the competition pays for theirs, or it will be inadequate,
leading to poor service).

The other issue is ridiculously oversubscribed shared-media networks on
the last mile. This only works if there's a close-knit user community
that can police themselves. ISPs who are in this situation need to
figure out how they ended up there, especially if there isn't cut-throat
competition. In the end, it's probably a question of how you market
your products ("up to 25 Mbps of bandwidth" and stuff like that).

In my experience, a permanently congested network isn't fun to work
with, even if most of the flows are long-living and TCP-compatible. The
lack of proper congestion control is kind of a red herring, IMHO.

Why do you think so many network operators of all types are
implementing controls on that traffic?

Because their users demand more bandwidth from the network than is
actually available, and non-user-specific congestion occurs to a significant
degree. (Is there a better term for that? What I mean is that not just
the private link to the customer is saturated, but something that is not
under his or her direct control, so changing your own behavior doesn't
benefit you instantly; see self-policing above.) Selectively degrading
traffic means that you can still market your service as "unmetered
25 Mbps", instead of "unmetered 1 Mbps".

One reason for degrading P2P traffic I haven't mentioned so far: P2P
applications have got the nice benefit that they are inherently
asynchronous, so cutting the speed to a fraction doesn't fatally impact
users. (In that sense, there isn't strong user demand for additional
network capacity.) But guess what happens if there's finally more
demand for streamed high-entropy content: then you won't have much
choice; you'll need to build a network with the necessary capacity.

So your recommendation is that universities, enterprises and ISPs simply stop offering all Internet service because a few particular application protocols are badly behaved?

They should stop offering flat-rate ones anyway.

Comcast's management has publicly stated that anyone who doesn't like the network management controls on its flat-rate service can upgrade to Comcast's business-class service.

I have Comcast business service in my office and residential service at home. I use CentOS for some stuff, so I tried to pull a set of ISOs over BitTorrent. The first few came through OK; now I can't get BitTorrent to do much of anything. I made the files I obtained available for others, but noted the streams quickly stop.

This is on my office (business) service, served over cable. It's promised as 6 Mbps/768 Kbps and costs $100/month. I can (and will) solve this by just setting up a machine in my data center for the purpose of running BT and shaping the traffic so it only gets a couple of Mbps (then pulling the files over VPN to my office). But no, their business service is being stomped in the same fashion. So if they did say somewhere (and I haven't seen such a statement) that their business service is not affected by their efforts to squash BitTorrent, then it appears they're not being truthful.

Problem solved?

Or would some P2P folks complain about having to pay more money?

Or do general per-user rate-limiting that is protocol/application agnostic.

As I mentioned previously regarding the issues with additional in-line devices in networks, imposing per-user network management and billing is a much more complicated task.

If only a few protocol/applications are causing a problem, why do you need an overly complex response? Why not target the few things that are causing problems?

Ask the same question about the spam problem. We spend plenty of dollars and manpower to filter out an ever-increasing volume of noise. The actual traffic rate of desired email to and from our customers has not appreciably changed (typical emails per customer per day) in several years.

A better idea might be for the application protocol designers to improve those particular applications.

Good luck with that.

It took a while, but it worked with the UDP audio/video protocol folks who used to stress networks. Eventually those protocol designers learned to control their applications and make them play nicely on the network.

If BitTorrent and similar care to improve their image, they'll need to work with others to ensure they respect networks and don't flatten them. Otherwise, this will become yet another arms race (as if it hasn't already) between ISPs and questionable use.

I have Comcast residential service and I've been pulling down torrents
all weekend (Ubuntu v7.10, etc.), with no problems. I don't think that
Comcast is blocking torrent downloads, I think they are blocking a
zillion Comcast customers from serving torrents to the rest of the
world. It's a network operations thing... why should Comcast provide a
fat pipe for the rest of the world to benefit from? Just my $.02.

-Jim P.

I'm going to call bullshit here.

The problem is that the customers are using too much traffic for what is
provisioned. If those same customers were doing the same amount of traffic
via NNTP, HTTP or FTP downloads then you would still be seeing the same
problem and whining as much [1].

In this part of the world we learnt (the hard way) that your income has
to match your costs for bandwidth. A percentage [2] of your customers are
*always* going to move as much traffic as they can on a 24x7 basis.

If you are losing money or your network is not up to that then you are
doing something wrong, it is *your fault* for not building your network
and pricing it correctly. Napster was launched 8 years ago so you can't
claim this is a new thing.

So stop whinging about how BitTorrent broke your happy Internet. Stop
putting in traffic-shaping boxes that break TCP and then complaining
that p2p programmes don't follow the specs, and adjust your pricing and
service to match your costs.

[1] See "SSL and ISP traffic shaping?" at http://www.usenet.com/ssl.htm

[2] That percentage is always at least 10%. If you are launching a new
"flat rate, uncapped" service at a reasonable price it might be closer to
80%.

So which ISPs have contributed towards more intelligent p2p content
routing and distribution; stuff which'd play better with their networks?
Or are you all busy being purely reactive?

Surely one ISP out there has to have investigated ways that p2p could
co-exist with their network..

Adrian

Folks in New Zealand seem to also whine about data caps and "fair usage policies," so I doubt changing US pricing and service is going to stop the whining.

Those seem to discourage people from donating their bandwidth for P2P applications.

Are there really only two extremes, "don't use it" and "abuse it"? Will
P2P applications really never learn to play nicely on the network?

Can last-mile providers play nicely with their customers and not continue to offer “Unlimited” (but we really mean only as much as we say, but we’re not going to tell you the limit until you reach it) false advertising? It skews the playing field, as well as ticks off the customer. The P2P applications are already playing nicely. They’re only using the bandwidth that has been allocated to the customer.

-brandon

Here are some more specific questions:

Is some of the difficulty perhaps related to the seemingly unconstrained number of potential distribution points in systems of this type, along with 'fairness' issues in terms of the bandwidth each individual node consumes for upload purposes? And are there programmatic ways of altering this behavior in order to reduce the number, severity, and duration of 'hot spots' in the physical network topology?

Is there some mechanism by which these applications could potentially leverage some of the CDNs out there today? Have SPs who've deployed P2P-aware content-caching solutions on their own networks observed any benefits for this class of application?

Would it make sense for SPs to determine how many P2P 'heavy-hitters' they could afford to service in a given region of the topology and make a limited number of higher-cost accounts available to those willing to pay for the privilege of participating in these systems? Would moving heavy P2P users over to metered accounts help resolve some of the problems, assuming that even those metered accounts would have some QoS-type constraints in order to ensure they don't consume all available bandwidth?
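
One way to make the 'heavy-hitter' idea concrete, strictly as a sketch
(the record format, the region grouping and the top-N cutoff are all
assumptions):

    # Hypothetical sketch: rank subscribers in each region by upstream bytes
    # over a billing period; the top N are candidates for a metered or
    # premium account. Input format and cutoff are assumptions.
    from collections import defaultdict
    import heapq

    def heavy_hitters(usage_records, top_n=10):
        """usage_records: iterable of (subscriber_id, region, upstream_bytes)."""
        totals = defaultdict(lambda: defaultdict(int))
        for sub, region, nbytes in usage_records:
            totals[region][sub] += nbytes
        return {region: heapq.nlargest(top_n, subs.items(), key=lambda kv: kv[1])
                for region, subs in totals.items()}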

Nope. Not sure where you got that from. With P2P, it's others outside
the Comcast network that are oversaturating the Comcast customers'
bandwidth. It's basically an ebb-and-flow problem, 'cept there is more
of one than the other. ;)

Btw, is Comcast in NZ?

-Jim P.

[snip]

So which ISPs have contributed towards more intelligent p2p content
routing and distribution; stuff which'd play better with their networks?
Or are you all busy being purely reactive?

A quick google search found the one I spotted last time I was looking
around: "Welcome to Hurricane Electric! - HE FAQ".
...and last time I talked to any HE folks, they didn't get much uptake
for the service.

There are significant protocol behavior differences between BT and FTP.
Hint: downloads are not the problem.