* Sean Donelan:
If it's not the content, why are network engineers at many university
networks, enterprise networks, public networks concerned about the
impact particular P2P protocols have on network operations? If it was
just a single network, maybe they are evil. But when many different
networks all start responding, then maybe something else is the
problem.
Uhm, what about civil liability? It's not necessarily a technical issue
that motivates them, I think.
If it was civil liability, why are they responding to the protocol being
used instead of the content?
Because the protocol is detectable, and correlates (read: is perceived
to correlate) well enough with the content?
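Part of why operators key on the protocol is that classic P2P protocols are trivial to fingerprint on the wire. A minimal sketch (the BitTorrent handshake really does begin with these bytes; the classifier function itself is just an illustration, not any vendor's implementation):

```python
# Sketch: classic P2P protocols announce themselves with fixed
# signatures, so a middlebox can match the protocol without ever
# looking at the transferred content.

# The BitTorrent peer handshake starts with the byte 0x13 (length 19)
# followed by the ASCII string "BitTorrent protocol".
BT_SIGNATURE = b"\x13BitTorrent protocol"

def looks_like_bittorrent(payload: bytes) -> bool:
    """Return True if the first TCP payload bytes match the handshake."""
    return payload.startswith(BT_SIGNATURE)

# A handshake matches; an HTTP request does not:
print(looks_like_bittorrent(BT_SIGNATURE + b"\x00" * 8))  # True
print(looks_like_bittorrent(b"GET / HTTP/1.1\r\n"))       # False
```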
If there is a technical reason, it's mostly that the network as deployed
is not sufficient to meet user demands. Instead of providing more
resources, lack of funds may force some operators to discriminate
against certain traffic classes. In such a scenario, it doesn't even
matter much that the targeted traffic class transports content of
questionable legality. It's more important that the measures applied
to it have actual impact (Amdahl's law dictates that you target popular
traffic), and that you can get away with it (this is where the legality
comes into play).
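The "target popular traffic" point is plain arithmetic: throttling a class only reduces total load in proportion to that class's share. A quick sketch with made-up shares:

```python
# Back-of-the-envelope illustration of the Amdahl's-law point:
# the benefit of throttling one traffic class is bounded by its
# share of total traffic. (Shares and throttle factor are made up.)

def total_after_throttle(share: float, throttle: float) -> float:
    """Remaining load (fraction of original) after cutting one class.

    share    -- fraction of total traffic in the targeted class
    throttle -- fraction of that class's traffic that survives
    """
    return (1.0 - share) + share * throttle

# Halving a popular class (60% of traffic) saves 30% overall:
print(round(total_after_throttle(0.60, 0.5), 3))  # 0.7
# The same policy on a niche class (5% of traffic) saves only 2.5%:
print(round(total_after_throttle(0.05, 0.5), 3))  # 0.975
```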
Sandvine, Packeteer, etc. boxes aren't cheap either.
But they try to make things better for end users. If your goal is to
save money, you'll use different products (even ngrep-with-tcpkill will
do in some cases).
The problem is that giving P2P more resources just means P2P consumes
more resources; it doesn't solve the problem of sharing those resources
with other users.
I don't see the problem. Obviously, there's demand for that kind of
traffic. ISPs should consider themselves lucky: they're selling
bandwidth, so it's just more business for them.
I can see two different problems with resource sharing: You've got
congestion not in the access network, but in your core or on some
uplinks. This is just poor capacity planning. Tough luck, you need to
figure that one out or you'll have trouble staying in business (if you
strike the wrong balance, your network will cost much more to maintain
than what the competition pays for their own, or it will be inadequate,
leading to poor service).
The other issue is ridiculously oversubscribed shared media networks on
the last mile. This only works if there's a close-knit user community
that can police themselves. ISPs who are in this situation need to
figure out how they ended up there, especially if there isn't cut-throat
competition. In the end, it's probably a question of how you market
your products ("up to 25 Mbps of bandwidth" and stuff like that).
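The gap between "up to 25 Mbps" marketing and a shared last mile is easy to put in numbers (all figures below are hypothetical):

```python
# Illustration of the oversubscription point: on a shared segment,
# the advertised rate is only reachable while most subscribers are
# idle. (Uplink size, user count, and activity levels are made up.)

def worst_case_rate_mbps(uplink_mbps: float, subscribers: int,
                         active_fraction: float) -> float:
    """Per-user rate if the currently active users share the uplink evenly."""
    active = max(1, round(subscribers * active_fraction))
    return uplink_mbps / active

uplink = 1000.0   # 1 Gbps shared segment
users = 500       # each sold "up to 25 Mbps" (12.5:1 oversubscription)

# Off-peak (5% of users active), everyone exceeds the advertised rate:
print(worst_case_rate_mbps(uplink, users, 0.05))  # 40.0 Mbps
# At peak (80% active), each user sees a tenth of what was advertised:
print(worst_case_rate_mbps(uplink, users, 0.80))  # 2.5 Mbps
```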
In my experience, a permanently congested network isn't fun to work
with, even if most of the flows are long-lived and TCP-compatible. The
lack of proper congestion control is kind of a red herring, IMHO.
Why do you think so many network operators of all types are
implementing controls on that traffic?
Because their users demand more bandwidth from the network than is
actually available, and non-user-specific congestion occurs to a significant
degree. (Is there a better term for that? What I mean is that not just
the private link to the customer is saturated, but something that is not
under his or her direct control, so changing your own behavior doesn't
benefit you instantly; see self-policing above.) Selectively degrading
traffic means that you can still market your service as "unmetered
25 Mbps", instead of "unmetered 1 Mbps".
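One common mechanism behind "selectively degrading" a class is a per-class token bucket. A hedged sketch (rates and packet sizes are made up, and real shapers usually queue rather than drop; this just counts admissions):

```python
# Sketch of per-class rate limiting with a token bucket: the bulk
# class is capped while the headline rate stays untouched for the
# rest of the traffic.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # refill rate in bits/second
        self.capacity = burst_bits  # maximum burst size in bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now: float, packet_bits: float) -> bool:
        """Refill tokens for the elapsed time, then try to admit a packet."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False  # over the cap: drop (or delay) this packet

# Cap the bulk class at 1 Mbps; 12-kbit packets arrive every 5 ms
# (an offered load of 2.4 Mbps), simulated over 5 seconds:
bulk = TokenBucket(rate_bps=1e6, burst_bits=12_000)
sent = sum(bulk.allow(t * 0.005, 12_000) for t in range(1000))
print(sent)  # 334 of 1000 packets admitted -- about 0.8 Mbps
             # delivered; the tight burst cap wastes some refill
```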
One reason for degrading P2P traffic I haven't mentioned so far: P2P
applications have the nice property that they are inherently
asynchronous, so cutting the speed to a fraction doesn't fatally impact
users. (In that sense, there isn't strong user demand for additional
network capacity.) But guess what happens if there's finally more
demand for streamed high-entropy content. Then you won't have much
choice; you need to build a network with the necessary capacity.
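That asymmetry between bulk transfer and streaming is easy to quantify (file size, link rates, and video bitrate below are illustrative):

```python
# Why throttling hurts bulk P2P less than streaming: a download just
# takes longer, while a stream below its bitrate stalls outright.

def download_minutes(size_gb: float, rate_mbps: float) -> float:
    """Transfer time in minutes for size_gb gigabytes at rate_mbps."""
    return size_gb * 8 * 1000 / rate_mbps / 60

# A 4 GB download at full vs. throttled speed -- annoying, not fatal:
print(round(download_minutes(4, 25), 1))  # 21.3 minutes
print(round(download_minutes(4, 2), 1))   # 266.7 minutes (overnight)

# A stream has a hard floor instead: a 5 Mbps video over a link
# throttled to 2 Mbps cannot play at all, no matter how patient you are.
stream_bitrate, throttled_link = 5.0, 2.0
print(stream_bitrate <= throttled_link)   # False
```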