ISPs slowing P2P traffic...

http://www.dslreports.com/shownews/TenFold-Jump-In-Encrypted-BitTorrent-Traffic-89260
http://www.dslreports.com/shownews/Comcast-Traffic-Shaping-Impacts-Gnutella-Lotus-Notes-88673
http://www.dslreports.com/shownews/Verizon-Net-Neutrality-iOverblowni-73225

If I am being duped by some crazy fascists, please let me know.

However, my question is simple: for ISPs promising broadband service, isn't it simpler to just announce a bandwidth quota/cap that your "good" users won't hit and your bad ones will? This chasing of the lump under the rug (slowing encrypted traffic, then VPN traffic, and so on) seems like the exact opposite of progress to me: progressively nastier filters, impeding the very traffic your network was built to move.

Especially when there is no real reason this P2P traffic can't masquerade as something really interesting... like Email or Web (https, hello!) or SSH or gamer traffic. I personally expect a day when there is a torrent "encryption" module that converts everything to look like a plain-text email conversation or IRC or whatever.
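
(A toy sketch of how trivial that masquerading would be -- Python, with an invented channel name and framing, not any real client's module:)

    import base64

    # Toy sketch: wrap opaque payload bytes so that a naive classifier sees
    # what looks like plain-text IRC chatter. The framing and channel name
    # are invented for illustration.
    def irc_disguise(payload: bytes) -> bytes:
        encoded = base64.b64encode(payload).decode()
        lines = [f"PRIVMSG #casual-chat :{encoded[i:i + 400]}\r\n"
                 for i in range(0, len(encoded), 400)]   # IRC lines stay short
        return "".join(lines).encode()

    def irc_undisguise(wire: bytes) -> bytes:
        body = "".join(line.split(" :", 1)[1]
                       for line in wire.decode().splitlines() if " :" in line)
        return base64.b64decode(body)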

When you start slowing encrypted or VPN traffic, you set yourself up to interfere with all of the bread-and-butter applications (business, telecommuters, what have you).

I remember Bill Norton's peering forum regarding P2P traffic and how the majority of it is between cable and other broadband providers... Operationally, why not just lash a few additional 10GE cross-connects and let these *paying customers* communicate as they will?

All of these "traffic shaping" and "traffic prioritization" techniques make it look as though the providers who pushed for ubiquitous broadband because they liked the margins don't want to deal with a world where those users have figured out ways to use these amazing networks to do things... whatever they are. If they want to develop incremental revenue, they should do it by making clear what their caps/usage profiles are and moving ahead... or at least transparently share what shaping they are doing and when.

I don't see how operators could possibly debug connection/throughput problems when increasingly draconian methods are used to manage traffic flows with seemingly random behaviors. This seems a lot like the evil transparent caching we were concerned about years ago.

So, to keep this from turning into a holy war or a non-operational policy debate, let's assume you agree that providers of consumer connectivity shouldn't employ transparent traffic shaping, because it screws the savvy customers and business customers. ;-)

What can be done operationally?

For legitimate applications:

Encouraging "encryption" of more protocols is an interesting way to discourage this kind of shaping.

Using IPv6 addresses instead of ports would also help by obfuscating protocol and behavior -- even IP rotation through a /64 (cough, one IP per half-connection, anyone?).
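
(A minimal sketch of what per-connection rotation through a /64 might look like, assuming a Linux box with the whole /64 routed to it; the prefix is the IPv6 documentation range, standing in for a real allocation:)

    import ipaddress
    import random
    import socket

    PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")

    def random_source_address() -> str:
        # Pick one of the 2**64 host addresses in the prefix at random.
        return str(PREFIX.network_address + random.getrandbits(64))

    def connect_from_fresh_ip(dst_host: str, dst_port: int) -> socket.socket:
        s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        # Linux-specific: IP_FREEBIND permits binding an address that is
        # not yet configured on any local interface.
        s.setsockopt(socket.SOL_IP, getattr(socket, "IP_FREEBIND", 15), 1)
        s.bind((random_source_address(), 0))
        s.connect((dst_host, dst_port))
        return s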

For illegitimate applications:

Port knocking and pre-determined stream hopping (send 50 Kbytes on this port/IP pairing, then jump to the next, etc.).
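
(A hedged sketch of the hopping idea: both ends derive the same port schedule from a shared secret, so the hops look random to an observer but are predictable to the peers. The secret and chunk size are placeholders:)

    import hashlib
    import hmac
    import socket

    SECRET = b"pre-shared secret"
    CHUNK = 50 * 1024                      # ~50 Kbytes per hop, as above

    def hop_port(counter: int) -> int:
        # Both peers compute the same pseudo-random port for each hop.
        digest = hmac.new(SECRET, counter.to_bytes(8, "big"),
                          hashlib.sha256).digest()
        return 1024 + int.from_bytes(digest[:2], "big") % 64000

    def send_hopping(host: str, payload: bytes) -> None:
        counter = 0
        for offset in range(0, len(payload), CHUNK):
            with socket.create_connection((host, hop_port(counter))) as s:
                s.sendall(payload[offset:offset + CHUNK])
            counter += 1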

My caffeine hasn't hit, so I can't think of anything else. Is this something the market will address by itself?

DJ

Encouraging "encryption" of more protocols is an interesting way to
discourage this kind of shaping.

Dave Dittrich, on another list yesterday:

They're not the only ones getting ready. There are at least 5 anonymous
P2P file sharing networks that use RSA or Diffie-Hellman key exchange
to seed AES/Rijndael encryption at up to 256 bits. See:

http://www.planetpeer.de/wiki/index.php/Main_Page

You can only filter that which you can see, and there are many ways
to make it hard to see what's going over the wire.

Bottom line - "they" can probably deploy the countermeasures faster than
"we" can deploy the shaping....

Semi-related article:

http://ap.google.com/article/ALeqM5gyYIyHWl3sEg1ZktvVRLdlmQ5hpwD8U1UOFO0

-Matt

Odd, I saw *another* article that said that while the FCC is moving to
investigate unfair behavior by Comcast, Congress is moving to investigate
unfair behavior in the FCC.

http://www.reuters.com/article/industryNews/idUSN0852153620080109

This will probably get.... interesting.

They're not the only ones getting ready. There are at least 5 anonymous
P2P file sharing networks that use RSA or Diffie-Hellman key exchange
to seed AES/Rijndael encryption at up to 256 bits. See:

http://www.planetpeer.de/wiki/index.php/Main_Page

You can only filter that which you can see, and there are many ways
to make it hard to see what's going over the wire.

Bottom line - "they" can probably deploy the countermeasures faster than
"we" can deploy the shaping....

I'm certain of this. Early adopters are always ahead of the curve. The question is what happens when "quality of service" (little q) -- the purported "improving the surfing experience for the rest of our users" -- becomes the stated reason....

They (whatever provider is taking a position) should transparently state their policies and enforcement mechanisms. They shouldn't be selectively prioritizing traffic based on their perception of its purpose. The standard of reasonableness should be whether the net functions better -- such as dropping ICMP or attack traffic in favor of traffic with a higher signal-to-noise ratio (e.g., TCP).

As opposed to "whose traffic can we drop that is least likely to result in a complaint or cancellation?" The reason I consider this invalid is that it's a kissing cousin to "whose traffic can we penalize that we can later charge access to as a /premium service/?"
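
To make the "reasonableness" example above concrete, here is a toy token-bucket policer that holds ICMP under a hard ceiling while TCP passes untouched. The rates are invented, and a real implementation would live in the forwarding plane, not a script:

    import time

    class TokenBucket:
        def __init__(self, rate_bps: float, burst_bytes: float):
            self.rate = rate_bps / 8            # refill rate in bytes/sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, size_bytes: int) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= size_bytes:
                self.tokens -= size_bytes
                return True
            return False

    icmp_bucket = TokenBucket(rate_bps=64_000, burst_bytes=8_192)  # 64 kb/s

    def forward(protocol: str, size_bytes: int) -> bool:
        if protocol == "icmp":
            return icmp_bucket.allow(size_bytes)  # police ICMP to the ceiling
        return True                               # TCP/UDP pass untouched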

I'm sure I'm preaching to the choir here, but basically, if everyone got the 10 Mb/s service they believe they ordered, there would be no room to sell "higher priority" service to YouTube or what-have-you -- except when you want more than 10 Mb/s of service.

I think DirecTV's trial of VoD service over the Internet is going to be an awesome test case of this in real life. It may save them from me cancelling my DirecTV subscription just to see how Verizon FiOS handles the video streams. :-)

DJ

This does nothing to affect last-mile costs, and these costs could be the reason that you need to cap at all (certainly this is the case in the UK).

[snip]

However, my question is simple: for ISPs promising broadband service,
isn't it simpler to just announce a bandwidth quota/cap that your "good"
users won't hit and your bad ones will?

Simple bandwidth is not the issue. This is about traffic models that
use statistical multiplexing, making assumptions regarding the humans
at the helm, and those models directing the capital investment in
facilities and hardware. You will likely see p2p throttling where you
also see "residential customers must not host servers" policies.
Demand curves for p2p usage do not match any stat-mux model where
broadband is sold for less than it costs to maintain and upgrade the
physical plant.
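
A back-of-the-envelope illustration of why the stat-mux model breaks; every number here is hypothetical:

    # A shared 100 Mb/s segment sold as 8 Mb/s "unlimited" to 200
    # subscribers works only while the average subscriber keeps a bursty,
    # web-like duty cycle.
    segment_capacity_mbps = 100
    subscribers = 200
    sold_rate_mbps = 8

    for label, duty_cycle in [("web", 0.02), ("p2p", 0.80)]:
        offered_load = subscribers * sold_rate_mbps * duty_cycle
        print(f"{label}: {offered_load:.0f} Mb/s offered vs "
              f"{segment_capacity_mbps} Mb/s capacity")
    # web: 32 Mb/s offered vs 100 Mb/s capacity -- fits comfortably
    # p2p: 1280 Mb/s offered vs 100 Mb/s capacity -- the model collapses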

Especially when there is no real reason this P2P traffic can't
masquerade as something really interesting... like Email or Web (https,
hello!) or SSH or gamer traffic. I personally expect a day when there is
a torrent "encryption" module that converts everything to look like a
plain-text email conversation or IRC or whatever.

The "problem" with p2p traffic is how it behaves, which will not be
hidden by ports or encryption. If the *behavior* of the protocol[s]
change such that they no longer look like digital fountains and more
like "email conversation or IRC or whatever", then their impact is
mitigated and they would not *be* a problem to be shaped/throttled/
managed.

[snip]

I remember Bill Norton's peering forum regarding P2P traffic and how the
majority of it is between cable and other broadband providers...
Operationally, why not just lash a few additional 10GE cross-connects
and let these *paying customers* communicate as they will?

Peering happens between broadband companies all the time. That does
not resolve regional, city, or neighborhood congestion in one network.

[snip]

Encouraging "encryption" of more protocols is an interesting way to
discourage this kind of shaping.

This does nothing but reduce the pool of remote p2p nodes to those
running encryption-capable clients. This is why people think they
"get away with" using encryption: they are no longer the tallest nail
to be hammered down, and often enough they fit within their buckets.

[snip]

My caffeine hasn't hit, so I can't think of anything else. Is this
something the market will address by itself?

Likely. Some networks will abandon standards and tie customers to
gear that looks more like dedicated pipes (Narad, etc.). Some will
have the 800-lb-gorilla-tude to accelerate vendors' deployment of
DOCSIS 3.0. Folks with the appropriate war chests can (and do)
roll out PON and be somewhat generous... of course, the dedicated
and mandatory ONT & CPE looks a lot like voice pre-Carterfone...

Joe, not promoting/supporting any position, just trying to provide
    facts about running last-mile networks.

The FCC isn't just a small pool of people; like any gov't agency,
there are a *lot* of people behind all this stuff, from public safety
to CALEA to broadcast, PSTN, etc.

  The FCC was quick to step in when some ISP was blocking Vonage
traffic. This doesn't seem to have as big an impact IMHO (i.e., it won't
obviously block your access to a PSAP/911), but it still needs to be addressed.

  We'll see what happens, and how the 160 Mb/s DOCSIS 3.0 connections
and the infrastructure to support them pan out on the Comcast side...

  - Jared

Deepak,

No, it isn't.

The bandwidth cap generally ends up being set at some multiple of the
cost to service the account. Someone running at only half the cap is
already a "bad" user. He's just not bad enough that you're willing to
raise a ruckus about the way he's using his "unlimited" account.

Let me put it to you another way: it's the old 80-20 rule. You can
usually select a set of users responsible for 20% of your revenue
who account for 80% of your cost. If you could somehow shed only
that 20% of your customer base without fouling the cost factors, you'd
have a slightly smaller but much healthier business.
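
Toy numbers make the point; the figures below are invented:

    # Shed the 20% of accounts that generate 80% of cost, and watch
    # the margin on a hypothetical thin-margin book of business.
    revenue, cost = 100.0, 90.0

    kept_revenue = revenue * 0.80        # lose 20% of revenue...
    kept_cost = cost * 0.20              # ...but shed 80% of cost

    print(f"before: margin {revenue - cost:.0f} on {revenue:.0f} revenue")
    print(f"after:  margin {kept_revenue - kept_cost:.0f} "
          f"on {kept_revenue:.0f} revenue")
    # before: margin 10 on 100 revenue
    # after:  margin 62 on 80 revenue -- smaller, but much healthier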

The purpose of the bandwidth cap isn't to keep usage within a
reasonable cost or convince folks to upgrade their service... Its
purpose is to induce the most costly users to close their account with
you and go spend your competitors' money instead.

'Course, sometimes the competitor figures out a way to service those
customers for less money, and the departing folks each take their 20
friends with them. It's a double-edged sword, which is why it rarely
targets more than the worst 1% of hogs.

Regards,
Bill Herrin

Hi all, first post for me here, but I just couldn't help it.

We've been noticing this for quite a few years in France now (around the same time Cisco bought P-Cube -- anyone remember?).
What happened is that one day, some major ISP here decided customers were to be offered 24 Mb/s DSL down, unlimited, plus TV, plus VoIP towards hundreds of free destinations...
... all that for around 30€/month.

Just make a simple calculation with the amount of bandwidth in terms of transit. Say you're a French ISP: the transit price per meg could vary between €10 and €20 (which is already cheap, isn't it?). Multiply that by 24 Mb/s, compare it with the €30 you charge, and you'll feel like you'd better do everything possible to limit traffic going towards other ASes.
Certainly sounds like you've screwed your business plan. Let's be honest, though: dumping prices on Internet access also brought the country among the leading Internet countries, which had a rather positive effect on competition.
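
Roughly the calculation Greg suggests, with illustrative figures (simplified to bill average utilization rather than 95th percentile):

    retail_eur_month = 30
    transit_eur_per_mbps = 10      # cheap end of the quoted 10-20 EUR range
    access_rate_mbps = 24

    for avg_util in (0.01, 0.05, 0.25):
        transit_cost = access_rate_mbps * avg_util * transit_eur_per_mbps
        verdict = "profit" if transit_cost < retail_eur_month else "loss"
        print(f"{avg_util:.0%} average use -> EUR {transit_cost:.2f}/month "
              f"transit ({verdict} before any other cost)")
    # 1% -> EUR 2.40, 5% -> EUR 12.00, 25% -> EUR 60.00:
    # one steady p2p user blows the EUR 30 retail price on transit alone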

Another side of the story is that once upon a time, ISPs had a naturally OUTBOUND traffic profile, which supposedly made for good ratios when negotiating peerings.
Thanks to peer-to-peer, their ratios are now BALANCED, meaning ISPs are now in a dominant position for negotiating peerings.
In the end, the question is: why is it that you guys fight p2p while at the same time benefiting from it? It doesn't quite make sense, does it?

In France, the Internet got broken the very first day ISPs told people it was cheap. It definitely isn't, but there is no turning back now...

Greg VILLAIN
Independent Network & Telco Architecture Consultant

The vast majority of our last-mile connections are fixed wireless. The design of the system is essentially half-duplex with an adjustable ratio between download/upload traffic. P2P heavily stresses the upload channel and, left unchecked, results in poor performance for other customers.

Bandwidth quotas don't help much, since they just move the problem to the 'start' of the quota period.

Hard limits on upload bandwidth help considerably but do not solve the problem since only a few dozen customers running a steady 256k upload stream can saturate the channel. We still need a way to shape the upload traffic.
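
The arithmetic behind that, assuming (hypothetically) about 10 Mb/s of usable upload capacity on a sector:

    # How many steady 256 kb/s uploaders does it take to fill the channel?
    upload_channel_kbps = 10_000
    steady_stream_kbps = 256
    print(upload_channel_kbps // steady_stream_kbps,
          "steady uploaders fill the channel")   # 39 -- "a few dozen"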

It's easy to say "put up more access points, sectors, etc." but there are constraints due to RF spectrum, tower space, and so on.

Unfortunately, there are no easy answers here. The network (at least ours) is designed to provide broadband download speeds to rural customers. It's not designed for, and is not capable of, being a CDN for the rest of the world.

I would be much happier creating a torrent server at the data center level that customers could seed/upload from rather than doing it over the last mile. I don't see this working from a legal standpoint though.

The vast majority of our last-mile connections are fixed wireless. The
design of the system is essentially half-duplex with an adjustable ratio between download/upload traffic.

This, in a nutshell, is the problem: the ratio between upload and download should be 1:1, and if it were, there would be no problems. Folks need to stop pretending they aren't part of the Internet. Setting a ratio where upload:download is not 1:1 makes you a leech. It's a cheat designed to allow technology companies to claim their devices provide more bandwidth than they actually do. Bandwidth is two-way; you should give as much as you get.

Making the last mile a 16:1 unbalanced pipe (i.e., 6 Mb/s down and 384 kb/s up) is what has created this problem -- not file sharing, not running backups, not any of the things that require upstream speed. Across the Internet as a whole, up speed must equal down speed or it can't work. You can't leech and expect everyone else to pay for your unbalanced approach.

Geo.

I would be much happier creating a torrent server at the data
center level that customers could seed/upload from rather
than doing it over
the last mile. I don't see this working from a legal
standpoint though.

Seriously, I would discuss this with some lawyers who have
experience in the Internet area before coming to a conclusion
on this. The law is as complex as the Internet itself.

In particular, there is a technical reason for setting up
such torrent seeding servers in a data center and that
technical reason is not that different from setting up
a web-caching server (either in or out) in a data center.
Or setting up a web server for customers in your data center.

As long as you process takedown notices for illegal torrents
in the same way that you process takedown notices for illegal
web content, you may be able to make this work.

Go to Google and read a half-dozen articles about "sideloading"
to compare it to what you want to do. In fact, sideload.com may
have done some of the initial legal legwork for you. It's worth
discussing this with a lawyer to find out the limits in which
you can work and still be legal.

From a technical point of view, if your BitTorrent protocol seeder
does not have a copy of the file on its hard drive, but pulls it
in from the customer's computer, you would only be caching the
file in RAM, and there is some legal precedent going back to
the pre-Internet era that exempts such copies from legislation.
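
As a sketch of that RAM-only pattern (hosts, ports, and framing all hypothetical; this is a bare pass-through pipe, not a real BitTorrent seeder):

    import socket

    BUF = 64 * 1024

    def relay(customer: socket.socket, downloader: socket.socket) -> None:
        # Bytes are pulled from the customer's machine and pushed straight
        # to the downloader through a fixed-size buffer, so no copy of the
        # file ever lands on the relay's disk.
        try:
            while chunk := customer.recv(BUF):
                downloader.sendall(chunk)
        finally:
            downloader.close()
            customer.close()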

--Michael Dillon

Geo. wrote:

The vast majority of our last-mile connections are fixed wireless. The
design of the system is essentially half-duplex with an adjustable ratio between download/upload traffic.

This, in a nutshell, is the problem: the ratio between upload and download should be 1:1, and if it were, there would be no problems. Folks need to stop pretending they aren't part of the Internet. Setting a ratio where upload:download is not 1:1 makes you a leech. It's a cheat designed to allow technology companies to claim their devices provide more bandwidth than they actually do. Bandwidth is two-way; you should give as much as you get.

Making the last mile a 16:1 unbalanced pipe (i.e., 6 Mb/s down and 384 kb/s up) is what has created this problem -- not file sharing, not running backups, not any of the things that require upstream speed. Across the Internet as a whole, up speed must equal down speed or it can't work. You can't leech and expect everyone else to pay for your unbalanced approach.

Geo.

You're back to the 'last mile access' problem. Most cable, DSL, and wireless is asymmetric, and for good reason: it makes efficient use of limited overall bandwidth and provides customers the high download speeds they demand.

You can posit that the Internet should be symmetric, but it will take major financial and engineering investment to change that. Given that there is no incentive for network operators to assist third-party CDNs by increasing upload speeds, I don't see this happening in the near future. I am not even remotely surprised that network operators would be interested in disrupting this traffic.

Mark

Geo:

That's an over-simplification. Some access technologies have different
modulations for downstream and upstream: over the same spectrum, a
symmetric split a:b (with a = b) can carry less total traffic than an
asymmetric split c:d (with c > d), i.e. a + b < c + d. Compare
symmetric SDSL at roughly 2/2 Mb/s with ADSL2+ at 24/1 over similar
copper.

In other words, you're denying the reality that people download 3 to 4
times more than they upload, and penalizing everyone in trying to
attain a 1:1 ratio.

Frank

That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they upload; people with 10/10 upload twice as much as they download.

Interesting, because we have a whole college attached at 10/100/1000,
and they still have a 3:1 ratio of downloading to uploading. Of course,
that might be because the school is rate-limiting P2P traffic. That further
confirms that P2P, generally illegal in content, is the source of what I
would call disproportionate ratios.

Frank

Mikael Abrahamsson wrote:

In other words, you're denying the reality that people download 3 to 4 times more than they upload, and penalizing everyone in trying to attain a 1:1 ratio.

That might be your reality.

My reality is that people with 8/1 ADSL download twice as much as they upload; people with 10/10 upload twice as much as they download.

I'm a photographer. When I shoot a large event and have hundreds or thousands of photos to upload to the fulfillment servers, the event websites, etc., it can take 12 hours or more over my slow ADSL uplink. When my contract is up, I'll be changing to a different provider with symmetrical service and faster upload speeds.

The faster-upload service costs more. ISPs charge more for two reasons: 1) because they can (the market will bear it), and 2) because the average customer who buys this service uses more bandwidth.

Do you really find it surprising that people who upload a lot of data are the ones who would pay extra for the service plan that includes a faster upload speed? Why "penalize" the customers who pay extra?

I predicted this billing and usage problem back in the early days of DSL. Just as no webhost can afford to give customers "unlimited usage" on their web servers, no ISP can afford to give customers "unlimited usage" on their access plans. You hope that you don't get too many of the users who actually use your "unlimited" service -- but you are afraid to change your service plans to realistic plans that actually meet customer needs. You are terrified of dropping the term "unlimited" and having your competitors use it against you in advertising. So you try to "limit" the "unlimited" service without having to drop the term "unlimited" from your service plans.

Some features of an ideal internet access service plan for home users include:

1) Reasonable bandwidth usage allotment per month
2) Proactive monitoring and notification from the ISP when daily usage indicates the customer will exceed the plan's monthly bandwidth limit (see the sketch after this list)
3) A grace period, so the customer can change user behavior or change plans before being hit with an unexpected bill for "excess use".
4) Spam filtering that Just Works.
5) Botnet detection and proactive notifications when botnet activity is detected from end-user computers. Help them keep their computer running without viruses and botnets and they will love you forever!
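
Item 2 is straightforward to implement. A sketch of the projection logic; the cap and the figures in the example are invented:

    import datetime

    def projected_usage_gb(used_gb: float, today: datetime.date) -> float:
        # Project the month-to-date total out to the full month.
        next_month = (today.replace(day=28)
                      + datetime.timedelta(days=4)).replace(day=1)
        days_in_month = (next_month - datetime.timedelta(days=1)).day
        return used_gb / today.day * days_in_month

    def should_notify(used_gb: float, cap_gb: float,
                      today: datetime.date) -> bool:
        return projected_usage_gb(used_gb, today) > cap_gb

    # 120 GB used by Jan 10 against a 250 GB cap projects to 372 GB: notify.
    print(should_notify(120, 250, datetime.date(2008, 1, 10)))  # True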

If you add the value-adds (#4 and #5), customers will gladly accept reasonable bandwidth caps as *part* of the total *service* package you provide.

If all you want is to provide a pipe with no service, and to whine about those who use "too much" of the "unlimited" service you sell, well, then you create an adversarial relationship with your customers (starting with your lie about "unlimited"), and it's not surprising that you have problems.

jc

You're not delivering "full Internet IP connectivity"; you're delivering some degraded pseudo-Internet connectivity.

If you take away one of the major reasons for people to upload (i.e., P2P), then of course they'll use less upstream bandwidth. And what you call a disproportionate ratio is just the idea that "users should be consumers" and "we want to make money at both ends by selling download capacity to users and upload capacity to webhosting", instead of the Internet idea that you're fully part of the Internet as soon as you're connected to it.

We're delivering full IP connectivity; it's the school that's deciding to
rate-limit based on application type.

Frank