BitTorrent swarms have a deadly bite on broadband nets

I wonder how quickly applications and network gear would implement QoS
support if the major ISPs offered their subscribers two queues: a default
queue, which handled regular internet traffic but squashed P2P, and a
separate queue that allowed P2P to flow uninhibited for an extra $5/month.
The ISPs could then purchase cheaper bandwidth to back that second queue.
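For what it's worth, one plausible reading of that two-queue edge is easy to sketch. The class name, the drop-on-default policy, and the serve-premium-when-idle choice below are all my own illustration, not any ISP's actual implementation:

```python
from collections import deque

class TwoQueueEdge:
    """Toy model of the proposal above: subscribers on the default plan
    get their P2P squashed, while premium subscribers' P2P rides a
    second queue served whenever default traffic isn't waiting."""

    def __init__(self):
        self.default = deque()      # regular internet traffic
        self.premium_p2p = deque()  # the extra-$5/month queue
        self.dropped = 0

    def enqueue(self, pkt, is_p2p=False, premium=False):
        if is_p2p and not premium:
            self.dropped += 1               # "squashed" on the default plan
        elif is_p2p:
            self.premium_p2p.append(pkt)
        else:
            self.default.append(pkt)

    def dequeue(self):
        """Next packet to transmit; default traffic goes first."""
        if self.default:
            return self.default.popleft()
        return self.premium_p2p.popleft() if self.premium_p2p else None
```

A real edge device would do this per-subscriber in hardware and shape rather than hard-drop, but the bookkeeping is the same shape.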

But perhaps at the end of the day Andrew O. is right and it's best to
have a single queue and throw more bandwidth at the problem.

A system that wasn't P2P-centric could be interesting, though making it
P2P-centric would be easier, I'm sure. :wink:

A system in which Internet data flows could ever stop outright probably
doesn't work out well for the average user.

What about a system that would /guarantee/ a low amount of data on a low
priority queue, but would also provide access to whatever excess capacity
was currently available (if any)?
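That guarantee-plus-spare idea is essentially what rate/ceil schedulers like Linux HTB implement. A rough water-filling sketch of the allocation logic (function name and numbers invented for illustration; it assumes the guarantees themselves fit inside the link):

```python
def allocate(capacity, demands, guarantee):
    """Give every flow its guaranteed floor (or less, if it demands
    less), then water-fill the spare capacity among flows that still
    want more."""
    alloc = {f: min(d, guarantee) for f, d in demands.items()}
    spare = capacity - sum(alloc.values())
    while spare > 1e-9:
        hungry = [f for f in demands if alloc[f] < demands[f]]
        if not hungry:
            break                       # all demand satisfied
        share = spare / len(hungry)
        for f in hungry:
            extra = min(share, demands[f] - alloc[f])
            alloc[f] += extra
            spare -= extra
    return alloc

# A 10 Mb/s link with 2 Mb/s guaranteed per flow: the light user keeps
# its 1 Mb/s, and the two heavy users split the leftover capacity.
print(allocate(10, {"a": 8, "b": 1, "c": 8}, 2))
```

The point of the sketch is that the low guarantee only binds under contention; whenever excess capacity exists, everyone gets more than their floor.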

We've already seen service providers such as Virgin UK implement
something along these lines: during primetime they'll throttle the
largest consumers of bandwidth for four hours. The method is completely
different, but the end result looks somewhat similar. The recent
discussion of AU service providers also mentioned providing a baseline
service once you've exceeded your quota, which is a simplified version of
this.

Would it be better for networks to focus on separating data classes and
providing a product that's actually capable of quality-of-service style
attributes?

Would it be beneficial to be able to do this on an end-to-end basis (which
implies being able to QoS across ASNs)?

The real problem with the "throw more bandwidth" solution is that at some
point you simply cannot do it: the available capacity on your last mile
isn't sufficient for the numbers you're selling, even if you are able to
buy cheaper upstream bandwidth for it.

Perhaps that's just an argument to fix the last mile.

... JG

How about a system where I tell my customers that for a given plan X at price Y they get U bytes of “high priority” upload per month (or day or whatever) and after that all their traffic is low priority until the next cycle starts.

Now here’s the fun part. They can mark the priority on the packets they send (diffserv/TOS) and decide what they want treated as high priority and what they want treated as not-so-high priority.

If I’m a low usage customer with no p2p applications, maybe I can mark ALL my traffic high priority all month long and not run over my limit. If I run p2p, I can choose to set my p2p software to send all its traffic marked low priority if I want to, and save my high priority traffic quota for more important stuff.

Maybe the default should be high priority so that customers who do nothing but are light users get the best service.

Low priority upstream traffic gets dropped in favor of high priority, but users decide what’s important to them.

If I want all my stuff to be high priority, maybe there’s a metered plan I can sign up for so I don’t have any hard cap on high priority traffic each month but I pay extra over a certain amount.

This seems like it would be reasonable and fair and p2p wouldn’t have to be singled out.
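The accounting in that scheme is simple enough to sketch. Everything here (the class, the quota size, the string priorities) is invented for illustration:

```python
HIGH, LOW = "high", "low"

class UploadQuota:
    """Each subscriber gets U bytes of high-priority upload per billing
    cycle. Traffic the user marks low-priority never touches the quota;
    once the quota is spent, everything rides low priority until the
    cycle resets."""

    def __init__(self, premium_bytes_per_cycle):
        self.limit = premium_bytes_per_cycle
        self.used = 0

    def classify(self, nbytes, requested=HIGH):
        """Return the priority actually granted to this burst."""
        if requested == LOW:
            return LOW                      # user opted out; quota untouched
        if self.used + nbytes <= self.limit:
            self.used += nbytes
            return HIGH
        return LOW                          # premium quota exhausted

q = UploadQuota(premium_bytes_per_cycle=1_000_000)
print(q.classify(600_000))          # high
print(q.classify(200_000, LOW))     # low: p2p the user marked down
print(q.classify(600_000))          # low: would exceed the quota
```

Note the default argument mirrors the suggestion that unmarked traffic should count as high priority, so light users who do nothing get the best service.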

Any thoughts?

That's a fair plan.

Simple me came up with this one,

Don't say you offer 3Mbps if you only deliver 20kbps.

Simple enough. I think a big problem is that sales says they offer
all this bandwidth, but the reality is no one gets it. You can blame P2P
all you want, but realistically if users are offered, say, 3Mbps then
they have the right to expect it. It's not their fault, or the network's
fault, if that's not realistic.

You could say that you have no way of knowing how many users are on the
network, but that's not true; I bet you could figure out how many users
you can handle at a given bandwidth guarantee.

Sorry if this seems simplistic, but hey, it's fun to make things simple
:slight_smile: even if it's a bit unrealistic.

The key thing is that it can’t be too complicated for the subscriber. What you’ve described is already too difficult for the masses to consume.

The scavenger class, as has been described in other postings, is probably the simplest way to implement things. Let the application developers take care of the traffic marking and expose priorities in the GUI, and the marketing from the MSO needs to be “$xx.xx per month for general use internet, with unlimited bulk traffic for $y.yy”. Of course, the MSOs wouldn’t say that the first category excludes bulk traffic, or mention caps or upstream limitations or P2P control because that would be bad for marketing.
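Letting the application do the marking really is a one-liner at the socket level. A minimal Python sketch, assuming a Linux host (Windows historically ignores `IP_TOS`); the helper name is mine:

```python
import socket

# DSCP occupies the top six bits of the old IPv4 TOS byte (RFC 2474),
# so the value handed to IP_TOS is the code point shifted left by two.
DSCP_CS1 = 8   # CS1, the conventional "scavenger"/bulk class

def mark_socket(sock, dscp):
    """Ask the kernel to stamp this socket's outgoing packets with a DSCP."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(s, DSCP_CS1)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(tos))  # 0x20
```

A P2P client exposing "bulk priority" in its GUI would do exactly this on its transfer sockets; whether the marking survives past the first hop is, of course, the MSO's side of the bargain.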

Frank

The vast bulk of users have no idea how many bytes they consume each month or the bytes generated by different applications. The schemes being advocated in this discussion require that the end users be Layer 3 engineers.

That might dramatically shrink your ‘addressable market’, not to mention your job market …

:slight_smile:

Roderick S. Beck
Director of EMEA Sales
Hibernia Atlantic
1, Passage du Chantier, 75012 Paris
http://www.hiberniaatlantic.com
Wireless: 1-212-444-8829.
Landline: 33-1-4346-3209.
French Wireless: 33-6-14-33-48-97.
AOL Messenger: GlobalBandwidth
rod.beck@hiberniaatlantic.com
rodbeck@erols.com
``Unthinking respect for authority is the greatest enemy of truth.’’ Albert Einstein.

You'd be surprised; users in the Australian market have had to get
used to knowing how much bandwidth they use.

People are adaptable. Get used to it. :slight_smile:

Adrian

That misses the point. They are probably being forced to adapt by a monopoly or a quasi-monopoly or by the fact that transport into Australia is extremely expensive. The situation outside of Australia is quite different. A DS3 from Sydney to LA is worth about 10 DS3s NYC/London.

It is not impossible to move people to these price schemes, but in a market with many providers, it is highly risky.

A simpler and hence less costly approach for those providers serving mass markets is to stick to flat rate pricing and outlaw high-bandwidth applications that are used by only a small number of end users.


That misses the point. They are probably being forced to adapt by a monopoly or a quasi-monopoly or by the fact that transport into Australia is extremely expensive. The situation outside of Australia is quite different. A DS3 from Sydney to LA is worth about 10 DS3s NYC/London.

How's that missing the point? The market might not accept it outright but people
can and have adapted in areas where traffic charging and knowing how much you've
downloaded is the norm.

A simpler and hence less costly approach for those providers serving mass markets is to stick to flat rate pricing and outlaw high-bandwidth applications that are used by only a small number of end users.

.. until someone builds a better network, and then they're stuck? Oh wait, that's
right, America also has monopolised last-mile delivery networks which are
coincidentally the ones having the trouble?

Hm!

Adrian

People manage to count the stuff they use when they pay for it: minutes (cell), kWh (electricity), gallons (gas), etc.

People have managed to figure out cell phone plans where they get N minutes included and then pay extra over that.

The only users this would affect are those that upload a lot, because no one else should run over their “premium upload limit” and have their upload traffic reclassified as not-high priority.

If bytes are too tiny, maybe count it in tunes, or CDs, or web pages (the mythical average web page :slight_smile:) or bananas, whatever the marketing folks can live with. Call it all a free extra premium service so no one feels bad :slight_smile:

The main idea is that everyone on plan X gets premium service on their first Y bytes/month of upload by default, but if they know more they can mark some traffic so it doesn’t use up their premium quota but gets worse service. If they do nothing, then all their upload is premium until they run out of premium, which the median user never should.

Likewise, people seem to complain about anything. Even Australians seem to like to complain. Get used to it :slight_smile:

http://www.computerworld.com.au/index.php/id;1929779828

Again, is there no alternative between such extremely low data caps on everyone and extreme usage by a few?

Note that in many/most cases, the person signing the agreement and paying
the bill (the parental units) are not the ones actually consuming the
bandwidth (the offspring). The *consumer* of the bandwidth may very well
have a *very* good idea of exactly how many movies/albums they've pulled
down this month, and would much prefer if the bill-payer was totally in the
dark about it....

Users more or less know what a gigabyte is, because when they download too many of them, it fills up their drive. If the limits are high enough that only actively using high-bandwidth apps has any danger of going over them, the people using those apps will find the time to educate themselves. It's not that hard: an hour of video conferencing (500 kbps each way) is 450 MB, and downloading a gigabyte is... 1 GB.
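A quick sanity check of those numbers, assuming the video call is roughly symmetric so both directions count:

```python
# 500 kbps up plus 500 kbps down, for one hour, in decimal megabytes.
kbps_each_way = 500
seconds = 3600
megabytes = kbps_each_way * 1000 * 2 * seconds / 8 / 1_000_000
print(megabytes)  # 450.0
```

The same one-liner works for any app: rate in bits per second, times seconds, divided by eight, is bytes.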

The problem isn't a particular type of traffic in isolation; it's usually the impact of one network user's traffic on all the other network users' traffic sharing the same network.

Network Quotas for Individuals - A better answer to the P2P bandwidth problem?
http://www.greatplains.net/research/workshops/2004%20Annual%20Meeting/Network.Quotas2.ppt

Can ISPs and P2P users co-operate for improved performance:
http://www.net.t-labs.tu-berlin.de/papers/AFS-CISPP2PSCIP-07.pdf

P4P: Proactive Provider Assistance for P2P
http://cs-www.cs.yale.edu/homes/yong/publications/P4PVision_P4PWG.ppt

This link has a newer version of the presentation slides.

http://www.greatplains.net/conference/Network-Quotas.ppt

   We have since increased the Internet 1 bandwidth purchased and have increased the Residence Hall Quotas to 1 GigaByte per day.

Sure, I'll sell you a 1:1 pipe that you can use 100%. AUD $400 a megabit.
No worries. :slight_smile:

Adrian

The vast bulk of users have no idea how many bytes they
consume each month or the bytes generated by different
applications. The schemes being advocated in this discussion
require that the end users be Layer 3 engineers.

Actually, it sounds a lot like the Economy 7 tariffs found in the UK for
electricity. These are typically used by low income people who have less
education than the average population. And yet they can understand the
concept of saving money by using more electricity at night.

I really think that a two-tiered QOS system such as the scavenger
suggestion is workable if the applications can do the marking. Has
anyone done any testing to see if DSCP bits are able to travel unscathed
through the public Internet?
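A starting point for such a test: send a UDP datagram with the TOS byte set and read back the TOS the receiver actually saw. To answer the question properly you'd run the two halves on hosts at opposite ends of the public Internet; over loopback, as below, it only proves the local plumbing. This assumes Linux socket semantics (the `IP_RECVTOS` constant is 13 where the Python `socket` module lacks it, and the TOS arrives as a one-byte ancillary message):

```python
import socket

IP_RECVTOS = getattr(socket, "IP_RECVTOS", 13)  # Linux value, assumed

# Receiver: ask the kernel to deliver each packet's TOS byte as cmsg.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.setsockopt(socket.IPPROTO_IP, IP_RECVTOS, 1)
rx.settimeout(2)

# Sender: mark the probe with CS1/scavenger (DSCP 8 -> TOS byte 0x20).
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x20)
tx.sendto(b"probe", rx.getsockname())

data, ancdata, _, _ = rx.recvmsg(64, socket.CMSG_SPACE(4))
tos = next(c[2][0] for c in ancdata if c[0] == socket.IPPROTO_IP)
print(data, hex(tos))
```

If `tos` comes back zeroed or remapped on a real path, some hop in between is bleaching or rewriting the DSCP field, which is exactly the mangling the next post predicts.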

--Michael Dillon

P.S. it would be nice to see QoS be recognized as a mechanism for
providing a degraded quality of service instead of all the "first class"
marketing puffery.

Given the bad track record for PMTU 'frag needed' ICMP, ECN, and anybody
in 69/8, 70/8, 71/8, I'll make the prediction that DSCP bits are mangled in
too many ways to make effective use of them, and we can expect a 3-4 year
effort to get stuff cleaned up before it works as intended.

Iljitsch van Beijnum wrote:

The vast bulk of users have no idea how many bytes they consume each
month or the bytes generated by different applications. The schemes
being advocated in this discussion require that the end users be
Layer 3 engineers.

Users more or less know what a gigabyte is, because when they download
too many of them, it fills up their drive. If the limits are high
enough that only actively using high-bandwidth apps has any danger of
going over them, the people using those apps will find the time to
educate themselves. It's not that hard: an hour of video conferencing
(500 kbps each way) is 450 MB, downloading a gigabyte is.. 1 GB.

But then that same 1 GB can be sent back up to P2P clients any multiple
of times. When that happens the customer no longer has any idea how much
data they transferred, because "well, I just left it on and.....".

Really, it shouldn't matter how much traffic a user generates/downloads
so long as QoS makes sure that people who want real stuff get it and are
not killed by the guy down the street seeding the latest Harry Potter
movie. If people are worried about transit and infrastructure costs then
again, implement QoS and fix the transit/infrastructure to use it.

That way you can limit your spending on transit, for example, to a fixed
amount, and QoS will manage it for you.

That's not going to work in the long run. Just my podcasts are about 10 GB a month. You only have to wait until there's more HD video available online, and it gets easier to get at for most people, to see bandwidth use per customer skyrocket.

There are much worse things than having customers that like using your service as much as they can.

Sure, Apple has. I don't think they intended to, though.

http://www.mvldesign.com/video_conference_tutorial.html

Search for "DSCP" or "Comcast" on that page.