BitTorrent swarms have a deadly bite on broadband nets

http://www.multichannel.com/article/CA6332098.html

   The short answer: Badly. Based on the research, conducted by Terry Shaw,
   of CableLabs, and Jim Martin, a computer science professor at Clemson
   University, it only takes about 10 BitTorrent users bartering files on a
   node (of around 500) to double the delays experienced by everybody else.
   Especially if everybody else is using "normal priority" services, like
   e-mail or Web surfing, which is what tech people tend to call
   "best-effort" traffic.

Adding more network bandwidth doesn't improve the experience of other users; it just increases consumption by P2P users. That's why you are seeing many universities and enterprises spending money on traffic-shaping equipment instead of more network bandwidth.

Note that this is from 2006. Do you have a link to the actual paper by
Terry Shaw of CableLabs and Jim Martin of Clemson?

Regards
Marshall

This result is unsurprising and not controversial. TCP achieves
fairness *among flows* because virtually all clients back off in
response to packet drops. BitTorrent, though, uses many flows per
request; furthermore, since its flows are much longer-lived than web or
email, the latter never achieve their full speed even on a per-flow
basis, given TCP's slow-start. The result is fair sharing among
BitTorrent flows, which can only achieve fairness even among BitTorrent
users if they all use the same number of flows per request and have an
even distribution of content that is being uploaded.
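
A back-of-envelope sketch of that arithmetic (the flow counts below are
made up for illustration, not taken from the study): with per-flow
fairness, a user's share of the bottleneck is roughly proportional to
the number of backlogged flows it keeps open.

def per_user_share(flows_per_user):
    """Approximate steady-state share of a bottleneck for each user,
    assuming equal per-flow fairness and fully backlogged flows."""
    total_flows = sum(flows_per_user.values())
    return {user: n / total_flows for user, n in flows_per_user.items()}

# Hypothetical cable node: one BitTorrent user with 40 active flows,
# ten users doing web/email with one flow each.
demand = {"bt_user": 40}
demand.update({"web_user_%d" % i: 1 for i in range(10)})
shares = per_user_share(demand)
print(round(shares["bt_user"], 2))     # 0.8  -- about 80% of the link
print(round(shares["web_user_0"], 2))  # 0.02 -- about 2% for everyone else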

It's always good to measure, but the result here is quite intuitive.
It also supports the notion that some form of traffic engineering is
necessary. The particular point at issue in the current Comcast
situation is not that they do traffic engineering but how they do it.

    --Steve Bellovin, http://www.cs.columbia.edu/~smb

Dare I say it, it might be somewhat informative to engage in a
priority-queuing exercise like the Internet2 scavenger service.

Into one priority queue goes all the normal traffic, which is allowed to
use up to 100% of link capacity; into the other queue goes the traffic
you'd like to deliver at lower priority, which, given an oversubscribed
shared resource at the edge, is capped at some percentage of link
capacity beyond which performance begins to noticeably suffer. When the
link is under-utilized, low-priority traffic can use a significant chunk
of it; when high-priority traffic is present, it will crowd out the
low-priority stuff before the link saturates. Obviously, if high-priority
traffic alone fills up the link, then you have a provisioning issue.
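
A minimal sketch of that two-queue behavior (illustrative numbers only;
a real scheduler works per-packet, not per-interval):

def schedule(link_capacity_mbps, high_demand_mbps, low_demand_mbps):
    """Strict priority: low-priority traffic only gets whatever the
    high-priority class leaves unused."""
    high_sent = min(high_demand_mbps, link_capacity_mbps)
    low_sent = min(low_demand_mbps, link_capacity_mbps - high_sent)
    return high_sent, low_sent

print(schedule(100, 5, 200))    # (5, 95)  idle link: scavenger gets most of it
print(schedule(100, 90, 200))   # (90, 10) busy link: scavenger is crowded out
print(schedule(100, 120, 200))  # (100, 0) high-priority demand alone exceeds
                                #          capacity: a provisioning problem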

I2 characterized this as "worst-effort" service. Apps and users could
probably be convinced to set DSCP bits themselves in exchange for better
performance for interactive apps and control traffic versus worst-effort
bulk data transfer.
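
For what it's worth, an application can do its part with a couple of
lines. A sketch for Linux (the CS1/decimal-8 code point is the one the
I2 scavenger service used; IP_TOS support and constant names vary by
platform):

import socket

DSCP_CS1 = 8                # "lower than best effort" / scavenger class
TOS_BYTE = DSCP_CS1 << 2    # DSCP sits in the top six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
# Everything this socket sends now carries DSCP CS1, so class-based
# queuing can deprioritize it without any payload inspection at all.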

Obviously there's room for a discussion of net neutrality in here
someplace. However, the closer you do this to the CMTS, the more likely
it is to apply some locally relevant model of fairness.

And if you think about these P2P rate-limiting devices a bit more
broadly, all they really are is traffic classification and QoS policy
enforcement devices. If you can set DSCP bits with them for certain
applications and switch off the policy-enforcement feature ...

I wonder how quickly applications and network gear would implement QoS
support if the major ISPs offered their subscribers two queues: a default
queue, which handled regular internet traffic but squashed P2P, and a
separate queue that allowed P2P to flow uninhibited for an extra $5/month,
for which ISPs could purchase cheaper bandwidth.

But perhaps at the end of the day Andrew O. is right and we're best off
having a single queue and throwing more bandwidth at the problem.

Frank

How does one "squash P2P?" How fast will BitTorrent start hiding it's
trivial to spot ".BitTorrent protocol" banner in the handshakes? How
many P2P protocols are already blocking/shaping evasive?
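
(The leading "." in that banner is just how the handshake's length byte,
decimal 19, renders in a packet dump; the literal string "BitTorrent
protocol" follows it.) A naive classifier really is a one-liner, which
is exactly why it is so easy to defeat once clients obfuscate or encrypt
the handshake. A sketch:

BANNER = b"\x13BitTorrent protocol"

def looks_like_bittorrent(first_payload_bytes):
    """Classify a flow from the first payload bytes of the TCP stream."""
    return first_payload_bytes.startswith(BANNER)

print(looks_like_bittorrent(b"\x13BitTorrent protocol" + b"\x00" * 8))  # True
print(looks_like_bittorrent(b"\x16\x03\x01\x00..."))  # False -- e.g. a TLS hello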

It seems to me that what hurts the ISPs is the accompanying upload
streams, not the download (or at least the ISP feels the same
download pain no matter what technology their end users use to get
the data[0]). Throwing more bandwidth at it does not scale to the number
of users we are talking about. Why not suck it up and go with the
economic solution? It seems like the easy thing is for the ISPs to come
clean, admit their "unlimited" service is not, and put in upload
caps and charge for overages.

[0] Or is this maybe P2P's fault only in the sense that it makes
so much more content available that there is more for end-users
to download now than ever before?

> ... Why not suck it up and go with the
> economic solution? It seems like the easy thing is for the ISPs to come
> clean, admit their "unlimited" service is not, and put in upload
> caps and charge for overages.

Who will be the first? If there *is* competition in the
marketplace, the cable company does not want to be the
first to say "We limit you" (even if it is true, and
has always been true, for some values of truth). This
is not a technical problem (telling the truth); it
is a marketing issue. In case it has escaped anyone on
this list, I will assert that marketing's strengths have
never been telling the truth, the whole truth, and
nothing but the truth. I read the fine print in my
broadband contract. It states that one's mileage (speed)
will vary and that the download/upload speeds are maximums
only (and lots of other caveats and protections for the
provider; none for me, that I recall). But most people
do not read the fine print; they only see the TV
advertisements for cable with the turtle, or the flyers
in the mail with a cheap price for DSL (so you do not
forget, order before midnight tonight!).

> It seems to me that what hurts the ISPs is the accompanying upload
> streams, not the download (or at least the ISP feels the same
> download pain no matter what technology their end users use to get
> the data[0]). Throwing more bandwidth at it does not scale to the number
> of users we are talking about. Why not suck it up and go with the
> economic solution? It seems like the easy thing is for the ISPs to come
> clean, admit their "unlimited" service is not, and put in upload
> caps and charge for overages.

  [I've been trying to stay out of this thread, as I consider it
  unproductive, but here goes...]

  What hurts ISPs is not upstream traffic. Most access providers
are quite happy with upstream traffic, especially if they manage their
upstream caps carefully. Careful management of outbound traffic and an
active peer-to-peer customer base is good for ratios -- something that
access providers without large streaming or hosting farms can benefit
from.

  What hurt these access providers, particularly those in the
cable market, was a set of failed assumptions. The Internet became a
commodity, driven by this web thing. As a result, standards like DOCSIS
developed, and bandwidth was allocated, frequently in an asymmetric
fashion, to access customers. We have lots of asymmetric access
technologies that are not well suited to some new applications.

  I cannot honestly say I share Sean's sympathy for Comcast, in
this case. I used to work for a fairly notorious provider of co-location
services, and I don't recall any great outpouring of sympathy on this
list when co-location providers ran out of power and cooling several
years ago.

  I /do/ recall a large number of complaints and the wailing and
gnashing of teeth, as well as a lot of discussions at NANOG (both the
general session and the hallway track) about the power and cooling
situation in general. These have continued through this last year.

  If the MSOs, their vendors, and our standards bodies in general,
have made a failed set of assumptions about traffic ratios and volume in
access networks, I don't understand why consumers should be subject to
arbitrary changes in policy to cover engineering mistakes. It would be
one thing if they simply reduced the upstream caps they offered; it is
quite another to actively interfere with some protocols and not others --
if this is truly about upstream capacity, I would expect the former, not
the latter.

  If you read Comcast's services agreement carefully, you'll note that
the activity in question isn't mentioned. It only comes up in their Use
Policy, something they can and have amended on the fly. It does not appear
in the agreement itself.

  If one were so inclined, one might consider this at least slightly
dishonest. Why make a consumer enter into an agreement, which refers to a
side agreement, and then update it at will? Can you reasonably expect Joe
Sixpack to read and understand what is both a technical and legal document?

  I would not personally feel comfortable forging RSTs, amending a
policy I didn't actually bother to include in my service agreement with my
customers, and doing it all to shift the burden for my, or my vendor's
engineering assumptions onto my customers -- but perhaps that is why I am
an engineer, and not an executive.

  As an aside, before all these applications become impossible to
identify, perhaps it's time for cryptographically authenticated RST
cookies? Solving the forging problems might head off everything becoming
an encrypted pile of goo on tcp/443.
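
No such TCP mechanism exists today; the sketch below is only meant to
illustrate the concept, with a hypothetical shared secret established at
connection setup. The idea is that an endpoint would honor a RST only if
it carried a cookie that a middlebox without the secret cannot compute:

import hmac, hashlib

def rst_cookie(secret, conn, seq):
    """Cookie bound to the connection 4-tuple and sequence number."""
    src, sport, dst, dport = conn
    msg = ("%s:%d->%s:%d#%d" % (src, sport, dst, dport, seq)).encode()
    return hmac.new(secret, msg, hashlib.sha256).digest()[:8]

def accept_rst(secret, conn, seq, presented_cookie):
    return hmac.compare_digest(rst_cookie(secret, conn, seq), presented_cookie)

secret = b"negotiated at connection setup"     # hypothetical keying step
conn = ("10.0.0.1", 51515, "192.0.2.7", 6881)
print(accept_rst(secret, conn, 123456, rst_cookie(secret, conn, 123456)))  # True
print(accept_rst(secret, conn, 123456, b"\x00" * 8))  # False: forged RST dropped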

Information contained in this e-mail message is confidential, intended
only for the use of the individual or entity named above. If the reader
of this e-mail is not the intended recipient, or the employee or agent
responsible to deliver it to the intended recipient, you are hereby
notified that any review, dissemination, distribution or copying of this
communication is strictly prohibited. If you have received this e-mail
in error, please contact postmaster@globalstar.com

  Someone toss this individual a gmail invite...please!

  --msa

I'm not claiming that squashing P2P is easy, but apparently Comcast has
been successful enough to generate national attention, and the bandwidth
shaping providers are not totally a lost cause.

The reality is that copper-based internet access technologies -- dial-up, DSL,
and cable modems -- have made the design trade-off that there is substantially
more downstream than upstream capacity. In North American DOCSIS-based cable
modem deployments there is generally a 6 MHz-wide downstream channel at
256-QAM, while the upstream is only 3.2 MHz wide at 16-QAM (or even QPSK).
Even BPON and GPON follow that same asymmetrical track. And the reality is
that most residential internet access patterns reflect that (whether it's a
cause or a contributor, I'll let others debate that).
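
Rough, pre-overhead arithmetic (approximate symbol rates, no FEC or MAC
overhead, and each channel is shared by every modem on the node) shows
how lopsided that profile is:

def raw_mbps(msym_per_sec, bits_per_symbol):
    return msym_per_sec * bits_per_symbol

downstream = raw_mbps(5.36, 8)   # ~6 MHz downstream, 256-QAM -> ~42.9 Mbps raw
up_16qam   = raw_mbps(2.56, 4)   # 3.2 MHz upstream, 16-QAM   -> ~10.2 Mbps raw
up_qpsk    = raw_mbps(2.56, 2)   # 3.2 MHz upstream, QPSK     ->  ~5.1 Mbps raw
print(downstream, up_16qam, up_qpsk)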

Generally, ISPs have been reluctant to pursue usage-based models because they
add undesirable cost and aren't as attractive a marketing tool for winning
customers. Only in business models where bandwidth (local, transport, or
otherwise) is expensive has usage-based billing become a reality.

Frank

In a message written on Mon, Oct 22, 2007 at 08:24:17PM -0500, Frank Bulk wrote:

> The reality is that copper-based internet access technologies -- dial-up, DSL,
> and cable modems -- have made the design trade-off that there is substantially
> more downstream than upstream capacity. In North American DOCSIS-based cable
> modem deployments there is generally a 6 MHz-wide downstream channel at
> 256-QAM, while the upstream is only 3.2 MHz wide at 16-QAM (or even QPSK).
> Even BPON and GPON follow that same asymmetrical track. And the reality is
> that most residential internet access patterns reflect that (whether it's a
> cause or a contributor, I'll let others debate that).

Having now seen the cable issue described in technical detail over
and over, I have a question.

At the most recent NANOG several people talked about 100Mbps symmetric
access in Japan for $40 US.

This leads me to a few questions:

1) Is that accurate?

2) What technology do they use to offer the service at that price point?

3) Is there any chance US providers could offer similar technologies at
   similar prices, or are there significant differences (regulation,
   distance etc) that prevent it from being viable?

http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html

The Washington Post article claims that:

"Japan has surged ahead of the United States on the wings of better wire and more aggressive government regulation, industry analysts say.
The copper wire used to hook up Japanese homes is newer and runs in shorter loops to telephone exchanges than in the United States.

..."

a) Dense, urban area (less distance to cover)

b) Fresh new wire installed after WWII

c) Regulatory environment that forced telecos to provide capacity to Internet providers

Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms.
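
Rough arithmetic behind the per-capita point, using the article's
figures plus assumed approximate 2007 populations (Japan ~128M,
U.S. ~300M -- my numbers, not the article's):

japan_fiber_homes = 8.8e6
us_fiber_homes = japan_fiber_homes / 9      # "roughly nine times" implies ~1M
japan_pop, us_pop = 128e6, 300e6
ratio = (japan_fiber_homes / japan_pop) / (us_fiber_homes / us_pop)
print(round(ratio, 1))                      # ~21x more fiber homes per capita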

Nice article. Makes you wish...

   -Dave

A lot of the MDUs and apartment buildings in Japan are doing fiber to the
basement and then VDSL or VDSL2 in the building, or even Ethernet. That's
how symmetrical bandwidth is possible. Considering that much of the U.S.
population does not live in high-rises, this doesn't easily apply in the
United States.

Frank

David Andersen wrote:

> http://www.washingtonpost.com/wp-dyn/content/article/2007/08/28/AR2007082801990.html
>
> <snip>
>
> Followed by a recent explosion in fiber-to-the-home buildout by NTT. "About 8.8 million Japanese homes have fiber lines -- roughly nine times the number in the United States." -- particularly impressive when you count that in per-capita terms.
>
> Nice article. Makes you wish...

For the days when AT&T ran all the phones? I don't think so...

According to
http://torrentfreak.com/comcast-throttles-bittorrent-traffic-seeding-impossible/
Comcast's blocking affects connections to non-Comcast users. This
means that they're trying to manage their upstream connections, not the
local loop.

For Comcast's own position, see
http://bits.blogs.nytimes.com/2007/10/22/comcast-were-delaying-not-blocking-bittorrent-traffic/

For an environment that encouraged long-term investments with high payoff instead of short-term profits.

For symmetric 100Mbps residential broadband.

But no - I was as happy as everyone else when the CLECs emerged and provided PRI service at 1/3rd the rate of the ILECs, and I really don't care to return to the days of having to rent a telephone from Ma Bell. :-) But it's not clear that you can't have both, though doing it in the US with our vastly larger land area is obviously much more difficult. The same thing happened with the CLECs, really -- they provided great, advanced service to customers in major metropolitan areas where the profits were sweet, and left the outlying, low-profit areas to the ILECs. Universal access is a tougher nut to crack.

   -Dave

Once upon a time, David Andersen <dga@cs.cmu.edu> said:

> But no - I was as happy as everyone else when the CLECs emerged and
> provided PRI service at 1/3rd the rate of the ILECs

Not only was that CLEC service concentrated in higher-density areas, the
PRI prices were often not based in reality. There were a bunch of CLECs
with dot.com-style business plans (and they're no longer around).
Lucent was practically giving away switches and switch management (and
lost big $$$ because of it). CLECs also sold PRIs to ISPs based on
reciprocal compensation contracts with the ILECs that were based on
incorrect assumptions (that most calls would be from the CLEC to the
ILEC); rates based on that were bound to increase as those contracts
expired.

Back when dialup was king, CLECs selling cheap PRIs to ISPs seemed like
a sure-fire way to print money.

This doesn't explain why many universities, most with active, symmetric
ethernet switches in residential dorms, have been deploying packet-shaping
technology for even longer than the cable companies. If the answer were as
simple as upgrading everyone to 100Mbps symmetric ethernet, or even 1Gbps
symmetric ethernet, then the university resnets would be in great shape.

OK, maybe the greedy commercial folks screwed up and deserve what they got;
but why are the noble non-profit universities having the same problems?

Because off-the-shelf P2P stuff doesn't seem to pick up on internal
peers behind the great NAT that I've seen dorms behind? :-P

Adrian

Hi All,

I am looking for hosting facilities for about 10-20 racks and Internet transit with good local connectivity in Jordan. Can anybody help?

Thanks,
Leigh Porter
UK Broadband/PCCW