Can P2P applications learn to play fair on networks?

Rep. Boucher's solution: more capacity, even though it has been
demonstrated many times that more capacity doesn't actually solve this
particular problem.

That would seem to be an inaccurate statement.

Is there something in humans that makes it difficult to understand
the difference between circuit-switched networks, which allocate a fixed
amount of bandwidth for the duration of a session, and packet-switched
networks, which vary the available bandwidth depending on overall demand
throughout a session?

Packet-switched networks are darn cheap because you share capacity with
lots of other users; circuit-switched networks are more expensive because
you get dedicated capacity for your sole use.

So, what happens when you add sufficient capacity to the packet-switched
network that it is able to deliver committed bandwidth to all users?

Answer: by adding capacity, you've created a packet-switched network where
you actually get dedicated capacity for your sole use.

If you're on a packet network with a finite amount of shared capacity,
there *IS* an ultimate amount of capacity that you can add to eliminate
any bottlenecks. Period! At that point, it behaves (more or less) like
a circuit-switched network.

The reasons not to build your packet-switched network with that much
capacity are financial and technical, not a matter of impossibility. We
"know" that the average user will not use all their bandwidth. It's also
more expensive to install more equipment; it is nice when you can fit
more subscribers on the same amount of equipment.
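
To put rough numbers on that oversubscription logic, here is a
back-of-the-envelope sketch in Python; the uplink size, subscriber count,
access rate, and activity factor are made-up figures for illustration,
not measurements from any real network.

  # Back-of-the-envelope oversubscription arithmetic; every number here is
  # a made-up assumption for illustration only.
  uplink_mbps = 1_000          # shared uplink behind one aggregation point
  subscribers = 500            # subscribers sharing that uplink
  access_rate_mbps = 20        # rate each subscriber's access line is sold at
  avg_active_fraction = 0.05   # fraction of subscribers busy at the peak hour

  # Dedicated (circuit-like) provisioning: every subscriber gets their full
  # rate reserved, so the uplink must carry subscribers * access_rate.
  dedicated_need = subscribers * access_rate_mbps

  # Statistical (packet-switched) provisioning: only the expected number of
  # simultaneously active subscribers needs full rate at once.
  statistical_need = subscribers * avg_active_fraction * access_rate_mbps

  print(f"uplink needed, dedicated:   {dedicated_need} Mbps")        # 10000
  print(f"uplink needed, statistical: {statistical_need:.0f} Mbps")  # 500
  print(f"oversubscription ratio:     {dedicated_need / uplink_mbps:.0f}:1")

With those assumed numbers, the same 1 Gbps uplink serves 50 subscribers
with dedicated capacity or 500 with shared capacity, which is the whole
financial argument.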

However, at the point where capacity becomes a problem, you actually do
have several choices:

1) Block certain types of traffic,

2) Limit {certain types of, all} traffic,

3) Change user behaviours, or

4) Add some more capacity

These come to mind as the major available options. ALL of these can be
effective. EACH of them has specific downsides.
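
As a rough illustration of option 2, here is a minimal token-bucket
limiter sketched in Python; the class, rate, and burst values are
hypothetical, and real equipment enforces this in hardware or in the
kernel rather than in a script.

  import time

  class TokenBucket:
      """Toy token-bucket rate limiter (illustrative only)."""

      def __init__(self, rate_bytes_per_sec, burst_bytes):
          self.rate = rate_bytes_per_sec    # long-term average rate allowed
          self.capacity = burst_bytes       # how much burst we tolerate
          self.tokens = burst_bytes         # start with a full bucket
          self.last = time.monotonic()

      def allow(self, packet_bytes):
          """Return True if this packet fits within the configured rate."""
          now = time.monotonic()
          # Refill tokens for the time that passed, capped at the burst size.
          self.tokens = min(self.capacity,
                            self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= packet_bytes:
              self.tokens -= packet_bytes
              return True
          return False  # over the limit: drop, queue, or mark the packet

  # Example: cap one subscriber (or one traffic class) at 1 Mbps, 64 KB burst.
  bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=64_000)
  print(bucket.allow(1500))  # a full-size packet passes while tokens remain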

... JG

Changing the capacity at different points in the network merely moves
the congestion points around the network. There will still be congestion
points in any packet network.

The problem is not bandwidth, it's shared congestion points.

No shared congestion points: bandwidth is irrelevant.
Shared congestion points: bandwidth is still irrelevant.

A 56Kbps network with no shared congestion points: not a problem
A 1,000 Terabit network with shared congestion points: a problem

The difference is whether there are shared congestion points, not the bandwidth.
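
A small sketch of that point, with hypothetical link sizes and user
counts: a user's throughput is set by the worst shared link on the path,
so a slow but unshared path is fine while a fast access line behind a
crowded aggregation link is not.

  def per_user_share(capacity_mbps, users_sharing):
      """Naive equal split of one link among the users crossing it."""
      return capacity_mbps / users_sharing

  def path_throughput(links):
      """links: list of (capacity_mbps, users_sharing) along the path."""
      return min(per_user_share(c, n) for c, n in links)

  # 56 kbps access line, nothing shared downstream: slow, but no problem.
  print(path_throughput([(0.056, 1), (10_000, 1)]))   # -> 0.056 Mbps

  # Gigabit access line, but 1,000 users share a 100 Mbps aggregation link.
  print(path_throughput([(1_000, 1), (100, 1_000)]))  # -> 0.1 Mbps, a problem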

If you think adjusting capacity is the solution, and hosts don't voluntarily adjust their demand on their own, then you should be *REDUCING* your access capacity, which will move the congestion point closer to the host.

However, rather than trying to eliminate every shared congestion point in a packet network, I think a better idea would be for the TCP protocol magicians to develop a multi-flow congestion avoidance scheme that shares the available capacity more fairly among all of the demand at
the various shared congestion points in the network.
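
To make that concrete, here is a toy calculation (made-up bottleneck
size and flow counts, not any real TCP algorithm): with today's roughly
per-flow fairness, a host that opens many connections takes a
proportionally larger share of a shared bottleneck, while the multi-flow
scheme suggested above would aim to split the bottleneck per host
regardless of flow count.

  BOTTLENECK_MBPS = 100
  flows_per_host = {"web_user": 2, "p2p_user": 40}  # concurrent TCP flows

  # Today: the bottleneck is shared roughly equally per *flow*, so a host's
  # share scales with how many flows it opens.
  total_flows = sum(flows_per_host.values())
  per_flow_share = {host: BOTTLENECK_MBPS * n / total_flows
                    for host, n in flows_per_host.items()}

  # Goal of a per-host / multi-flow scheme: share roughly equally per *host*,
  # no matter how many flows each host opens.
  per_host_share = {host: BOTTLENECK_MBPS / len(flows_per_host)
                    for host in flows_per_host}

  print("per-flow fairness:", per_flow_share)  # p2p_user gets ~95 Mbps
  print("per-host fairness:", per_host_share)  # each host gets 50 Mbps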

Isn't the Internet supposed to be a "dumb" network with "smart" hosts? If the hosts act dumb, is the network forced to act smart?