30%, huh? (Re: Questions about Internet Packet Losses )

Warning: this isn't about north american network operations! Hit "D" now.

I wish I had a background in "fractal mathematics" or whatever, but instead
I've just got my gut feelings to go by. They say that 30% loss translates
loosely to 30% overcommitment. As Tony and others have remarked here today,
TCP does not overcommit enough once it's up and running to account for 30%
loss. And given slow start, I'm not sure it could overcommit that much even
given HTTP 1.0's tendency to start a lot of new connections.

So if we are seeing 30% overcommit, my gut feeling is that most TCP endpoints
aren't using slow start. Even given blasty UDP-based multimedia protocols,
a bunch of TCPs sharing an exchange point with such traffic would back off
and be polite. 30% overcommit doesn't jibe with cooperating slow-start TCPs
no matter how many new connections start up. I mention this only because the
IETF had a huge flame war over whether slow start should be a recommended
standard or merely an implementation detail, as some argued it was.
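To make the gut feeling concrete, here's a toy sketch (not a real TCP model, and the numbers are made up): a handful of flows share a fixed-capacity link, cooperative flows do additive-increase/multiplicative-decrease the way slow-start-era TCPs do, and the "loss rate" is just the fraction of offered load that exceeds capacity. Cooperating flows keep that fraction tiny; flows that never back off drive it way up, which is the only way I can see to get to 30%.

```python
# Toy AIMD sketch, NOT a real TCP simulation: n_flows share a link of
# fixed capacity.  Cooperative flows halve their window on overflow and
# add 1 per round; "aggressive" flows never back off.  We return the
# fraction of offered load that exceeded capacity -- a crude stand-in
# for the loss (overcommitment) rate.

def simulate(n_flows, capacity, aggressive=0, rounds=1000):
    cwnd = [1.0] * n_flows          # per-flow congestion window
    sent = lost = 0.0
    for _ in range(rounds):
        offered = sum(cwnd)
        sent += offered
        if offered > capacity:
            lost += offered - capacity
            # only the cooperative flows (index >= aggressive) halve
            for i in range(aggressive, n_flows):
                cwnd[i] = max(1.0, cwnd[i] / 2)
        for i in range(n_flows):
            cwnd[i] += 1.0          # additive increase every round
    return lost / sent

coop = simulate(10, capacity=500)                  # all flows cooperate
greedy = simulate(10, capacity=500, aggressive=10)  # nobody backs off
```

In this cartoon, the cooperating flows hold the overcommitment well under a percent, while the never-back-off flows push it past 50% -- the point being that a 30% loss rate is not something polite AIMD flows produce on their own.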

TCP creates a prisoner's dilemma problem for implementors, and it's sad that
more than one has said "let's not cooperate so that we can get more bit times."
I saw an ad from one company claiming that their TCP stack was 200%
faster than the rest of them. Given that "the rest of them" start from BSD
and virtually all try to cooperate and virtually all run at wire speed if
there is no contention, I am guessing that this TCP stack went "faster" by
being more aggressive.

As a good friend likes to tell me: "The world is full of clowns."