bufferbloat videos are up.

If people have heard of bufferbloat at all, it is usually just an
abstraction, despite their having had personal experience with it.
Bufferbloat can occur in your operating system, your home router, your
broadband gear, your wireless network, and almost anywhere in the
Internet. Most people still think that poor Internet performance simply
means they need more bandwidth, and take vast speed variation for
granted. Sometimes adding bandwidth can actually hurt rather than help.
Most people have no idea what they can do about bufferbloat.

So I've been working to put together several demos to help make
bufferbloat concrete, and demonstrate at least partial mitigation. The
mitigation shown may or may not work in your home router, and you need
to be able to set both upload and download bandwidth. People like Fred
Baker, with fiber to his house and Cisco routers, need not pay attention....

Two of four cases we commonly all suffer from at home are:

1. Broadband bufferbloat (upstream)
2. Home router bufferbloat (downstream)

Rather than attempt to show worst-case bufferbloat, which can easily
induce complete failure, I decided to demonstrate these two cases of
"typical" bufferbloat as shown by the ICSI data. As the bufferbloat
observed in the ICSI data varies widely, your mileage will also vary
widely.
There are two versions of the video:

1. A short bufferbloat video
    <http://www.youtube.com/watch?v=npiG7EBzHOU>, of slightly over 8
    minutes, which includes both demonstrations, but elides most of the
    explanation. Its intent is to get people "hooked" so they will want
    to know more.
2. The longer version of the video
    <http://www.youtube.com/watch?v=-D-cJNtKwuw>, clocks in at 21
    minutes, includes both demonstrations, but gives a simplified
    explanation of bufferbloat's cause, to encourage people to dig yet
    further.

Best regards,
Jim Gettys

Good visualisation. Just a little nitpicking: 802.11 is 54 megabit, not megahertz. It should also be pointed out that 802.11 is half duplex, which might affect things.

Also, you might want to point out in your material that large buffers are there to handle bursts on TCP sessions over high-RTT paths. Your suggestion to improve interactive performance hurts high-speed, high-RTT TCP sessions. This is probably the trade-off most people want, but it would be good to point it out. Promoting ECN support in equipment would also be good, because introducing WRED with a high drop probability at low buffer fill will really hurt performance for TCP transfers. ECN helps avoid retransmissions, which just waste bandwidth.
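To make the WRED/ECN point concrete, here is a minimal RED-style sketch in Python. This is purely my illustration: the thresholds and the `red_decision` helper are invented for the example and don't correspond to any vendor's implementation. The idea is that between the two thresholds the drop probability rises with average queue depth, and ECN-capable (ECT) packets get marked instead of dropped, avoiding the wasted retransmissions:

```python
import random

# Illustrative thresholds (packets) and max probability -- made up for
# this sketch, not taken from any real configuration.
MIN_TH, MAX_TH, MAX_P = 5, 15, 0.1

def red_decision(avg_queue, ect, rand=random.random):
    """Return 'enqueue', 'mark' (ECN), or 'drop' for one arriving packet.

    avg_queue: smoothed average queue depth in packets.
    ect: True if the packet is ECN-capable (ECT codepoint set).
    """
    if avg_queue < MIN_TH:
        return "enqueue"                      # queue short: accept everything
    if avg_queue >= MAX_TH:
        return "mark" if ect else "drop"      # queue long: signal congestion
    # Between the thresholds, probability rises linearly with queue depth.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    if rand() < p:
        return "mark" if ect else "drop"
    return "enqueue"

print(red_decision(3, ect=False))   # short queue -> enqueue
print(red_decision(20, ect=True))   # long queue, ECN-capable -> mark, no loss
```

The point of the sketch is the last branch: with ECN, the congestion signal reaches the sender without discarding the packet, so aggressive early marking does not cost a retransmission the way early dropping does.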

Where does your 100ms buffer size recommendation come from? The classical rule is 2xRTT, with a lot of platforms developed around 2000 sized at 600ms of buffering (because 300ms seems like a decent value to choose for "max RTT", I guess). At megabit speeds I'd say 100ms of FIFO buffering is too high anyway to achieve your goal, so to handle your problem you need "fairqueue" behaviour that looks at flows and puts persistently buffer-filling TCP sessions in the background. That would also mean TCP could use the full bandwidth without hurting interactivity.

Also, for some operating systems (Linux is the one I know about), there is a tendency to have not only large buffers in the IP stack, but also deep FIFO buffers towards the hardware, in the device driver. I engaged the linux-usb mailing list about this, and I did see some talk that indicated people understood the problem.

So basically I agree with your problem statement, but I think it would be beneficial if your proposed solution were a bit more specific, or at least pointed more in that direction. A solution that sounds like "limit buffers to 100ms or less and everything will be fine" would indeed remove some of the problem, but it would hurt performance for some applications.

The problem you're describing has been known for 25 years, unfortunately not by the right people in the business, especially the ones making high-volume, low-cost home equipment.

The key to the solution is better Active Queue Management, or AQM. As long as we have to decide on fixed queue sizes for all traffic, we're forced to cater to the most common traffic type.

It would be nice to put flows with different RTTs into different queues. Today that is basically impossible.
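Since a queue can't observe a flow's RTT directly, one crude proxy is its standing backlog. Here is a toy two-class sketch of the "fairqueue" idea from above (entirely my own illustration; the `TwoClassQueue` class and its threshold are invented, and real fair-queueing schedulers are far more involved): flows that keep a persistent backlog are demoted to a bulk queue that is only served when no interactive packet is waiting.

```python
from collections import deque

# Packets of standing backlog before a flow is treated as bulk -- an
# arbitrary value chosen for this sketch.
BULK_BACKLOG = 4

class TwoClassQueue:
    """Toy scheduler: sparse flows jump ahead of persistently backlogged ones."""

    def __init__(self):
        self.interactive = deque()
        self.bulk = deque()
        self.backlog = {}  # per-flow count of packets currently queued

    def enqueue(self, flow, pkt):
        n = self.backlog.get(flow, 0)
        self.backlog[flow] = n + 1
        # A flow with a standing backlog goes to the bulk queue.
        target = self.bulk if n >= BULK_BACKLOG else self.interactive
        target.append((flow, pkt))

    def dequeue(self):
        q = self.interactive or self.bulk   # interactive packets go first
        if not q:
            return None
        flow, pkt = q.popleft()
        self.backlog[flow] -= 1
        return pkt
```

With this scheme a single sparse packet (say, a DNS query) arriving behind a long-running transfer waits behind at most a few of its packets rather than the transfer's whole standing queue, which is the interactivity win described above.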