MTU - at least this makes a little bit of sense.. If they're doing
HTTP/1.0 stuff with parallel connections then a smaller MTU is going
to make that parallelization much more effective at hiding latency,
and perceived performance will go up some.. it doesn't improve full
document retrieval time though (at least not positively!).. are dial
links really lossy enough that chopping the segment size to a third is
a big win in retransmit time, or are the win95/98 stacks really
braindead enough that they don't do pmtud and so are just trying to
dodge fragmentation? I found it really odd that the reference I use
all the time to track features in a myriad of shipped OSes actually
has a blank entry for pmtud on both of those (neither yes nor no..)
The perception of speed is likely to propagate the concept well. If
it looks like it works, they'll pass it along to someone else. I do
suspect that if at any point along the way, memory is tight, a smaller
packet has a better chance of not falling on the floor.
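To put rough numbers on the MTU effect (all figures assumed for the sketch: a 33.6 kbps modem, a full 1500-byte packet vs. the 576-byte size these tuning tools typically set):

```python
# Back-of-the-envelope sketch; LINK_BPS and the packet sizes are
# assumptions, not measurements from the discussion above.
LINK_BPS = 33_600  # assumed modem line rate

def serialization_ms(packet_bytes, bps=LINK_BPS):
    """Time for one packet to cross the modem link, in milliseconds."""
    return packet_bytes * 8 / bps * 1000

full = serialization_ms(1500)   # ~357 ms per full-size packet
small = serialization_ms(576)   # ~137 ms per small packet

# With parallel HTTP/1.0 connections interleaving packets, a smaller MTU
# means any one connection occupies the link for ~137 ms at a time
# instead of ~357 ms, so the other images appear to make progress
# sooner.  A lost packet likewise costs ~137 ms to resend, not ~357 ms.
print(f"1500-byte packet: {full:.0f} ms; 576-byte packet: {small:.0f} ms")
```

Note the total bytes moved is the same (slightly worse, with the extra headers), which is why full-document retrieval time doesn't improve -- only the interleaving does.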
RWIN - this is the one that boggles my mind.. it gets set way way way
down by the above mentioned tools.. I've seen it as low as 2500 bytes
recently. Anyone have any insight into the value of pushing this all
the way down? The web pages generally mumble about capping the amount
of data that needs to be resent in case of a failure.. which is of
course true in the extreme case, but I'd much rather have the
congestion window providing the throttle than the hard-limit of rwin
that can just cap transfer rates on you.. about the only reason I can
think of for small RWINs is to conserve the buffer space, but it sure
seems worth a few K to me to be sure I can work with high latency
links. You could argue that 3 or 4 K is sufficient for any reasonable
latency that is bottlenecked by a modem's throughput.. and eventually
I might give in (or maybe not ;)).. what I don't get is why this
results in any kind of perceived performance increase on the part of
the user under any condition.. It almost implies that TCP congestion
control is too conservative, although almost all work on that
indicates it's a little too aggressive (which would be the side to err
on..) Any thoughts?
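For what it's worth, the bandwidth-delay-product arithmetic does back up the "3 or 4 K" figure. A sketch with assumed numbers (a 33.6 kbps modem and a generous 500 ms round-trip time):

```python
# The window needed to keep a link busy is bandwidth * RTT; both the
# link rate and the RTT below are assumptions for illustration.
def window_to_fill_pipe(bps, rtt_s):
    """Receive window (bytes) needed to keep a link busy: bandwidth * RTT."""
    return bps / 8 * rtt_s

needed = window_to_fill_pipe(33_600, 0.5)  # 2100 bytes

# Even at half a second of RTT, ~2 KB of window fills the modem, so a
# 3-4 KB RWIN never throttles a single transfer at modem speeds -- but
# the 2500-byte settings seen in the wild sit right at the edge, and
# any longer RTT (or faster link) will stall on the window.
print(f"window needed to fill the pipe: {needed:.0f} bytes")
```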
When a given image hit connects, up to RWIN bytes will be sent and
then the sender will wait. If RWIN is big, one image loads more and
the others load less for however long those RWIN bytes take to come
across the modem. With a small RWIN, the parallelism increases and the
perception of speed does as well.
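That burst model can be put in numbers (a sketch with assumed figures: a 33.6 kbps modem; the RWIN values are illustrative, bracketing what the tuning tools set):

```python
# How long an RWIN-sized burst from one connection monopolizes the
# modem queue; LINK_BPS and the RWIN values are assumptions.
LINK_BPS = 33_600  # assumed modem line rate

def burst_hold_ms(rwin_bytes, link_bps=LINK_BPS):
    """Time one connection's RWIN-sized burst occupies the link, in ms."""
    return rwin_bytes * 8 / link_bps * 1000

for rwin in (2500, 8192, 32_768):
    # With a 32 KB window, one image can hold the link for ~8 seconds
    # while the rest stall; with a 2500-byte window no image holds it
    # longer than ~0.6 s, so all of them appear to creep forward together.
    print(f"RWIN={rwin:>6}: burst holds the link "
          f"{burst_hold_ms(rwin) / 1000:.1f} s")
```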
When evaluating this, do keep in mind the large number of parallel
connections for large numbers of images. Setting the connection limit
low makes the perception of speed go down. With more connections, it
appears to go up ... unless RWIN is high enough to still affect it on
a per connection basis.
I think a lot of the design of TCP simply never considered the case
of tens to maybe over a hundred parallel connections funneling through
a thin pipe at one end. Back then, who would have thought of what
we're doing now? They made progressive-loading images for a reason.