MTU of the Internet?

IP packet size distribution (38569M total packets):
  size:  1-32   64   96  128  160  192  224  256  288  320  352  384  416  448  480
  frac:  .000 .400 .046 .016 .018 .012 .008 .009 .011 .012 .006 .007 .005 .004 .004

  size:   512  544  576 1024 1536 2048 2560 3072 3584 4096 4608
  frac:  .010 .006 .120 .000 .099 .197 .000 .000 .000 .000 .000

Note that if every 1536 packet became 3 packets and every 2048 packet
became 4 packets, that would increase the packet count by 80% !!!


Unfortunately this will be rather impossible: with the MTU currently
set to 1536, the 2048-byte packets must be coming from another source
(router-to-router connections?). So the number of 2048 packets will
stay the same, but the 1536 ones will triple (adding ~20% more packets
in this scenario).
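The 80% and ~20% figures follow directly from the distribution; a quick
sanity check in Python (the fractions are taken from the table above):

```python
# Sanity check of the packet-count arithmetic, using the fractions
# from the size distribution table.
f_1536 = 0.099   # fraction of packets in the 1536 bucket
f_2048 = 0.197   # fraction of packets in the 2048 bucket

# Scenario 1: every 1536 packet becomes 3, every 2048 packet becomes 4.
increase_both = f_1536 * (3 - 1) + f_2048 * (4 - 1)
print(f"both resized: +{increase_both:.0%}")       # +79%, i.e. roughly 80%

# Scenario 2: only the 1536 packets triple (the 2048s come from elsewhere,
# e.g. router-to-router links, and stay as-is).
increase_1536_only = f_1536 * (3 - 1)
print(f"1536 only:    +{increase_1536_only:.0%}")  # ~+20%
```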

The big win (from the user's perspective) is that web pages update
quicker. Instead of sitting there waiting for a large packet to come
in before a page is partially updated, the user could have 2/3 of the data
already on the page. While it might not be "the right thing to do"
from the perspective of helping the "internet" function, the perceived
speed by the end user is higher, and that is part of the battle we are
all fighting (witness the coming deployment of more localized caches).

ken emery

<lots of speculation about packet sizes and mtus and the occasional
bash at pmtud and microsoft thrown in for good measure>

recently i was doing a fair amount of image-intensive web browsing
from home. the last-hop dialup line was 28.8.
was on a win95 machine with ftp software's stack.

i was getting real bad performance.
aggregate receive 'goodput' on the order of 10s or 100s of
bytes/second. but the modem lights were constantly on.

so i fired up the ol packet monitor
and watched the packets.

i saw many tcp connections open all at once.
i saw many many many retransmissions.
i saw many duplicate acks going back.
transmissions in both directions were very very bursty.
it was ugly.

so, the line was running at full speed, but most of what came
across was useless glop.

my theory was that it was a classic case of connection fratricide.

in short, i'd open a web page and get umpity-poo connections established,
one for each image, etc, etc. they'd all be in slow start and not
interfere with each other. they'd get a reasonable rtt and then start
cranking out the data. they'd all do this at once. implosion
of packets onto the poor dialup server. dialup server q-length
grows. that reasonable rtt gets blown all to hell. the webserver
misses some acks. it resends. even though all the connections
start slowing up, there still are umpity-poo connections
all (slowly) dumping packets into the dialup server. lots
of retransmissions come across. which i constantly re-ack.
then things go quiet a bit. then the cwnd opens up again.
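to put rough numbers on that implosion, here's a back-of-the-envelope
model in python. it's my own sketch, not a real trace -- the mss, base
rtt, and rto values are assumptions -- but it shows why many parallel
slow-starting connections blow the rtt past the timeout while two
connections don't:

```python
# back-of-the-envelope model of connection fratricide on a dialup line.
# n connections, each in slow start, dump cwnd segments per rtt into a
# 28.8 kbit/s link; queueing delay = burst size / line rate.  the mss,
# base rtt, and rto values below are assumptions, not measurements.

LINE_RATE = 28800 / 8   # bytes/sec through the 28.8 modem
MSS = 536               # common dialup segment size, bytes (assumed)
BASE_RTT = 0.2          # assumed propagation rtt, seconds
RTO = 3.0               # classic coarse retransmit timeout, seconds

def burst_delay(n_conns, rtt_round):
    """delay seen by the tail of the burst after `rtt_round` cwnd doublings."""
    cwnd = 2 ** rtt_round                 # segments per connection
    backlog = n_conns * cwnd * MSS        # bytes hitting the queue at once
    return BASE_RTT + backlog / LINE_RATE

def rounds_until_spurious_rto(n_conns):
    """slow-start rounds before queueing delay blows past the rto."""
    r = 0
    while burst_delay(n_conns, r) <= RTO:
        r += 1
    return r

print(rounds_until_spurious_rto(8))  # umpity-poo connections: 2 rounds
print(rounds_until_spurious_rto(2))  # cranked down to 2: 4 rounds
```

with 8 connections the shared queue pushes delay past the timeout after
only 2 rounds of doubling, so the webservers start resending data that
isn't lost -- exactly the useless glop on the line.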

so, to test the theory, i crank down the number of connections
that netscape will open up (an older version of navigator will
let you do this. i haven't figured out how to do it in the newer
browsers). i crank it down to 2. goodput goes _way_ up. i consistantly
get over 3kbytes/sec(24+kb) on the 28.8 line (as opposed to
maybe 500bytes/sec...).

i look at the packet trace then. nice beautiful packet-in, ack-out,
packet-in, ack-out, series. seq-nums going up nicely as they should.
no retransmissions.

note well that _none_ of this has to do with pmtu. none of this has
to do with the oft-proclaimed poor quality of m.s. software.

moral of the story
theorizing is wonderful
but a good packet trace beats a theory every time

and understanding how the protocols are _supposed_ to work
and then trying to figure out why they don't is a lot more
productive than bashing mammoth-software-corporations,
though a lot harder

frank kastenholz