jumbo frames

Theoretically increasing the MTU anywhere you are not actually generating
packets should have no impact except to prevent unnecessary fragmentation.
But then again, theoretically IOS shouldn't get buggier with each release.

There will obviously be different packet handling techniques for the
larger packets, and I'm not aware of any performance or stability testing
that has been done for jumbo frames. I'm guessing the people who are
actively using them haven't been putting them through line-rate conditions
with a mix of small and large packets.

Obviously anything extra and uncommon you try to do runs the risk of
setting off new bugs (even common stuff sets off new bugs). I can tell you
some of the drivers I have seen for PC GigE cards (especially Linux) badly
mangle jumbo frames and may not perform well.

> I've seen a lot of discussion about why one would want to do Jumbo
> frames on your backbone...let's assume for the sake of argument that a
> customer requirement is to support 9000-byte packets across your
> backbone, without fragmentation.
>
> Why not bump MTU up to 9000 on your backbone interfaces (assuming they
> support it)?
>
> What negative effects might this have on your network?
>
> a) performance delivering average packet sizes
> b) QoS
> c) buffer/pkt memory utilization
> d) other areas

> Theoretically increasing the MTU anywhere you are not actually generating
> packets should have no impact except to prevent unnecessary fragmentation.

Well, yes.

But let's consider, then, the impact of forwarding packets into the same
pipe when their sizes are more than two orders of magnitude apart (60 vs.
9000 bytes).

> But then again, theoretically IOS shouldn't get buggier with each release.

Let's not focus on the OS being used, and assume it's a well-designed OS.
For the sake of argument, let's say it's a simulated OS.

> There will obviously be different packet handling techniques for the
> larger packets,

Why will there "obviously" be different packet handling techniques?
Is there a special routine for packets over 1500 bytes? 4470 bytes?
Or do you mean policy mapping jumbo packets into dedicated queues?

> and I'm not aware of any performance or stability testing
> that has been done for jumbo frames. I'm guessing the people who are
> actively using them haven't been putting them through line-rate conditions
> with a mix of small and large packets.

Probably not, but if you take a person who is using them and let them use
your backbone as a point-to-point network, you start to see that...not at
line rate (preferably, anyway) but certainly large quantities of small
packets mixed in with some really big ones.

Where your possible QoS impact comes in is when you have jumbo packets
sitting in the best-effort queue and small frames sitting in your priority
queue: you get some queue contention. This doesn't happen so much as a
result of having the large packet there, but rather because your minimum
queue size has to be large enough to carry the packet. The linecard is
going to spend more time draining the best-effort queue while the priority
queue gets stacked.
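
To put rough numbers on that (a back-of-the-envelope sketch; the line rate
and frame sizes below are assumptions, not measurements):

    # How long does a priority-queue packet wait behind one jumbo frame
    # that has already started serializing? Assumed GigE line rate.
    LINE_RATE_BPS = 1_000_000_000   # 1 Gbps

    def serialization_us(frame_bytes, rate_bps=LINE_RATE_BPS):
        """Time to clock one frame onto the wire, in microseconds."""
        return frame_bytes * 8 / rate_bps * 1e6

    print(serialization_us(9000))   # ~72 us for one 9000-byte jumbo
    print(serialization_us(1500))   # ~12 us for a full-size standard frame
    print(serialization_us(64))     # ~0.5 us for a small frame
    # Worst case, a priority packet arriving just as a jumbo starts
    # transmitting sits for ~72 us instead of ~12 us.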

Even without QoS, what do jumbo frames do to overall buffer utilization?
(I would imagine nothing, since I suspect this is a function of line speed,
not packet size).

Dave

> I've seen a lot of discussion about why one would want to do Jumbo
> frames on your backbone...let's assume for the sake of argument that a
> customer requirement is to support 9000-byte packets across your
> backbone, without fragmentation.
>
> Why not bump MTU up to 9000 on your backbone interfaces (assuming they
> support it)?
>
> What negative effects might this have on your network?
>
> a) performance delivering average packet sizes
> b) QoS
> c) buffer/pkt memory utilization
> d) other areas

> Theoretically increasing the MTU anywhere you are not actually generating
> packets should have no impact except to prevent unnecessary fragmentation.
> But then again, theoretically IOS shouldn't get buggier with each release.

Well, the way it oughta work is that the backbone uses the same MTU as
the largest MTU of your endpoints. So, for example, if you have a
buncha hosts on an FDDI ring running at 4470, you want to make sure
those frames don't have to get fragmented inside your network. Ideally,
all hosts have the same MTU and no one has to worry about that, but in
practice it seems to be better to push the fragmentation as close to
the end user as possible. (That is, if a user on a 1500-MTU link makes
a request to a host on a 4470 link, the response is 4470 up until the
user's end network.) Of course, path MTU discovery makes this a moot
point: the conversation will be held in 1500-byte packets.
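
Just to spell out the arithmetic PMTUD ends up doing there (the per-hop
link MTUs below are invented for illustration):

    # Path MTU discovery effectively clamps the sender to the smallest MTU
    # along the path, whatever the backbone links support.
    path_link_mtus = [1500, 4470, 4470, 9000, 1500]   # user LAN ... far end

    path_mtu = min(path_link_mtus)
    print(path_mtu)   # 1500: the whole conversation runs at 1500 bytes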

That brings up another interesting question... Everything in the
backbone these days that's DS3 and up talks at 4470. But Ethernet
(GigE, etc.), T1s, and dial lines still talk at 1500. I wonder if
there are any paths that exist at 4470 all the way through. (Probably,
but they're probably rare.)

What I've said for some time now is that I would like to see hosts
abandon the 1500-byte MTU and move to something larger in the
interests of efficiency (preferably 4470 and multiples thereof, so we
can actually establish a "rule of thumb" for larger MTU sizes). It's
not much, I grant you, but with increasingly higher bandwidths
available to the average user, every little bit helps.
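
For what that "every little bit" works out to, here's a rough overhead
figure, assuming a bare 40-byte TCP/IP header on full-sized packets (no
options or link-layer framing counted):

    # Fraction of each full-sized packet spent on a 40-byte TCP/IP header.
    for mtu in (1500, 4470, 9000):
        print(f"MTU {mtu}: {40 / mtu:.2%} header overhead")
    # MTU 1500: 2.67%   MTU 4470: 0.89%   MTU 9000: 0.44%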

> There will obviously be different packet handling techniques for the
> larger packets, and I'm not aware of any performance or stability testing
> that has been done for jumbo frames. I'm guessing the people who are
> actively using them haven't been putting them through line-rate conditions
> with a mix of small and large packets.

Well, the problem with buffering 9k packets is that it doesn't take
many of them to bloat a queue. If you're talking about links that pass
tens of thousands of packets per second and you want to have 0.25 seconds
of buffer space, it takes a lot of memory.
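
Putting made-up but plausible numbers on it (the packet rate below is an
assumption, not anyone's real linecard):

    # Memory needed for 0.25 s of buffering if the queue fills with jumbos.
    pps = 50_000                   # "tens of thousands" of packets per second
    buffer_seconds = 0.25
    frame_bytes = 9000

    packets_buffered = pps * buffer_seconds            # 12,500 packets
    print(packets_buffered * frame_bytes / 1e6, "MB")  # ~112 MB of buffer
    print(packets_buffered * 1500 / 1e6, "MB")         # vs. ~19 MB at 1500 bytes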

> Obviously anything extra and uncommon you try to do runs the risk of
> setting off new bugs (even common stuff sets off new bugs). I can tell you
> some of the drivers I have seen for PC GigE cards (especially Linux) badly
> mangle jumbo frames and may not perform well.

<soapbox>

"jumbo frames"?

Hmm

"tag-switching mtu 1518"?

Hmm.

"Label stacking"?

Hmm.

</soapbox>

-Wayne

> > There will obviously be different packet handling techniques for the
> > larger packets,

> Why will there "obviously" be different packet handling techniques? Is
> there a special routine for packets over 1500 bytes? 4470 bytes? Or
> do you mean policy mapping jumbo packets into dedicated queues?

Generally, yes. There are often different memory pools for different-sized
packets.
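
Conceptually something like this (the pool names and buffer sizes below are
invented for illustration, not any particular vendor's):

    # A packet goes into the smallest buffer pool that can hold it; a box
    # tuned for 1500-byte traffic may have few or no jumbo-sized buffers.
    POOLS = [("small", 128), ("medium", 1600), ("large", 4500), ("jumbo", 9216)]

    def pick_pool(packet_len):
        for name, buf_size in POOLS:
            if packet_len <= buf_size:
                return name
        raise ValueError("packet larger than any buffer pool")

    print(pick_pool(64), pick_pool(1500), pick_pool(9000))  # small medium jumbo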

> > and I'm not aware of any performance or stability testing
> > that has been done for jumbo frames. I'm guessing the people who are
> > actively using them haven't been putting them through line-rate conditions
> > with a mix of small and large packets.

> Probably not, but if you take a person who is using them and let them
> use your backbone as a point-to-point network, you start to see
> that...not at line rate (preferably, anyway) but certainly large
> quantities of small packets mixed in with some really big ones.

My gut instinct is that there are very few people who have fully tested
their products at close to line-rate speeds with a full mix of small and
jumbo frames, but I'd love for someone to prove me wrong.

> Where your possible QoS impact comes in is when you have jumbo packets
> sitting in the best-effort queue and small frames sitting in your
> priority queue: you get some queue contention. This doesn't happen so
> much as a result of having the large packet there, but rather because
> your minimum queue size has to be large enough to carry the packet.
> The linecard is going to spend more time draining the best-effort queue
> while the priority queue gets stacked.

> Even without QoS, what do jumbo frames do to overall buffer
> utilization? (I would imagine nothing, since I suspect this is a
> function of line speed, not packet size).

True, if you're running a very congested GigE pipe with jumbo frames
taking a large amount of the traffic, the smaller frames may be delayed.
Many jumbo frame implementations have separate queues for regular vs.
jumbo frames; I know at least the Alteon products do. I'm not sure if
anyone is intelligently using this to provide a WFQ-like service though.
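
If someone did want to do that intelligently, the obvious shape is a
deficit-round-robin-style scheduler over the two queues; a rough sketch
(the quanta and the sample packets below are made up):

    # Deficit round robin over a standard and a jumbo queue, so a burst of
    # jumbos can't starve the small-frame queue of its share of the wire.
    from collections import deque

    queues = {
        "standard": deque([64, 1500, 64, 576]),   # packet sizes in bytes
        "jumbo":    deque([9000, 9000]),
    }
    quantum = {"standard": 1500, "jumbo": 9000}   # byte credit added per round
    deficit = {name: 0 for name in queues}

    while any(queues.values()):
        for name, q in queues.items():
            if not q:
                continue
            deficit[name] += quantum[name]
            while q and q[0] <= deficit[name]:
                pkt = q.popleft()
                deficit[name] -= pkt
                print("send", pkt, "bytes from", name)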

> Well, the way it oughta work is that the backbone uses the same MTU as
> the largest MTU of your endpoints. So, for example, if you have a
> buncha hosts on an FDDI ring running at 4470, you want to make sure
> those frames don't have to get fragmented inside your network. Ideally,
> all hosts have the same MTU and no one has to worry about that, but in
> practice it seems to be better to push the fragmentation as close to
> the end user as possible. (That is, if a user on a 1500-MTU link makes
> a request to a host on a 4470 link, the response is 4470 up until the
> user's end network.) Of course, path MTU discovery makes this a moot
> point: the conversation will be held in 1500-byte packets.

Fortunately, hosts on FDDI rings are rare these days, but I'd love to see a
modern analysis of the packet sizes going through the Internet (everything
I've seen comes from the days when FDDI roamed the earth).

From everything I've seen out of IEEE, they continue to view Ethernet as a
"LAN Standard" and don't really want to consider its use in the core, even
for 10GigE. As long as 99.999% of packets are created at <= 1500 bytes,
and the links which pass them have an MTU that is equal or greater, nothing
really nasty happens. The argument is that "most people won't really
benefit from it, and it will introduce incompatibilities in MTU size, so
why should it be a standard", which misses the potential use in WAN links.

I don't expect to see many hosts w/10GigE cards for a while, but it would
be nice if Path MTU Discovery were a bit better.