Inter-exchange media types

Since large amounts of traffic on the Net originate from
modems which are typically plugged into terminal servers,
virtually all of which have ethernet interfaces, very large
amounts of internet traffic have MTUs smaller than the FDDI
maximum.

The locations I know of that have FDDI and state the need
for large MTUs are the large sites, which often have discounted
T-3 service provided by CO-REN or direct federal subsidies.

(Also, dialup traffic would seem to be the area of most rapid
growth.) I know there was some traffic research done at the
Sprint NAP at one point, but I don't know if it included any
kind of packet-size histogram.

I've had several people assert that FDDI frame sizes are in fact
common, or at least that DS-3 connected customers desire them
(who are these people again?).

I won't say it isn't true, because I don't have any real data, but
I don't see any evidence that anyone else has any idea either.
(With stuff plugged into the GigaSwitch, I don't see an easy
way to find out either; perhaps we could file a FOIA request
with the NSA :-) )

I believe the following to be true:
1. If there is little traffic over 1500-byte MTU, then
  switched, 100 Mbps, full duplex ethernet will be cheaper,
more scalable, and perform better than switched full duplex FDDI.
  A. Ethernet hardware is more common, thus greater economies of scale.
  B. Ciscos have full duplex ethernet now.
  C. The FEP card has TWO 100 Mbps ports compared to one for
  FDDI (and costs less).
  D. I feel certain that far more packets have been switched
in Cisco Cat 5000s than in DEC GigaSwitches, because real live
networks other than the Internet also use them. Cisco could
probably provide some sales numbers to compare with DEC if
anyone is interested.
  E. If you have lots of 10 Mbps switched connections going
into the FDDI, you have the additional overhead of translational
bridging.

I'd still like to see a number that shows I am wrong, even if
it is not very meaningful.


I think Peter was too brief to be understood by all. Let me try
to expand on his major point (buffering requirements). First,
however, to this:

Since large amounts of traffic on the Net originate from
modems which are typically plugged into terminal servers,
virtually all of which have ethernet interfaces, very large
amounts of internet traffic have MTUs smaller than the FDDI
maximum.

[Continues argument in the line of "if little traffic uses more
than 1500 bytes MTU, ethernet will be better/cheaper/etc."]

I would claim that the average packet size doesn't really matter
much -- the average packet size is usually on the order of 200-300
bytes anyway. However, restricting the MTU of an IX to 1500
bytes *will* matter for those fortunate enough to have FDDI and
DS3 (or better) equipment all the way, forcing them to use
smaller packets than they otherwise could. Some hosts get
noticeably higher performance when they are able to use FDDI-
sized packets compared to Ethernet-sized packets, and restricting
the packet size to 1500 bytes will put a limit on the maximum
performance these people will see. In some cases it is important
to cater to these needs.
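To put rough numbers on the FDDI-sized-packet advantage, here is a
quick sketch. The header size (40 bytes of IP + TCP, no options) and
the MTUs (1500 for ethernet, 4352 for IP over FDDI per RFC 1390) are
the standard figures; the 10 MB transfer is just an illustrative
example, not a measurement from anywhere.

```python
# Per-packet header overhead for a bulk transfer at ethernet vs FDDI
# MTU.  Assumes plain IP + TCP headers (40 bytes, no options).

IP_TCP_HEADERS = 40  # bytes of IP + TCP header per packet

def transfer_overhead(total_bytes, mtu):
    """Return (packets needed, total header bytes) for a transfer."""
    mss = mtu - IP_TCP_HEADERS        # payload carried per packet
    packets = -(-total_bytes // mss)  # ceiling division
    return packets, packets * IP_TCP_HEADERS

for mtu in (1500, 4352):              # ethernet vs FDDI (RFC 1390)
    pkts, hdrs = transfer_overhead(10_000_000, mtu)
    print(f"MTU {mtu}: {pkts} packets, {hdrs} header bytes")
```

Fewer, larger packets mean fewer per-packet interrupts and fewer
header bytes on the wire, which is where the higher performance of
FDDI-sized frames comes from.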

The claim that switched fast full-duplex Ethernet will perform
better than switched, full-duplex FDDI for small packets doesn't
really make sense -- not to me at least. I mean, it's not like
FDDI doesn't use variable-sized packets...

Now, over to the rather important point Peter made. In some
common cases what really matters is the behaviour of these boxes
under high load or congestion. The Digital GigaSwitch is
reportedly able to "steal" the token on one of the access ports
if that port sends too much traffic to another port where there
currently is congestion. This causes the router on the port
where the token was stolen to buffer the packets it has to send
until it sees the token again. Thus, the total buffering
capacity of the system will be the sum of the buffering internal
to the switch and the buffering in each connected router. I have
a hard time seeing how similar effects could be achieved with
ethernet-type switches. (If I'm not badly mistaken, this is a
variant of one of the architectural problems with the current ATM
based IXes as well.)
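A toy model of this buffering point, just to show the arithmetic:
under backpressure the usable buffer pool is the switch's own memory
plus the output buffers of every attached router. All the sizes below
are invented placeholders, not specs for any real switch or router.

```python
# Toy model: total buffering available under GigaSwitch-style
# backpressure.  All figures are made-up illustrative numbers.

switch_buffer = 4_000_000            # bytes internal to the switch
router_buffers = [1_000_000] * 8     # eight attached routers' buffers

# With backpressure, packets queue in the routers while the token is
# withheld, so their buffers add to the switch's own:
total = switch_buffer + sum(router_buffers)
print(total)

# Without backpressure (a plain ethernet switch), excess traffic is
# simply dropped once switch_buffer fills.
```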

Thanks to Curtis Villamizar it should be fairly well known by now
what insufficient buffering can do to your effective utilization
under high offered load (it's not pretty), and that the
requirements for buffering at a bottleneck scale approximately
with the (end-to-end) bandwidth X delay product for the traffic
you transport through that bottleneck.
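That rule of thumb is easy to compute; a minimal sketch follows. The
DS-3 rate and the 70 ms round-trip time are illustrative assumptions,
not figures from the discussion above.

```python
# Bandwidth x delay product: roughly the buffering a bottleneck needs
# to keep one long-running TCP flow at full rate.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product, in bytes."""
    return int(bandwidth_bps * rtt_seconds / 8)

# Example: a DS-3 (~45 Mbps) path with a 70 ms round-trip time
# needs on the order of 400 KB of buffering at the bottleneck.
print(bdp_bytes(45_000_000, 0.070))
```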

So, there you have it: if you foresee that you will push the
technology to its limits, switched ethernet (fast or full
duplex) as part of a "total solution" for an IX point seems to be
at a disadvantage compared to switched FDDI as currently
implemented in the Digital GigaSwitch.

This doesn't render switched ethernet unusable in all
circumstances, of course.


- Havard