These NetEdges seem to have three different possible operating states:
completely working (which doesn't happen often enough); broken (often, right
out of the box); and kind of working (which happens all too often).
That these things work at all under load is nothing short of miraculous.
Plenty of mixed-media-bridging devices have been tried in the past
and have failed miserably, most notably:
-- Magnum 100s (ethernet<->funny framing<->ethernet),
which don't meet the standard IFG specification;
jam too many back-to-back packets at them, and they will
lose most of a burst of traffic (see the back-of-the-envelope
numbers after this list). This hurt the old MAE-EAST.
-- various ADSUs (FR<->ATM<->FR),
which lack buffering and perform SAR too slowly;
these delayed the PAC*Bell and Ameritech NAPs for months,
and some of them still exhibit flakiness under load.
-- a horrible idea (ethernet<->MFS ATM<->ethernet)
which didn't work under load; perhaps a veteran
user of these neato little things could explain
the failure mode to anyone interested. I remember
one service provider with a national "10Mbps"
ethernet backbone who had various horrible problems,
including the breakdown of the LIS (such that some
routers couldn't talk to others), connections
going simplex, frame loss, and other wonderful things.
-- another horrible idea (FR<->MFS ATM<->FR)
this was pretty neat; it had some of the problems above
plus a brand new one: the ADSU would strip the
FR frame checksum, perform SAR, send the cells, and
the ADSU on the opposite end would reassemble the
frame and produce a correct FR checksum. All fine
and dandy, unless cells arrived out of order, SAR
was done wrong, or there was data corruption under
load in the DSU or in the network; the receiving
router saw a valid checksum either way (see the
sketch after this list). This interesting technology
advanced the state of IS-IS in one vendor's
software rather considerably.
-- mixed-media bridging (NetEdges, FDDI/Ethernet bridges):
these break in all sorts of interesting ways.
In particular, NetEdges have an annoying habit of
confusing FDDI stations in especially toxic ways,
and some FDDI/Ethernet bridges resemble roach motels:
frames check in, but they don't check out.
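
To put the IFG problem in perspective, here are the back-of-the-envelope
numbers, as a small Python sketch. The constants are the standard 802.3
figures for the 10Mbps ethernet the old MAE-EAST ran; nothing here is
measured from a Magnum itself.

    # Worst case a bridge on 10Mbps ethernet has to survive:
    # minimum-size frames arriving back-to-back at the standard
    # inter-frame gap.
    BIT_RATE       = 10_000_000   # bits per second
    IFG_BITS       = 96           # standard inter-frame gap, in bit times
    PREAMBLE_BITS  = 64           # preamble plus start-of-frame delimiter
    MIN_FRAME_BITS = 64 * 8       # minimum 64-byte frame

    per_frame = IFG_BITS + PREAMBLE_BITS + MIN_FRAME_BITS
    print(IFG_BITS / BIT_RATE * 1e6)   # 9.6 microseconds between frames
    print(BIT_RATE / per_frame)        # about 14880 frames per second, worst case

    # A box that needs more than 9.6us to recover between frames, or that
    # can't forward roughly 14.9k frames per second, will drop most of a
    # line-rate burst.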
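
And to make the checksum-regeneration problem concrete, here is a
minimal sketch of why the receiving router can get a frame that
checksums perfectly and still contains garbage. (Plain Python, with
CRC-32 standing in for the real FR frame checksum; the ADSU internals
are only mimicked, not reproduced.)

    import zlib   # CRC-32 is just a stand-in for the FR frame checksum

    def with_fcs(payload: bytes) -> bytes:
        # what the sending router does: append a checksum to the frame
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def fcs_ok(frame: bytes) -> bool:
        # what the receiving router does: verify the checksum
        return zlib.crc32(frame[:-4]).to_bytes(4, "big") == frame[-4:]

    original = with_fcs(b"some perfectly innocent routing traffic")

    stripped = original[:-4]              # near-end ADSU strips the FCS, does SAR
    damaged = bytearray(stripped)
    damaged[5] ^= 0xFF                    # a cell gets corrupted or misordered in transit
    delivered = with_fcs(bytes(damaged))  # far-end ADSU reassembles, makes a fresh FCS

    print(fcs_ok(delivered))              # True: the checksum verifies
    print(delivered == original)          # False: but the payload is garbage

The corrupted frame sails right past the receiving router's checksum
check, and whatever was inside (say, a routing update) is left to sort
out the mess.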
Essentially, most of these things worked more or less perfectly
under low load, but when faced with the kind of traffic one sees
at a busy exchange point, most bridging technology failed in
really awkward ways.
My advice is that if you can avoid talking to something across
a bridge at an exchange point, you should do so. Keeping it
simple is a bunch more expensive, but probably not as expensive
as a very public failure.
Finally, why is it that most vendors never test their products in
a serious battlefield environment like a medium-to-huge ISP?
These places tend to be excellent worst-case testing grounds.
Sean.