UUNet 10Plus

To my knowledge:

                                |-------|
10Plus Cust---|netedge|---DS3---|  ATM  |
                                |       |
10Plus Cust---|netedge|---DS3---|Switch |--ATM--|Cisco 4700/7xx0|--(World)
                                |Cascade|
10Plus Cust---|netedge|---DS3---|       |
                                |-------|

I have no idea what the xBR is on the DS3 to the NetEdge.

Men,

From my recollection of NetEdge connections from WorldCom Santa Clara to
MFS at Market St. in San Jose, the NetEdges have to be used in pairs, like:

CPE -- 10baseT/FDDI ---|netedge|--- DS3 ---|netedge|--- 10baseT/FDDI -- switch

In other words, the NetEdges act as bridges, which have to be used in a pair
in order to turn the ethernet or FDDI connection into ATM over the DS3 and
back. The NetEdges are programmable, and I'm sure that bandwidth is one of
the things that's configurable.

We used to run these things fairly full and fairly hard for extensive
periods of time. I think we were able to get about 30Mbps full duplex out
of them. I doubt that dropping packets at ~6Mbps is the NetEdges' fault
(unless you had really old ones).

The fundamental problem at the upper bound is that you're taking IP,
encapsulating it in ethernet or FDDI, then segmenting and further
encapsulating that (IP inside ethernet/FDDI) inside ATM. The double
encapsulation exacts even more of a tax than the !53 bunch usually
complain about.
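To put rough numbers on that double tax, here is a back-of-the-envelope sketch. The LLC/SNAP and padding figures are my assumptions for RFC 1483 bridged-Ethernet encapsulation, not measured NetEdge behavior:

```python
import math

ATM_CELL = 53          # bytes on the wire per ATM cell
ATM_PAYLOAD = 48       # usable bytes per cell
AAL5_TRAILER = 8       # AAL5 CPCS trailer
ETH_OVERHEAD = 14 + 4  # Ethernet header + FCS (preamble/IFG ignored)
LLC_SNAP = 10          # assumed RFC 1483 bridged-Ethernet LLC/SNAP header

def atm_wire_bytes(ip_bytes):
    """Bytes on the DS3 for one IP packet bridged over AAL5."""
    pdu = ip_bytes + ETH_OVERHEAD + LLC_SNAP + AAL5_TRAILER
    cells = math.ceil(pdu / ATM_PAYLOAD)   # pad out to whole cells
    return cells * ATM_CELL

for ip in (40, 576, 1500):
    wire = atm_wire_bytes(ip)
    print(f"{ip:5d}-byte IP packet -> {wire} bytes on the wire ({wire / ip:.2f}x)")
```

Even at 1500-byte packets the combined tax under these assumptions is around 13%, and small packets fare far worse.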

In the end, in addition to needing more than the 30Mpbs bandwidth to the
MAE which the NetEdges gave us, the NetEdge solution was more trouble than
it was worth because of our inability to monitor the NetEdges for trouble
(not that they couldn't be monitored, but they were MFS owned gear). We
had to rely on Datanet to tell us what was going on, and many times the
problem resolution came back with a cause of FWT (fixed while testing),
which customers were always reluctant to accept.

If you're interested in a second opinion, you might try contacting NetEdge
directly.

good luck,
-peter

Those not interested in Ethernet to ATM control D now...

We are trialing a very similar product so I have been following this thread
closely. Peter's insight is useful, thanks. Before we go too hard on NetEdge,
however, we should understand that there are a lot of options that are used
in deploying a service like this. In our case we run the NetEdge in routed
mode and leave the DS-3 completely open (Deepak suggested UUNet might also).
This ATM flow then goes directly to our GigaRouter over an ATM PVC (DS-3 to
OC-3). (FYI, we use the EDGE 40 model.)

This eliminates ATM overhead as an issue on the local loop, so any loss
would have to be "network" related in the upstream direction. The upstream
flow is metered by the limitation of the Ethernet access on the customer
premise, and although the EDGE 40 has lots of buffers, I would expect few
are in use in the upstream direction except to manage SAR/processor
pipeline delay.

However, this overpowered connection, while generally good for the customer
in the upstream direction, may pose a problem for the downstream direction,
where an open DS-3 can blast into the Ethernet. (Even if the DS-3 local
loop is metered (paced) to match Ethernet speeds, it should not matter much,
since we will just be pushing the buffering problem around.)

Therefore, the ATM to Ethernet buffer is the key. The EDGE 40 has a pretty
deep buffer pool (about 2 Mbytes I'm told), so I would expect a pretty big
burst could be tolerated. I would like to know if any of the folks trialing
the service were able to determine if their loss/throughput problems were
upstream, downstream or bidirectional.
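As a quick sanity check on how far that buffer goes when an open DS-3 blasts into 10 Mbps Ethernet, here is the arithmetic (rates rounded; the 2 Mbyte pool size is the hearsay figure above):

```python
DS3_MBPS = 44.736                # approximate DS-3 payload rate, ATM overhead ignored
ETH_MBPS = 10.0                  # 10baseT drain rate
BUFFER_BYTES = 2 * 1024 * 1024   # ~2 Mbyte pool, per the figure above

# net inflow when the DS-3 runs flat out into the Ethernet port
fill_rate = (DS3_MBPS - ETH_MBPS) * 1e6 / 8      # bytes/second
seconds_to_overflow = BUFFER_BYTES / fill_rate
print(f"buffer absorbs a full-rate burst for ~{seconds_to_overflow * 1000:.0f} ms")
```

So a sustained full-rate downstream burst overruns even a 2 Mbyte pool in roughly half a second; anything longer than that has to drop.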

Regards,

Mike Gaddis
EVP & CTO
SAVVIS Communications Corporation

Peter Kline wrote:

> Men,
>
> CPE -- 10baseT/FDDI ---|netedge|--- DS3 ---|netedge|--- 10baseT/FDDI -- switch
>
> In other words, the NetEdges act as bridges, which have to be used in a pair
> in order to turn the ethernet or FDDI connection into ATM over the DS3 and
> back. The NetEdges are programmable, and I'm sure that bandwidth is one of
> the things that's configurable.

That's the connection we have alright, but MFS/UUNet says they cannot
limit the amount of bandwidth on it, and that if they gave us a 100Mbps
handoff off the NetEdge box, then we'd get 100Mbps off it and there was
nothing they could do. My response was why not provision the ATM bridge
to 10-13Mbps, and use that to limit the data throughput? Seems that would
work, but they said no go. Frustrating.
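For what it's worth, the limit being asked for is conceptually just a token bucket on the PVC. A minimal sketch of the idea (the rate and burst depth here are illustrative, not actual NetEdge configuration):

```python
class TokenBucket:
    """Generic token-bucket shaper: what 'provision the bridge at ~12 Mbps'
    amounts to conceptually."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0    # refill rate in bytes per second
        self.depth = burst_bytes      # maximum burst allowance
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, pkt_bytes, now):
        """Return True if pkt_bytes may be sent at time `now` (seconds)."""
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

shaper = TokenBucket(rate_bps=12_000_000, burst_bytes=64_000)
# 50 back-to-back 1500-byte frames at t=0 drain the burst allowance:
sent = sum(shaper.allow(1500, 0.0) for _ in range(50))
print(sent)   # only 42 of the 50 fit within the 64 KB burst allowance
```

Whether the NetEdge exposes anything like this per-PVC is exactly the question for the vendor.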

> We used to run these things fairly full and fairly hard for extensive
> periods of time. I think we were able to get about 30Mbps full duplex out
> of them. I doubt that dropping packets at ~6Mbps is the NetEdges' fault
> (unless you had really old ones).

Yes, it was an old one, and after months of complaining they finally
delivered a new one yesterday morning. It is working MUCH better, but as
soon as the link approaches 6Mbps or more, it starts choking hard.

> The fundamental problem at the upper bound is that you're taking IP,
> encapsulating it in ethernet or FDDI, then segmenting and further
> encapsulating that (IP inside ethernet/FDDI) inside ATM. The double
> encapsulation exacts even more of a tax than the !53 bunch usually
> complain about.
>
> If you're interested in a second opinion, you might try contacting NetEdge
> directly.

Indeed. That's what I plan on doing today... Thanks for the input.

> good luck,
> -peter

Joe Shaw - jshaw@insync.net
NetAdmin - Insync Internet Services
"Learn more, and you will never starve." - Paraphrase of Lee