With all of the problems with MAE-EAST.....
Any plans from anyone to create an ATM exchange point in the DC area?
-Adam Hersh
Given the latency we've seen over some ATM backbones, I certainly hope not. I
agree with Paul that MTU sizes in gigabit ethernet make it very
attractive as a next step. I'd certainly be interested in seeing anything
anyone has on fragmentation-related delays going from a large-MTU switched
environment to a small-MTU switched environment, though. Overall, I've
been less than pleased with latency when I've been transited that way.
Paul
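For a rough sense of the fragmentation cost Paul is describing, here is a
minimal sketch (purely illustrative Python; the 4470-byte figure is the
common FDDI-era MTU, and the arithmetic is standard RFC 791 fragmentation,
not a measurement from any exchange):

import math

IP_HEADER = 20  # bytes, assuming no IP options

def fragment_count(datagram: int, mtu: int) -> int:
    """Fragments needed to carry one datagram across a link with the given MTU."""
    payload_per_frag = (mtu - IP_HEADER) // 8 * 8   # fragment offsets come in 8-byte units
    return math.ceil((datagram - IP_HEADER) / payload_per_frag)

big, small = 4470, 1500
frags = fragment_count(big, small)
extra = (frags - 1) * IP_HEADER
print(f"{big}-byte datagram -> {frags} fragments at a {small}-byte MTU "
      f"({extra} extra header bytes, plus per-fragment forwarding work)")

Each fragment also costs the boundary router a forwarding decision and the
receiver a reassembly buffer, which is one plausible source of the latency
Paul mentions.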
I can't think of a reason why somebody would do that.
All of the problems that MAE-EAST is having are a clear sign that
providers need to move their big peers off of the public fabric and
onto private interconnects. Aside from the engineering problems, it
just seems so difficult to justify having such an important part of a
backbone's existence controlled by a third party. As these recent
problems have illustrated, that third party is either not
capable or not willing to fix the problems. Either way, the end
result is the same.
Alec
Adam Hersh wrote:
With all of the problems with MAE-EAST.....
Any plans from anyone to create an ATM exchange point in the DC area?
-Adam Hersh
Don't know.... However, OneCall has a (6 Gb/s) ATM switch in there
(MAE-E); if anyone would like to pull a physical local connection, we could
exchange without all the FDDI distortion. I have support for UBR, VBR-RT,
VBR-NRT, and CBR.
We have a few OC3 ATM ports, and 1 DS3 ATM port available.
More, if there is demand...
This is an LS1010, not one of our *big* switches,
but it could be..... if enough people wanted it.
Ports are *free* (if you peer with us), but MFS local physical
rules apply.
BTW: Anyone pulling over can exchange across the backplane with
*each other* at no extra charge (until MAE-E gets better ;).
Email US: rirving@onecall.net
Alec H. Peterson wrote:
> With all of the problems with MAE-EAST.....
>
> Any plans from anyone to create an ATM exchange point in the DC area?

I can't think of a reason why somebody would do that.
All of the problems that MAE-EAST is having are a clear sign that
providers need to move their big peers off of the public fabric and
onto private interconnects. Aside from the engineering problems, it
just seems so difficult to justify having such an important part of a
backbone's existence controlled by a third party. As these recent
problems have illustrated, that third party is either not
capable or not willing to fix the problems. Either way, the end
result is the same.
I don't think that the problems of MAE-EAST necessarily indicate
problems with the exchange point concept in general. Others have
indicated satisfaction with exchange points run by other parties. What
the problems of MAE-EAST indicate to me is that FDDI/GigaSwitches are
not the appropriate technology for MAE-EAST-sized exchanges and that MFS
is the wrong company to be running an exchange point.
It would appear to me that there is an opportunity for a new player to
get into the exchange point game.
Since ATM seems to be a controversial choice for building exchange points,
has anyone thought about using fast ethernet with an eventual transition
to gigabit ethernet once it becomes more mainstream?
Jeff
I don't really fault MFS for these problems. Show me another
exchange that is trying to pass as much traffic as MAE-EAST is and
still functioning properly.
Plus which, I don't know of any major players who are overly happy
with public exchanges. That's why they are going to private
interconnects.
Alec
Yes, but it will not be airplane proof, and will be in a parking garage.
-Nathan
Nathan Stratton wrote:
Yes, but it will not be airplane proof, and will be in a parking garage.
Speaking of airplane proof, has anyone bought that surplus nuclear
missile silo in Texas and turned it into an exchange point?
Jeff
YES I agree 100%, but people still like to be at a public NAP. It has
turned into a sales thing, and people will pay to be at one.
-Nathan
Alec H. Peterson wrote:
I don't really fault MFS for these problems. Show me another
exchange that is trying to pass as much traffic as MAE-EAST is and
still functioning properly.
CHI is pretty close. Also, check Pac-Bell.
Smooth surf.
Richard Irving wrote:
Paul D. Robertson wrote:
>
>
> > With all of the problems with MAE-EAST.....
> >
> > Any plans from anyone to create an ATM exchange point in the DC area?
>
> Given the latency we've seen over some ATM backbones, I certainly hope not. I
> agree with Paul that MTU sizes in gigabit ethernet make it very
> attractive as a next step.
Our nets *Scream*.
> I'd certainly be interested in seeing anything
> anyone has on fragmentation related delays going from a large MTU switched
> environment to a small MTU switched environment though. Overall, I've
> been less-than-pleased with latency when I've been transited that way.
>
Uh... Doctor, it hurts when I do this.
Doctor: So don't do that!
Hint: it is *pretty* foolish to run mixed MTUs on a switched fabric.
*Most* people on ATM exchanges run with common MTUs.
Paul,
I have not spoken with you before, so I do not know if your
posting below is meant in a literal, non-facetious manner.
> With all of the problems with MAE-EAST.....
>
> Any plans from anyone to create an ATM exchange point in the DC area?
For what it's worth, I do understand that there is a plan to create
an ATM exchange point in the DC area, at speeds exceeding those
currently available.
> Given the latency we've seen over some ATM backbones,
The latency introduced in switched areas of a network is generally
conceded (by all but the zealots) to be less than that of comparable
layer-three forwarding topologies.
The latency seen over several providers' claimed ATM backbones
is generally attributable to an omission: they leave off one important
word -- shared. The latency about which I assume you speak is
caused by large amounts of queuing. This queuing is demanded by network
oversubscription. The latency introduced by the oversubscription
is consistent with any oversold network.
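As a back-of-the-envelope illustration of that point, mean delay in a
simple M/M/1 queue grows without bound as an oversold link approaches
saturation. This is a textbook model, not a claim about any particular
carrier's network:

def mm1_delay_ms(link_mbps: float, offered_mbps: float, packet_bits: int = 12000) -> float:
    """Mean sojourn time (queueing + transmission) in ms for an M/M/1 queue:
    W = 1 / (mu - lambda), with rates in packets per second."""
    mu = link_mbps * 1e6 / packet_bits      # service rate, packets/s
    lam = offered_mbps * 1e6 / packet_bits  # arrival rate, packets/s
    if lam >= mu:
        return float("inf")                 # oversold past capacity: unbounded queue
    return 1000.0 / (mu - lam)

# A DS3 (~45 Mb/s) carrying 1500-byte packets:
for load in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"{load:.0%} load -> {mm1_delay_ms(45, 45 * load):.2f} ms per hop")

At 50% load the per-hop delay is about half a millisecond; at 99% it is
nearly 27 ms, and every hop through the oversold fabric adds more.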
Bear in mind, the introduced latency is not something that they
directly control. However, by implementing their Internet
base level transit upon a shared network, such as ATM or FR, the
NSP/OSP is at the mercy of the carrier. And the carrier does so
like to squeeze money out of that capital investment of an ATM
network.
With dedicated circuitry (SONET/SDH), the bandwidth sold is readily
available -- in fact, fixed.
With shared networks, the bandwidth is available on a 'best effort'
basis, notwithstanding rare altruistic CIR and SCR implementations.
An ATM exchange point implemented by a third party should (and in
one case about which I have knowledge, most assuredly WILL)
disallow oversubscription within the L2 shared network.
This means that the guaranteed BW really is guaranteed, and the
latency induced by oversubscribing links will be zero, because the
links will not be oversubscribed (except maybe interswitch trunks,
which is part of what DEC offered in Pandora's box).
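The admission rule being described here amounts to simple bookkeeping; a
hypothetical sketch (the OC-3 payload figure is approximate, and the
numbers are invented for illustration):

OC3_MBPS = 149.76  # usable payload rate of an OC-3c, roughly

def admit(existing_scr_mbps: list[float], requested_scr_mbps: float,
          port_mbps: float = OC3_MBPS) -> bool:
    """Connection admission control: never guarantee more than the port carries."""
    return sum(existing_scr_mbps) + requested_scr_mbps <= port_mbps

booked = [45.0, 45.0, 34.0]   # SCRs already guaranteed on this port
print(admit(booked, 20.0))    # True  -- 144 Mb/s total still fits
print(admit(booked, 45.0))    # False -- would oversubscribe the port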
> I certainly hope not. I
> agree with Paul that MTU sizes in gigabit ethernet make it very
> attractive as a next step. I'd certainly be interested in seeing anything
> anyone has on fragmentation-related delays going from a large-MTU switched
> environment to a small-MTU switched environment, though.
Yes, this would be interesting.
> Overall, I've
> been less than pleased with latency when I've been transited that way.
I don't understand; the MTU with which I'm most familiar for the
ATM SAR frame is 4470. How does this induce latency as a
function of fragmentation?
Fragmentation occurs when a PDU is larger than the available PDU
space in the MTU of the transit or destination media, no?
-a
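For reference, RFC 791 fragmentation works in 8-byte offset units with a
more-fragments (MF) flag; a small illustrative sketch of how one 4470-byte
datagram splits at a 1500-byte MTU:

def fragments(total_len: int, mtu: int, header: int = 20):
    """Yield (offset_in_8_byte_units, payload_len, more_fragments) per fragment."""
    per_frag = (mtu - header) // 8 * 8
    payload = total_len - header
    offset = 0
    while payload > 0:
        take = min(per_frag, payload)
        payload -= take
        yield offset // 8, take, payload > 0
        offset += take

for off, length, mf in fragments(4470, 1500):
    print(f"offset={off:4d} len={length:4d} MF={int(mf)}")

The last fragment here carries only 10 bytes of payload behind a full
20-byte header, which is exactly the kind of waste being debated above.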
Noemi Berry wrote:
> > > I'd certainly be interested in seeing anything
> > > anyone has on fragmentation-related delays going from a large-MTU switched
> > > environment to a small-MTU switched environment, though. Overall, I've
> > > been less than pleased with latency when I've been transited that way.
>
> Uh... Doctor, it hurts when I do this.
> Doctor: So don't do that!
>
> Hint: it is *pretty* foolish to run mixed MTUs on a switched fabric.
> *Most* people on ATM exchanges run with common MTUs.

Getting to my NANOG mail kinda late here....
I will say. *grin*
But WHAT are you guys talking about WRT MTUs through an ATM switch?
The MTUs almost always refer to the layer-3-and-up MTUs, which the ATM
switches themselves have no visibility into.
We are discussing Router ATM cards. (Layer 2/3)....
It's more accurate to say that
most people on *any* exchange run with common MTUs.
Don't I wish.
If you're going from a "large MTU switched environment" to a "small MTU
switched environment", then you must be passing through a router or some
other device that understands and sets its own MTU. The ATM layer has
nothing to do with that. MTU issues are independent of what layer 2
technology is used to connect the peers.
The point? You would have a hard time setting an MTU of 4470 on
ethernet...
The object of the game? If you must fragment, do it *after*
transmission (if possible).
(Less WAN bandwidth than otherwise.)
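The WAN-bandwidth argument is just header arithmetic. A quick sketch
comparing the two orderings for a single 4470-byte datagram (illustrative
numbers only):

import math

def wire_bytes(datagram: int, mtu: int, header: int = 20) -> int:
    """Total IP bytes on a link for one datagram at the link's MTU."""
    per_frag = (mtu - header) // 8 * 8
    frags = math.ceil((datagram - header) / per_frag)
    return datagram + (frags - 1) * header

print(wire_bytes(4470, 1500))  # 4530: fragmented *before* the WAN hop
print(wire_bytes(4470, 4470))  # 4470: sent whole, fragmented at the far edge

Sixty bytes per datagram is modest, but it is pure overhead on the most
expensive link in the path.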
I know a thing or two about ATM, but not NAPs, so please correct me if my
terminology is screwy and I missed something.
Try reading RFC 1626. We are talking about the MTU for the IP PLCP card
into the ATM fabric. FDDI to ATM to Ethernet: try it, you won't like it.
PS: Don't forget ATM is not the only switched fabric in town....
You will find there is a whole PDU and SDU max-size declaration for
SAR to use on AAL5-based framing. However, these are *not* the limits
you want to be using.....
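For reference, the AAL5 framing arithmetic behind those PDU/SDU limits is
simple: an 8-byte trailer, padding out to a whole number of 48-byte cell
payloads, and 53-byte cells on the wire. A small sketch (9180 bytes is the
RFC 1626 default MTU):

import math

AAL5_TRAILER = 8
CELL_PAYLOAD = 48
CELL_SIZE = 53

def aal5_cells(pdu_bytes: int) -> tuple[int, int]:
    """Return (cell_count, wire_bytes) for one AAL5-framed PDU."""
    cells = math.ceil((pdu_bytes + AAL5_TRAILER) / CELL_PAYLOAD)
    return cells, cells * CELL_SIZE

for pdu in (1500, 4470, 9180):  # Ethernet, FDDI-class, RFC 1626 default
    cells, wire = aal5_cells(pdu)
    print(f"{pdu:5d}-byte PDU -> {cells:3d} cells, {wire:5d} bytes on the wire")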
I know non-mixed MTUs seem obvious, but some don't get it. On the
other hand, I have neighbors who have the CHI-NAP fabric declared as
a /24, and *cannot* be made to understand why this is bad.
(One quote: It's the internet, I can do as I want!)
Sigh........
Cheers !
Noemi Berry wrote:
> Our nets *Scream*.
> Another question, in response to another email you'd sent to NANOG
> and which I subsequently deleted:
:(
You mentioned that your ATM switch at MAE-EAST supports CBR, VBR-RT,
VBR-NRT, and UBR. What ATM connection type do you use to connect peers?
Well, at MAE-E, FDDI. (We have a pending deal to move some exchange
neighbors to direct ATM OC-3/DS3, to route around the MAE.)
However, passing back into our national fabric, we tend to run
VBR-NRT, with a few UBRs and a few RT-VBR tunnels.
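For readers unfamiliar with how a switch holds those VBR contracts to
their word: conformance is typically policed with the GCRA from the ATM
Forum traffic-management spec. A minimal sketch of the virtual-scheduling
form (the parameter values are made up for illustration):

def gcra(arrival_times, increment, limit):
    """Yield True per conforming cell (GCRA, virtual-scheduling form).

    increment: expected inter-cell time, i.e. 1/rate (e.g. 1/SCR).
    limit: tolerance for bursts (e.g. derived from MBS)."""
    tat = 0.0  # theoretical arrival time
    for t in arrival_times:
        if t < tat - limit:
            yield False             # arrived too early: violates the contract
        else:
            tat = max(t, tat) + increment
            yield True

print(list(gcra([0, 1, 2, 3], increment=1.0, limit=0.5)))        # steady: all conform
print(list(gcra([0, 0.1, 0.2, 0.3], increment=1.0, limit=0.5)))  # burst: mostly policed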
Actually, I'm probably misunderstanding what you use your ATM switch
for. What is its function at the NAP, anyway?
It becomes an exchange fabric in and of itself. See Ameritech VNAP
and OneCall's IndyX offerings.