Does anyone have any experience with peering at this NAP? The docs I read
say I can connect to the NAP in three flavors: ATM, POS, and Ethernet. But
the technical side of the house tells me I must co-locate and can only
connect via Ethernet for peering.
Also, is there any way to find out who is currently peering there today?
Their web page doesn't tell me much. I would also be curious to know if
there is a peermaker-type site, or how easy it is to set up peering with
someone at this NAP.
Currently, it is an Ethernet NAP, similar to PAIX. I doubt you'll see any
more ATM NAPs. Basically, you can pull whatever kind of circuit you want into
the facility, then terminate it on your own router. Then you run Ethernet to
the switching fabric.
I don't think they have a peermaker app, but it has been discussed for NOTA,
so I'm sure they'll build one if there is demand. The only real reason to
have one at an ethernet NAP is for building private VLANs, which may or may
not get heavily used. As NOTA is run as a collaboration between Terremark
and an industry consortium (which you become a member of when you come into
the NAP), they tend to be quite responsive.
BTW, the alternative to placing your equipment at a NAP like this is to get
a metro ethernet link from someone like Yipes or Telseon, and locate your
gear somewhere else. Epik provides metro ethernet into that facility, and
others may as well. Of course, you would still need to have a router
somewhere in the Miami metro area.
I agree that ATM NAPs are probably on their way out over time.
GigE switches are a whole lot cheaper than ATM switches
and are currently scaling much better than ATM switches.
For example, at least one vendor has a GigE switch that has
256 Gb/s of non-blocking switch fabric and can support up to
192 GigE ports or 1440 10/100baseT ports (or some mixture in
between if preferable). 10 GigE products will likely be available
from multiple vendors by end of this calendar year.
Folks building Ethernet based NAPs might want to consider
building them with Ethernet equipment that can support non-IEEE-standard
large frame sizes (e.g. 4K is popular with POS-oriented folks;
9K is popular with folks who run server farms) in the interest
of side-stepping PMTU-Discovery/IP-Fragmentation issues. Many
GigE or 10 GigE vendors support the 9K MTU on switched Ethernet
ports (jumbo frames generally won't work if the link is half-duplex,
btw, because of adverse interactions with CSMA/CD timers).
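The efficiency argument for larger frames is easy to put numbers on. A
back-of-the-envelope sketch (illustrative only: it counts 18 bytes of
Ethernet header+FCS and 40 bytes of IP+TCP headers per frame, and ignores
preamble and inter-frame gap):

```python
ETH_OVERHEAD = 18    # 14-byte Ethernet header + 4-byte FCS
IP_TCP_HEADERS = 40  # 20-byte IP + 20-byte TCP header, no options

def goodput_fraction(mtu):
    # Fraction of on-the-wire bytes that are TCP payload when sending
    # full-size frames at a given MTU.
    return (mtu - IP_TCP_HEADERS) / (mtu + ETH_OVERHEAD)
```

At a 1500-byte MTU roughly 96% of the wire bytes are payload; at 9000
bytes it's over 99%, and the per-packet processing load drops as well.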
Similarly, folks connecting at such NAPs might want to
find out if large non-IEEE-standard frames are supported in the
switch fabric. If yes, those folks might want to think about
whether they want to deploy IP routers that can support such
large frames (e.g. Juniper's current GigE interface reportedly
supports 9K frames if configured that way) for the same reasons.
Your mileage might vary, this is just something to mull over.
Any flames to /dev/null please.
You get your high-end GigE switch and set it to 4470 MTU. You use either a
7200 with PA-GE, or a GSR with the 1GE card (avoid the 3GE here; its 2450
MTU doesn't cut it).
Juniper supports jumbos as well. Most vendors do. It works well in my
experience, but my experience is also that most people want to talk using
Ciscos anyway (at least over here).
Apart from the obvious speed issue, FDDI worked well. I see no major
difference to what GigE has to offer. The only thing it lacks is the A+B
protection, and I don't see that as such a big issue, I mean, how often is
it the NAP equipment that fails compared to all other fault factors?
We are currently using SRP OC12 in Stockholm, with 22 nodes connected, as
an exchange point. My personal opinion is that SRP is not a good way to
connect a lot of people. Packets have to traverse a lot of nodes to get
where they're supposed to go, with all the latency and inefficient use of
bandwidth that implies. Price is also a heavy issue with SRP, especially
with the OC48 cards listing at something like $100k, and you have to get a
GSR to put them in.
ATM is an option. Also expensive.
GigE is an option. Cheap, efficient, supported by loads and loads of
vendors. Just remember to do jumbos and you're fine.
What else is there? POS, but that doesn't scale very well when you're
going fully meshed. What else is there? I can't think of anything.
Maybe 4470, but some operators prefer ~9K. Different operators
might have different circumstances and hence different opinions.
If I were running an exchange point, I'd configure the exchange
switch for ~9K and leave the choice of MTU on the connecting device
up to my customers -- because that would maximise the potential
customer base and I am a capitalist.
Btw, some old GSR 1 GigE interfaces did not support anything above
IEEE-standard MTU. I infer from your note that there are now some
GSR interface cards that do support larger than IEEE-standard MTU.
I haven't used any of the supposed new cards, but deployed several
of the older ones that didn't support anything above IEEE-standard
MTU on GigE. Or maybe it was an IOS thing that changed in the
meantime.
I just have to speak up that this is all very well and good, but
it's also a good way to make a NAP that doesn't work.
_ALL_ devices on a layer-2 fabric need to have the same MTU. That
means if there are any FastEthernet or Ethernet connected members
1500 bytes is it. It also means if you pick a larger value (4470,
9k) _ALL_ members must use the same value.
If you don't, the behavior is simple. A 9k-MTU GigE host ARPs for a
1500-byte FastEthernet host. Life is good. The TCP handshake
completes; life is good. TCP starts to send a packet, putting a
9k frame on the wire. Depending on the switch, either the switch
drops it as over-MTU for the FastEthernet port, or the FastEthernet card
cuts it off at 1500 bytes and counts it as an errored frame
(typically with a jabber or two afterwards), and no data flows.
A larger MTU is a fine plan, but make sure if you try that anywhere
that the switch is set for the larger sizes and all devices are
capable of that larger frame size, or you're in trouble.
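The failure mode described above can be sketched as a toy model (the
numbers are illustrative, not any particular switch's behavior): a
layer-2 switch does not fragment, so an IP packet larger than the egress
port's MTU simply never arrives.

```python
# Toy model of a layer-2 fabric: no fragmentation at layer 2.
def egress_ok(packet_len, egress_mtu):
    return packet_len <= egress_mtu

# ARP and the TCP handshake are small frames, so the session comes up:
assert egress_ok(64, 1500)
# ...then the first full-size 9k data packet toward the FastEthernet
# member is dropped (or truncated and counted as an errored frame):
assert not egress_ok(9000, 1500)
```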
Well, the reasoning "why" is a bit more complex than that... The
TCP handshake will result in the FE host saying "hey, I can do a
max 1460 byte mss". The other host with a larger MTU won't send
larger packets than remote MSS + 40 bytes header over that TCP
connection, end of story.
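The MSS clamping is simple arithmetic. A small sketch of the rule
(illustrative helper names, assuming 40 bytes of IP+TCP headers and no
options):

```python
IP_TCP_HEADERS = 40  # 20-byte IP + 20-byte TCP header, no options

def tcp_mss(mtu):
    # MSS advertised in the SYN: local MTU minus the IP+TCP headers.
    return mtu - IP_TCP_HEADERS

def max_packet(local_mtu, remote_mss):
    # A sender never puts more payload in one segment than the smaller
    # of its own MSS and the MSS the peer advertised.
    return min(tcp_mss(local_mtu), remote_mss) + IP_TCP_HEADERS
```

So a 9K-MTU host talking directly to a 1500-MTU host never emits a
packet larger than 1500 bytes on that connection.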
Now, sure, you certainly have to have agreements between devices
in various contexts, but what is and isn't a "working" configuration
and why is a bit more complex. A can't-go-wrong simplification,
of course, is "always make sure all devices on the same L2 have
the same MTU"...
The two hosts talk and get an MSS of 9k(ish); the SYN, SYN+ACK, and ACK
are all small and pass, then the first data packet tries to be 9k and
gets lost between the fabric and the exit (FE) router.
Yes, if the host talked to the FE device itself, MSS should prevent
any issues (e.g., for BGP sessions across the exchange).
Only devices exchanging frames with each other need have the same
MTU. One could easily imagine one 802.1q VLAN with MTU X and
a different 802.1q VLAN with MTU Y all on the same switch,
to give a trivial example. Some exchanges already use VLANs
with their GigE switches for a variety of business reasons.
Which is exactly how we've done it with our testbed here in Stockholm. One
VLAN where you are REQUIRED to handle 4470 MTU to be able to join, and one
MTU-1500 VLAN. There is some equipment that'll fragment IP when doing L2
forwarding to a lesser-MTU port (FE, for instance) -- one of the old
Cabletron companies -- just like the old DEC GIGAswitches used to do, but I
prefer to have everybody in the same L2 domain talk the same size MTU.
Seems much cleaner, and most vendors support at least 4470 MTU on their
GigE ports these days.
Or at least where you can somehow specify different MTUs for different IPs
within a subnet. Then including the MTU you can handle could be part of
any BGP peer setup.
As near as I can tell, nearly every vendor is supporting
~9K MTU on GigE/10 GigE ports. Not all of the vendors with ~9K MTU
support permit configuration of 4470 byte MTU -- a number of them
have binary configuration knobs for ~1500 byte/~9K byte MTU, though
clearly a number of vendors do permit configuration of a 4470 MTU.
In some cases that configuration constraint might just be a software
issue. In other cases, there might be hardware issues. The MTU size
is related to certain other timer parameters and isn't really entirely
an independent variable, btw. Folks buying GigE interfaces might
want to investigate these factors as part of their product evaluations.
I just have to speak up that this is all very well and good, but
it's also a good way to make a NAP that doesn't work.
[...]
If you don't, the behavior is simple. A 9k MTU GigE arps for a
1500 byte FastEthernet host. Life is good. The TCP handshake
completes, life is good. TCP starts to send a packet, putting a
9k frame on the wire. [...]
Furthermore... Larger frames would be nice if all hosts supported them,
but the problem is that most end hosts cannot and probably will not ever
support so-called jumbo frames. What does having 9K ethernet frame
support at a NAP get us? If one end of the connection -- all those
1500-MTU end hosts -- doesn't support large frames, fragmentation may
have to occur somewhere, and that would probably be worse. With
fragmentation, the performance problem is pushed closer out to the edge,
and the edge is probably where the performance benefit is needed, so it
could be a step in the wrong direction.
Perhaps the one good approach to jumbo frames is to make use of the
networking layer and ensure hosts are doing Path MTU discovery to avoid
fragmentation.
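As a toy model of what PMTUD does (purely illustrative: the path is a
list of link MTUs, and an ICMP "fragmentation needed" is modeled as the
sender simply shrinking to the reported MTU and retrying):

```python
def discover_path_mtu(link_mtus, initial_size):
    # Sender transmits with DF set; the first router whose egress MTU
    # is too small answers with ICMP "fragmentation needed" and the
    # sender retries at the reported size, until a packet gets through.
    size = initial_size
    while True:
        for mtu in link_mtus:
            if size > mtu:
                size = mtu   # ICMP feedback: shrink and retry
                break
        else:
            return size      # reached the far end
```

The sender converges on the smallest MTU along the path without any
router ever having to fragment.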
You want to at least support 1600 or so ("baby jumbos") so people can
tunnel stuff without fragmentation if they like to. If you support 1600,
you're very likely to be able to support 4470 or larger, so why not do
that? Anyhow, you're removing one more reason for people to say
"ethernet is lame for NAPs".
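The 1600-byte figure falls out of simple overhead arithmetic. For
example, assuming GRE encapsulation (a 20-byte outer IP header plus a
4-byte GRE header, no key/sequence options -- the numbers are the
standard header sizes, the scenario is illustrative):

```python
# Hypothetical GRE-in-IP encapsulation overhead.
GRE_OVERHEAD = 20 + 4  # outer IP header + basic GRE header

def tunneled_size(inner_packet_len):
    # Size on the wire of an encapsulated packet.
    return inner_packet_len + GRE_OVERHEAD
```

A full 1500-byte customer packet becomes 1524 bytes once encapsulated:
too big for a standard 1500-byte link, but comfortably inside a
1600-byte "baby jumbo".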
If NAPs do not support jumbos, then end systems will never support them.
If all internet backbone infrastructure supports larger frames, then there
is at least a possibility that end systems will in the long run.
A lot of servers are being connected via GigE nowadays, and with jumbos
being a possibility, why not support it?
If NAPs do not support jumbos, then end systems will never support them.
Many end systems will never support jumbo frames, period. There are
lots of 10/100 Mb/s ethernet hosts that will not ever be upgraded.
A lot of servers are being connected via GigE nowadays, and with jumbos
being a possibility, why not support it?
Certainly. In a nutshell, it might be best to take steps to avoid
fragmentation elsewhere in the network. Perhaps a rule of thumb that
should be stressed is to use jumbo frames if you know for sure the other
end system(s) support it, otherwise default to 1500.
Furthermore... Larger frames would be nice if all hosts supported them,
but the problem is that most end hosts cannot and probably will not
ever support so-called jumbo frames.
WindowsNT/Windows2000 [1] and a lot of UNIX servers/hosts do support
9K frames today. Most GigE PCI NIC cards support them. Most of
the commodity GigE ASICs support the 9K MTU. You are correct that
not all hosts/servers support them today. In any event, any size
of jumbo Ethernet frame will only work over an all-switched layer-2
network. My guess is that the trend over time will be for more and
more hosts to support the ~9K MTU. YMMV.
What does having 9K ethernet frame support at a NAP get us?
Some folks run their end-to-end network with a 9K MTU. So having
it at the NAP means they avoid potential fragmentation in their
network. Certainly my employer would prefer a WAN network provider
that supported the 9K MTU because it would improve NFS performance
among our several sites as compared with a smaller end-to-end MTU.
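The per-packet savings are easy to estimate. A rough sketch (assuming
40 bytes of IP+TCP headers and ignoring retransmissions):

```python
import math

IP_TCP_HEADERS = 40  # 20-byte IP + 20-byte TCP header, no options

def packets_for_transfer(nbytes, mtu):
    # Every full packet carries (mtu - headers) bytes of payload.
    return math.ceil(nbytes / (mtu - IP_TCP_HEADERS))
```

Moving 1 MB takes 685 packets at a 1500-byte MTU but only 112 at 9000
bytes: roughly a sixth of the per-packet interrupt and header-processing
work on both ends, which is where the NFS win comes from.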
Perhaps the one good approach to jumbo frames is to make use of the
networking layer and ensure hosts are doing Path MTU discovery to avoid
fragmentation.
Path MTU Discovery is curiously controversial in some circles.
My own experience is that PMTUD works well today (not necessarily
true 5 years ago). So I agree that ensuring Path MTU Discovery is
deployed is generally clever. Past experience is that many vocal
folks will disagree with this view.
[1] Someone at Microsoft has told me that use of ~9K frames is how
Microsoft got their high WinNT network throughput for the SuperComputing
conference demo a few years back. I'm also told the POS links used
in that demo had also been configured for a ~9K MTU and worked fine.
For IP traffic that is traversing more than one layer-2 network,
the variety in network technologies (e.g. ATM, SMDS, POS, Radio,
Satellite, other) means that even ~1500 byte IP frames might not
always work end-to-end. For example, I know of several commercial
IP/SATCOM systems that have an MTU of 576 bytes.
Many ISPs, not all, seem to try to engineer an end-to-end MTU
of 4470 (that number chosen for historical reasons relating to FDDI).
Some ISPs use a smaller MTU number and some use a higher MTU number.
Certainly. In a nutshell, it might be best to take steps to avoid
fragmentation elsewhere in the network. Perhaps a rule of thumb that
should be stressed is to use jumbo frames if you know for sure the other
end system(s) support it, otherwise default to 1500.
Or just use Path MTU Discovery.
Of course, for Path MTU Discovery to work, a router must know that the
outgoing link has a smaller MTU than the packet, so that it can send
back an ICMP error to the sender.
On a GigE interface with jumbo frames, where the other side on the
same ethernet might not support jumbo frames, that is probably not
the case. The packet will simply get dropped, and the sender will
never know why it was dropped.
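A toy model of that black hole (illustrative only): PMTUD relies on a
router generating the ICMP error, and a layer-2 switch dropping an
oversized frame generates nothing.

```python
def send_with_df(packet_len, egress_mtu, crossed_by_router):
    # With DF set, a router that can't forward the packet returns an
    # ICMP "fragmentation needed" and PMTUD recovers.  A layer-2
    # switch just drops the oversized frame: no ICMP, no feedback.
    if packet_len <= egress_mtu:
        return "delivered"
    if crossed_by_router:
        return "icmp_frag_needed"
    return "black_hole"
```

Small packets always get through, which is exactly why the problem is
so hard to diagnose: the session comes up, and only full-size data
packets vanish.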