Internet Edge Router replacement - IPv6 route table size considerations

Have you looked into Juniper Networks?

I did look at a Juniper J6350, and the documentation states it can handle 400k routes with 1GB of memory, or 1 million with 2GB. However, it doesn't spell out how that capacity is divided between IPv4 and IPv6, whether by a profile setting or some other mechanism.

Chris

> I did look at a Juniper J6350, and the documentation states it can handle 400k routes with 1GB of memory, or 1 million with 2GB. However it doesn't spell out how that is divvied up between the two based on a profile setting or some other mechanism.

It's a software router, so the short answer is "it isn't".

With 3GB of RAM, both a J4350 and a J6350 can easily handle multiple IPv4
feeds and an IPv6 feed. (3GB just happens to be what I have, due to
upgrading from 1GB by adding a pair of 1GB sticks.)

If you need more than ~500Mbit or so, then you would want something
bigger. The MX80 is nice and has some cheap bundles at the moment; it's
specced for 8M routes (unspecified, but given the way Juniper chips
typically store routes, there's less difference in size than the straight 4x).

From others, the Cisco ASR1k or Brocade NetIron XMR (2M routes IIRC) are
the obvious choices.

And I meant Brocade NetIron CES here.

Our Brocade reps pointed us to the CER 2000 series, which can do up to 512k v4 or up to 128k v6 routes. With other Brocade products they spell out the available CAM profiles; however, I haven't found specifics for the CER series.

Chris

CER features are here:

http://www.brocade.com/products/all/routers/product-details/netiron-cer-2000-series/features.page

We use both NI-CERs and NI-XMRs, bought for less than 175k. Work with a rep;
don't go by list price. The price depends on quantity and configuration, so
"less than 175k" could mean 80k or 500k for your config.

-Bret

Get a cheap J series, load it full of memory, forget about it. If you
haven't played with Juniper gear before, you will be quite pleased.

-Jack Carrozzo

The MX80 is perfect for this; the 5G and 10G bundles are cheap.

> Get a cheap J series, load it full of memory, forget about it. If you
> haven't played with Juniper gear before, you will be quite pleased.
>
> -Jack Carrozzo

> From: Chris Enger [mailto:chrise@ci.hillsboro.or.us]
> Sent: Tuesday, March 08, 2011 5:18 PM
> To: 'jgoodwin@studio442.com.au'; 'nanog@nanog.org'
> Subject: RE: Internet Edge Router replacement - IPv6 route table
> sizeconsiderations
>
> Our Brocade reps pointed us to the CER 2000 series, and they can do up
> to 512k v4 or up to 128k v6. With other Brocade products they spell
> out the CAM profiles that are available, however I haven't found
> specifics on the CER series.
>
> Chris

> CER features are here:
>
> http://www.brocade.com/products/all/routers/product-details/netiron-cer-2000-series/features.page

But even one of the small MX80 bundles is about the price of 5 J4350s or 3 J6350s. Granted, if you need the throughput, it is very difficult to beat an MX80, particularly one of the 5G or 10G bundles.

-Randy

I think this is the point where I get a shovel, a bullwhip and head over to the horse graveyard that is CAM optimization...

-C

Well, it really isn't so bad. With Brocade FPGA gear you can change how
much CAM is allocated to different functions (but you can't do it on the
fly; it takes a reboot). I don't think these are available for the CER
series, though. The MLX or XMR can be reconfigured. The thing is that
the XMR and MLX are not ASIC-based devices; they are FPGA-based, which
means the hardware can be re-wired with a code change. Personally, I
like to be able to reallocate CAM from features I am not using to
features that I am.

And to be fair, Brocade has been improving over the past couple of
years. Now if only we could route layer 3 on MCT VLANs ...
(MCT is sort of like Arista MLAG, but it is layer 2 only at this point).

My experience with Foundry/Brocade, which is recent, is only with the
FCX devices, and I wish I had gone with something else.

No SNMP stats for virtual vlan interfaces and when asking Brocade
about it, you get told "it is too hard to program". You gotta be
kiddin me ....

Or how they do vlan configurations.

Or how a FCX stack will crash when you do jumbo frames.

> No SNMP stats for virtual vlan interfaces and when asking Brocade
> about it, you get told "it is too hard to program". You gotta be
> kiddin me ....

Yeah, that is something that has been bugging me. No stats on ve
interfaces.

> Or how they do vlan configurations.

I have complained about that, too. With Cisco you add vlans to ports,
with Brocade you add ports to vlans. Subtle difference. You can't look
at the config and very easily see which vlans are on which ports, you
have to do something like:

show vlan e 1/1/1

and parse through the output.
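For configs in that ports-under-VLANs style, inverting the mapping is a few lines of scripting. A minimal sketch, where the config snippet is a made-up example loosely modeled on FastIron syntax rather than output from a real box:

```python
import re
from collections import defaultdict

# Hypothetical ports-under-VLANs config fragment (illustration only).
config = """\
vlan 100 name users
 tagged ethe 1/1/1 ethe 1/1/2
vlan 200 name servers
 tagged ethe 1/1/1
"""

def vlans_by_port(cfg):
    """Invert 'ports listed under vlans' into 'vlans listed per port'."""
    mapping = defaultdict(list)
    current = None
    for line in cfg.splitlines():
        m = re.match(r"vlan (\d+)", line)
        if m:
            current = int(m.group(1))
        elif current is not None and line.strip().startswith("tagged"):
            for port in re.findall(r"ethe (\S+)", line):
                mapping[port].append(current)
    return dict(mapping)

print(vlans_by_port(config))
# {'1/1/1': [100, 200], '1/1/2': [100]}
```

This gives you the Cisco-style per-port view directly from the saved config, without walking every port with `show vlan e ...`.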

> Or how a FCX stack will crash when you do jumbo frames.

I have been running jumbo frames with stacked FCX units, no problems so
far, running 7.2.00.

> Or how they do vlan configurations.

> I have complained about that, too. With Cisco you add vlans to ports,
> with Brocade you add ports to vlans. Subtle difference. You can't look
> at the config and very easily see which vlans are on which ports, you
> have to do something like:

Extreme does the same. It has the great advantage that a trunk port
doesn't magically allow all VLANs - which is an absolutely horrible
default for Cisco in the SP case.

Steinar Haug, Nethelp consulting, sthaug@nethelp.no

This is with code 07.2.00aT7f3. We had two units stacked together,
rebooted/power-cycled at least once, and it worked. The next time we
had to power-cycle due to a bad config apply, the second unit came
back, and as soon as it joined the stack, it crashed.

Brocade wanted us to remove it from the stack (remotely) and/or
disable jumbo frames.

I can agree with not allowing all VLANs by default, but Brocade's
way is just broken imho.

The classic problem with any sort of FIB optimization is that you
can't optimize every figure on the spec sheet at once, at least not
without telling lies to your customers! You can have more compact
structures which require more memory accesses and clock cycles to
perform look-ups, or you can have bigger structures which improve
look-up speed at the expense of memory footprint. Since the market is
pretty much used to everything being advertised as "wire speed" now,
in order to continue doing look-ups at wire speed with an
ever-increasing number of routes in the FIB and with entries having
longer bit masks, you need more silicon -- more parallel look-up
capability, faster (or parallel) memory, or "optimizations" which may
not maintain wire speed for all use cases (cache, interleaving, etc.)
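As a toy illustration of that trade-off, a one-bit-at-a-time trie keeps per-route storage small but pays up to one memory access per prefix bit on every lookup; real FIB hardware uses wider strides or TCAM to buy back speed at the cost of silicon and footprint. A sketch (prefixes and next hops are placeholders):

```python
import ipaddress

# Minimal longest-prefix-match trie: compact nodes, but each lookup may
# walk one node per prefix bit -- the memory-vs-accesses tension above.
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]
        self.next_hop = None

def insert(root, prefix, next_hop):
    net = ipaddress.ip_network(prefix)
    bits = int(net.network_address)
    node = root
    for i in range(net.prefixlen):
        bit = (bits >> (net.max_prefixlen - 1 - i)) & 1
        if node.children[bit] is None:
            node.children[bit] = TrieNode()
        node = node.children[bit]
    node.next_hop = next_hop

def lookup(root, addr):
    """Return (best-matching next hop, number of trie descents)."""
    a = ipaddress.ip_address(addr)
    bits = int(a)
    node, best, steps = root, None, 0
    for i in range(a.max_prefixlen):
        if node.next_hop is not None:
            best = node.next_hop
        bit = (bits >> (a.max_prefixlen - 1 - i)) & 1
        if node.children[bit] is None:
            break
        node = node.children[bit]
        steps += 1
    if node.next_hop is not None:
        best = node.next_hop
    return best, steps

root = TrieNode()
insert(root, "2001:db8::/32", "core1")
insert(root, "2001:db8::/64", "edge1")
print(lookup(root, "2001:db8::1"))   # ('edge1', 64): 64 descents for a /64
```

Longer IPv6 masks mean deeper walks here; hardware avoids that with multi-bit strides, which is exactly where the extra silicon goes.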

As the guy making purchasing decisions, I really care about one thing:
correct information on the spec sheet. You may have noticed that some
recent spec sheets from Cisco include little asterisks noting that the
number of routes that will fit in the FIB is based on "prefix length
distribution," which means, in effect, that such "optimizations" are
in effect and the box should perform at a guaranteed forwarding speed
by sacrificing a guaranteed number of possible routes in the FIB.

Relating to IPv6 forwarding in particular, this produces an
interesting problem when deploying the network: the IPv6 NDP table
exhaustion issue. Some folks think it's a red herring; I obviously
strongly disagree and point to Cisco's knob, which Cisco will gladly
tell you only allows you to control the failure mode of your box (not
prevent subnets/interfaces from breaking), as evidence. (I am not
aware of any other vendors who have even added knobs for this.)
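A toy model of those exhaustion mechanics: the cache limit below is a stand-in for a per-interface knob like Cisco's, and the numbers are arbitrary. The point it illustrates is that the knob only picks the failure mode; once an attacker's scan fills the cache, legitimate new resolutions fail too.

```python
# Toy NDP cache: a scan of unused addresses in a /64 creates
# "incomplete" entries that linger, because the targets never answer.
CACHE_LIMIT = 512          # stand-in for a per-interface cache limit
cache = {}

def resolve(addr):
    if addr in cache:
        return True        # already resolved (or pending)
    if len(cache) >= CACHE_LIMIT:
        return False       # cache full: new resolution refused
    cache[addr] = "INCOMPLETE"
    return True

# Attacker sweeps 1000 unused addresses in the subnet ...
for i in range(1000):
    resolve(f"2001:db8::{i:x}")

# ... and now a legitimate new host cannot be resolved either.
print(resolve("2001:db8::beef"))   # False
```

Without the limit the box runs out of memory instead; with it, the subnet still breaks for new hosts, which is the "control the failure mode" point above.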

If you configure a /64, you are much more likely to have guaranteed
forwarding speed to that destination, and guaranteed number of routes
in FIB. What you don't have is a guarantee that ARP/NDP will work
correctly on the access router. If you choose to configure a /120,
you may lose one or both of the first guarantees. The
currently-available compromise is to configure a /120 on the access
device and summarize to a /64 (or shorter) towards your
aggregation/core. I see nothing wrong with this, since I allocate a
/64 even if I only configure a /120 within it, and this is one of the
driving reasons behind that decision (the other being a possible
future solution to NDP table exhaustion, if one becomes practical.)
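The arithmetic of that compromise is easy to check with Python's ipaddress module (the prefix below is a placeholder): configure the /120 on the access device, advertise its covering /64 upward.

```python
import ipaddress

# Configure a /120 on the access interface, but allocate (and advertise
# toward the core) the covering /64, per the compromise described above.
configured = ipaddress.ip_network("2001:db8:0:42::/120")
advertised = configured.supernet(new_prefix=64)

print(advertised)                 # 2001:db8:0:42::/64
print(configured.num_addresses)   # 256 -- a bounded NDP target space
```

The /120 bounds the on-link NDP target space to 256 addresses, while the /64 aggregate keeps the FIB and the addressing plan on the guaranteed fast path.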

The number of people thinking about the "big picture" of IPv6
forwarding is shockingly small, and the lack of public discussion
about these issues continues to concern me. I fear we are headed down
a road where the first large IPv6 DDoS attacks will be a major wake-up
call for operators and vendors. I don't intend to be one of the guys
hurriedly redesigning my access layer as a result, but I'm pretty sure
that many networks will be in exactly that situation.

> If you configure a /64, you are much more likely to have guaranteed
> forwarding speed to that destination, and guaranteed number of routes
> in FIB. What you don't have is a guarantee that ARP/NDP will work
> correctly on the access router. If you choose to configure a /120,
> you may lose one or both of the first guarantees. The
> currently-available compromise is to configure a /120 on the access
> device and summarize to a /64 (or shorter) towards your
> aggregation/core. I see nothing wrong with this, since I allocate a
> /64 even if I only configure a /120 within it, and this is one of the
> driving reasons behind that decision (the other being a possible
> future solution to NDP table exhaustion, if one becomes practical.)

What I have done on point-to-points and small subnets between routers is
to simply make static neighbor entries. That prevents neighbor-table
exhaustion from making the desired neighbors unreachable. I also do the
same with neighbors at public peering points. Yes, that comes at the
cost of having to reconfigure the entry if a MAC address changes, but
that doesn't happen often.
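A sketch of generating such static entries from a peer table; the `ipv6 neighbor <address> <interface> <mac>` statement form follows Cisco IOS syntax, and the addresses, interfaces, and MACs below are invented for illustration:

```python
# Hypothetical peer table: (IPv6 address, interface, MAC address).
peers = [
    ("2001:db8:ffff::1", "GigabitEthernet0/0", "0011.2233.4455"),
    ("2001:db8:ffff::3", "GigabitEthernet0/1", "0011.2233.6677"),
]

def static_ndp_config(entries):
    """Emit one Cisco IOS-style static neighbor statement per peer."""
    return "\n".join(
        f"ipv6 neighbor {addr} {ifname} {mac}" for addr, ifname, mac in entries
    )

print(static_ndp_config(peers))
```

Keeping the peer table in version control makes the MAC-change cost a one-line diff rather than a box-by-box hunt.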

And how exactly is this better than just not implementing IPv6 stateless
auto-configuration on ptp links in the first place? Don't get taken in
by the people waving an RFC around without actually taking the time to
do a little critical thinking of their own first; /64s and
auto-configuration just don't belong on router ptp links. And btw, only a
handful of routers are so poorly designed that they depend on not having
subnets longer than /64 when doing IPv6 lookups, and there are many
other good reasons not to be using those boxes in the first place. :)