We're looking at possibly purchasing an Internap FCP500;
everything I hear about these boxes is good. We are simultaneously
trying to decide whether Cisco's Optimized Edge Routing (OER) solution
(built into IOS) should be considered as an alternative. We're basically
just trying to find a traffic engineering solution that will A) work,
B) be supported, and C) be somewhat easy to manage. I had heard that
Cisco planned to release a hardware solution based on OER but never
got around to it, possibly due to lack of interest?
I'm more or less just looking for your opinions on the two.
We have also looked at the Arbor product, but we need something
that can actually implement routing policies as well as provide
statistics on what is going on in our infrastructure.
Thanks,
-Drew
Drew wrote:
> We're looking at possibly purchasing an Internap FCP500;
> everything I hear about these boxes is good.
<snip>
I have no experience with OER, but I have had an FCP5000 for a while now. We have numerous transit links, all of which have significantly more burst capacity than we actually use (or commit to). To some extent it is a bit magical, and I don't always understand how it decides the "target" traffic level for a link, but in general it works great. Since installing it, we've pretty much done away with any other regular BGP changes to balance traffic. It does a good job of keeping our many transit links below commit unless total traffic exceeds all commits.
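To make that concrete, here is a rough Python sketch of what commit-aware link selection presumably boils down to. This is purely my own illustration of the idea, not Internap's actual algorithm, and every name and number in it is made up:

    # Rough sketch, my own guess at the logic: keep every transit link
    # under its commit rate, and only spill over once total traffic
    # exceeds the sum of commits.
    def pick_link(links, flow_mbps):
        # links: dicts with "name", "commit_mbps", "current_mbps"
        under = [l for l in links
                 if l["current_mbps"] + flow_mbps <= l["commit_mbps"]]
        if under:
            # Prefer the link with the most headroom below commit.
            return max(under, key=lambda l: l["commit_mbps"] - l["current_mbps"])
        # All commits full: spill onto the least-over-commit link.
        return min(links, key=lambda l: l["current_mbps"] - l["commit_mbps"])

    links = [
        {"name": "transit-A", "commit_mbps": 500, "current_mbps": 480},
        {"name": "transit-B", "commit_mbps": 300, "current_mbps": 120},
    ]
    print(pick_link(links, 50)["name"])  # -> transit-B (most headroom)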
In general, I'm skeptical that it is really providing much of a performance boost. However, it does a good job at balancing traffic levels and that is the main value we get from the product. It was basically a "fire and forget" system. Once installed, we were able to just forget about traffic engineering and only touch things when adding/removing a link (or for special situations like manually routing around bad paths).
If you'd like technical information about how it works or the potential scaling issues that can result, let me know what you're interested in and I can expand a bit.
Matt Buford wrote:
<snip>
> It does a good job of keeping our many transit links below commit
> unless total traffic exceeds all commits.
We've used the FCP500 and FCP5000 for several years now, and though we have had our ups and downs in terms of hardware failures, bugs, and capacity concerns, I must agree that it does a very good job at the basics: balancing out traffic for commits, bursting (based on cost), and, minimally, for performance.
If you have transit and peering, there are some things about it that don't fit very well into that model.
Hello...
<snip>
> In general, I'm skeptical that it is really providing much of a performance
> boost. However, it does a good job at balancing traffic levels and that is
> the main value we get from the product. It was basically a "fire and
> forget" system. Once installed, we were able to just forget about traffic
> engineering and only touch things when adding/removing a link (or for
> special situations like manually routing around bad paths).
>
> If you'd like technical information about how it works or the potential
> scaling issues that can result, let me know what you're interested in and I
> can expand a bit.
Can you expand a bit on how it dealt with the Level3 meltdown last
month?
> Can you expand a bit on how it dealt with the Level3 meltdown last
> month?
In general, it doesn't do anything (much) for this sort of thing. It does have a "blackhole detection" feature, but keep in mind how this thing works. You set a prefix length (which must be equal to or more specific than what you expect to see in BGP, so we use /24, which I believe is the default). It then takes the top N prefixes (I believe N equals your model number: the FCP5000 monitors your top 5000 /24s).
First, it uses passive traffic sniffing to collect latency and packet loss statistics. Second, it uses policy-routed traceroutes to determine how good things would be if it were to change the route to another one of your transit links. Finally, if it determines a change is needed, it injects a /24 route (using localpref to override anything you might already have) to send that /24 to the new transit link. You set a maximum number of advertised routes (say 15,000), and it uses LIFO expiration.
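If it helps, here is that loop as I picture it, sketched in Python. Every function and number here is my own stand-in for illustration; this is a guess at the logic, not anything Internap has published:

    import random

    MAX_ROUTES = 15000   # configured cap on injected /24 routes
    TOP_N = 5000         # an FCP5000 monitors the top 5000 /24s
    injected = []        # injected routes, newest last

    def passive_measure(prefix):
        # Stand-in for passive sniffing: one latency/loss score, lower is better.
        return random.uniform(10, 100)

    def traceroute_probe(prefix, link):
        # Stand-in for a policy-routed traceroute out a specific transit link.
        return random.uniform(10, 100)

    def inject_route(prefix, link):
        # Stand-in for injecting a /24 with localpref set to win over BGP.
        print(f"inject {prefix} -> {link} (high localpref)")

    def optimize(traffic_by_slash24, transit_links):
        # Only the top-N /24s by traffic volume are ever examined.
        top = sorted(traffic_by_slash24, key=traffic_by_slash24.get,
                     reverse=True)[:TOP_N]
        for prefix in top:
            current = passive_measure(prefix)
            scores = {link: traceroute_probe(prefix, link)
                      for link in transit_links}
            best = min(scores, key=scores.get)
            if scores[best] < current:
                if len(injected) >= MAX_ROUTES:
                    injected.pop()   # "LIFO expiration", as best I can tell
                injected.append((prefix, best))
                inject_route(prefix, best)

    optimize({"192.0.2.0/24": 900, "198.51.100.0/24": 40},
             ["transit-A", "transit-B"])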
So, even with blackhole detection, you're only potentially "fixing" the issue for the top 5,000 traffic destination /24s. And on future runs, a blackholed destination is not going to be a top destination anymore (traffic toward it has collapsed), so you won't detect any more.
So, does it help? Marketing will tell you yes. In the real world, it works out to only a little bit of help. A few customers might be helped, but if someone tries to tell you it will route everything around a blackhole, that is absolutely not what it is doing. Only a handful of /24s are lucky enough to be helped.
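To make that blind spot concrete (again, a toy of my own, with made-up prefixes and numbers):

    # Once a /24 is blackholed, traffic toward it collapses, it falls
    # out of the top-N, and the box never examines it again.
    TOP_N = 2
    before = {"192.0.2.0/24": 900, "198.51.100.0/24": 400, "203.0.113.0/24": 300}
    print(sorted(before, key=before.get, reverse=True)[:TOP_N])
    # ['192.0.2.0/24', '198.51.100.0/24'] -- both get monitored

    # 198.51.100.0/24 gets blackholed; its traffic dries up.
    after = {"192.0.2.0/24": 900, "198.51.100.0/24": 2, "203.0.113.0/24": 300}
    print(sorted(after, key=after.get, reverse=True)[:TOP_N])
    # ['192.0.2.0/24', '203.0.113.0/24'] -- the broken /24 has vanished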
I can't guarantee every detail of how I said it operates is exactly right. This is just how it seems to be behaving based on what I've seen.