There are a lot of variables that skew the numbers in favor of using
FOSS on commodity hardware in our situation but that wouldn't
necessarily apply to others, primarily because these units provide
services that are partly funded through the federal E-rate program and
need to comply with restrictions such as CIPA.
For example, we moved from centralized web filtering using WCCP and
racks of proxy servers to pushing that service out to the edge. That
move alone saved more than the hardware cost of the project, so we
actually made a net profit on the switch.
I'm not sure that would apply as easily to anyone else.
As for the OpEx and CapEx v. traditional players...
The units are engineered to run the entire OS from a RAM disk, so
configuration management is much like what you would find on a
traditional router (only saved configuration survives a reboot, etc. --
think of it as a live distribution with controlled persistence).
A physical disk is used for logging, but a disk failure doesn't take
down the system (we've had maybe three disk failures, which turned out
to be caused by the thermal conditions where the equipment was
installed -- boiler rooms -- and service was maintained until we could
get a technician out to swap the unit). So operationally they've been
pretty much equivalent to a Cisco solution, and we haven't seen much of
an increase in workload aside from supporting the extra services that
weren't previously offered.
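To make the "controlled persistence" idea a bit more concrete, it boils
down to something like the following (a simplified Python sketch -- the
paths and the explicit save step are placeholders for illustration, not
our actual layout):

    #!/usr/bin/env python3
    """Sketch of controlled persistence on a RAM-disk router.

    The paths below (/media/flash, /etc/router) are made-up examples,
    not the layout we actually ship.
    """
    import shutil
    from pathlib import Path

    PERSISTENT = Path("/media/flash/config")  # survives reboots (flash/CF)
    RUNNING = Path("/etc/router")             # lives on the RAM disk (tmpfs)

    def restore_at_boot() -> None:
        """Copy the last saved configuration into the RAM disk at boot."""
        if PERSISTENT.exists():
            shutil.copytree(PERSISTENT, RUNNING, dirs_exist_ok=True)

    def save_running_config() -> None:
        """Explicit save step, like 'write memory' on a traditional router."""
        PERSISTENT.mkdir(parents=True, exist_ok=True)
        shutil.copytree(RUNNING, PERSISTENT, dirs_exist_ok=True)

    if __name__ == "__main__":
        restore_at_boot()

Anything not explicitly saved that way simply disappears on reboot,
which is exactly the behavior you want from a router.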
The skill set is a little different, though. Having a strong
understanding of the internals of a Linux system alongside traditional
networking skills is a must if you go in this direction.
For us, the ability to have more tools to poke at the state of the
system and troubleshoot issues (such as performing packet captures
directly on the device) has been invaluable. It has allowed us to
remotely track down issues (such as TCP window scaling problems with
unnamed cloud services and their incorrectly configured load balancers)
that would have required an on-site capture in the past.
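To give a flavor of what that looks like: in practice we just run
tcpdump on the box, but as a rough sketch, here's how you could flag
handshakes that never advertise a window scale option from a capture
file (Python with Scapy; the capture filename is a placeholder):

    #!/usr/bin/env python3
    """Rough sketch: flag TCP handshakes missing a window-scale option.

    Assumes a capture file taken on the router (e.g. with tcpdump) and
    the Scapy library; 'capture.pcap' is just a placeholder name.
    """
    from scapy.all import rdpcap, IP, TCP

    SYN = 0x02

    for pkt in rdpcap("capture.pcap"):
        if IP in pkt and TCP in pkt and pkt[TCP].flags & SYN:
            # Options are (name, value) tuples; look for 'WScale'.
            has_wscale = any(name == "WScale" for name, *_ in pkt[TCP].options)
            if not has_wscale:
                ip = pkt[IP]
                print(f"no window scaling offered: {ip.src}:{pkt[TCP].sport} "
                      f"-> {ip.dst}:{pkt[TCP].dport}")

A misconfigured load balancer that strips or ignores the option shows
up quickly when you can look at the SYN/SYN-ACK on the device itself.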
It's also given us the flexibility to quickly implement operational
changes as we see a need, such as automatic nightly backups of
configurations to our central servers (using a simple cron job) or
rolling out scripted changes.
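The backup job really is that simple -- something along these lines,
kicked off by cron (the hostname, paths, and schedule below are
placeholders, not our actual setup):

    #!/usr/bin/env python3
    """Nightly config backup, run from cron on each router.

    The central server name and config path are placeholders; the real
    job is just as simple (tar the config, scp it off the box).
    """
    import socket
    import subprocess
    import time

    CONFIG_DIR = "/etc/router"                                # placeholder path
    DEST = "backup@backups.example.net:/srv/router-configs/"  # placeholder host

    stamp = time.strftime("%Y%m%d")
    archive = f"/tmp/{socket.gethostname()}-{stamp}.tar.gz"

    # Bundle the saved configuration...
    subprocess.run(["tar", "czf", archive, CONFIG_DIR], check=True)
    # ...and push it to the central server over SSH (key-based auth assumed).
    subprocess.run(["scp", "-q", archive, DEST], check=True)

Dropped into /etc/cron.d with a line along the lines of
"0 2 * * * root /usr/local/sbin/config-backup.py" (again, the time and
path are made up).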
Using an off-the-shelf distribution of Linux and a FOSS routing
package will probably not do the trick for you. If you take the time
to build a custom distribution that has only what you need, uses
known-stable package versions, and is engineered to function as a
widely-deployed unit (configuration management, logging, etc.), that is
where the savings come in, because you won't see the significant
increase in OpEx that opponents usually point to. We were debating
whether to do that in-house or not. I think if you're talking about
1000 units it makes sense to try it in-house; on a smaller scale you
really want to find a partner that can engineer the system for you.
Vyatta looks like it has addressed a lot of the issues it needs to --
though I've never used it in production -- but I would still like to
see more from them in tuning the OS to behave more like a router and
less like a server. Last time I checked, they didn't seem to touch
much beyond enabling forwarding in Linux. I'm optimistic, though.
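For what it's worth, the kind of tuning I have in mind goes a bit
beyond flipping ip_forward -- roughly this sort of thing (a sketch of
typical router-oriented sysctls for a Linux box, not a claim about what
Vyatta does or doesn't set):

    #!/usr/bin/env python3
    """Sketch of router-oriented sysctl tuning beyond ip_forward=1.

    Typical knobs for a Linux box acting as a router; illustrative
    values, not anyone's shipping defaults.
    """
    from pathlib import Path

    SYSCTLS = {
        "net/ipv4/ip_forward": "1",                # the obvious one
        "net/ipv4/conf/all/send_redirects": "0",   # routers shouldn't ICMP-redirect clients
        "net/ipv4/conf/all/accept_redirects": "0",
        "net/ipv4/conf/all/rp_filter": "1",        # basic anti-spoofing on edge links
        "net/core/netdev_max_backlog": "5000",     # deeper input queue for bursty links
    }

    for key, value in SYSCTLS.items():
        Path("/proc/sys", key).write_text(value + "\n")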
Now we just need Intel to step up with some ASICs and open-source
drivers that could be plugged into Linux. (On a side note, we use some
SFP PCI-X cards for our directly connected optical sites to save money
there too; they're working well with up to ZX SFPs.)