The Making of a Router

Hello Alessandro,

Any benchmarks of FreeBSD vs. OpenBSD vs. a present-day Linux kernel?

Inline responses below,

You can build using commodity hardware and get pretty good results.

I've had really good luck with Supermicro whitebox hardware, and
Intel-based network cards. The "Hot Lava Systems" cards have a nice
selection for a decent price if you're looking for SFP and SFP+ cards that
use Intel chipsets.

I like the Supermicro as well; however, we have a couple of IBM x3250s
with two PCIe v3 x8 slots that are begging for an Intel network card.

There might be some benefits in going with something like FreeBSD, but I
find that Linux has a lot more eyeballs on it making it much easier to
develop for, troubleshoot, and support. There are a few options if you
want to go the Linux route.

This is very important to consider. I would be speculating, or even
worse, expecting the same type of community support from the BSD world
that I have been getting from the Linux community.

Option 1: Roll your own OS. This takes quite a bit of effort, but if you
have the talent to do it you can generally get exactly what you want.

If Free/OpenBSD is ruled out, I could crack open the LFS project. You only
have to do it once, right? Or maybe just reach out to the Gentoo community
for a stripped version, and build outwards.

The biggest point of failure I've experienced with Linux-based routers on
whitebox hardware has been HDD failure. Other than that, the 100+ units
I've had deployed over the past 3+ years have been pretty much flawless.

SSD

Thankfully, they currently run an in-memory OS, so a disk failure only
affects logging.
If you want to build your own OS, I'll shamelessly plug a side project of
mine: RAMBOOT

http://ramboot.org/

RAMBOOT makes use of the Ubuntu Core rootfs, and a modified boot process
(added into initramfs tools, so kernel updates generate the right kernel
automatically). Essentially, I use a kernel ramdisk instead of an HDD for
the root filesystem and "/" is mounted on "/dev/ram1".

The bootflash can be removed while the system is running as it's only
mounted to save system configuration or update the OS.

I haven't polished it up much, but there is enough there to get going
pretty quickly.

Ummm, if it's OK with the community, can you kindly elaborate? :) I am
not too fond of Debian after my horrible experience with Squeeze Desktop.
I would maybe like to try this using a combination of SSD, in-memory
operation, and Gentoo?

You'll also want to pay attention to the settings you use for the kernel.
Linux is tuned as a desktop or server, not a router, so there are some
basics you should take care of (like disabling ICMP redirects, increasing
the ARP table size, etc).
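To make that concrete, here is a sketch of such a sysctl fragment; the
exact values are illustrative assumptions, not settings recommended in
this thread:

```
# /etc/sysctl.d/99-router.conf -- illustrative values only
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
# Raise the neighbor (ARP) table limits for large L2 domains
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 16384
```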

Totally strip it down as much as possible. If anyone has a stripped
Gentoo kernel config that they would like to share, please do :).

I have some examples in: http://soucy.org/xorp/xorp-1.7-pre/TUNING
or http://soucy.org/tmp/netfilter.txt (more recent, but includes firewall
examples).

Will definitely look into all your sites.

Also a note of caution. I would stick with a longterm release of Linux.
I've had good experience with 2.6.32, and 3.10. I'm eager to use some of
the post-3.10 features, though, so I'm anxious for the next longterm branch
to be locked in.

We are comfy with 3.4 right now...

One of the biggest advantages is the low cost of hardware allows you to
maintain spare systems, reducing the time to service restoration in the
event of failure. Dependability-wise, I feel that whitebox Linux systems
are pretty much at Cisco levels these days, especially if running
in-memory.

Really interested in the "in-memory" approach; however, I would love to
implement it using Gentoo as mentioned above.

Kind Regards,

N.

One of the biggest advantages is the low cost of hardware allows you to
maintain spare systems, reducing the time to service restoration in the
event of failure. Dependability-wise, I feel that whitebox Linux systems
are pretty much at Cisco levels these days, especially if running
in-memory.

With your guidance, I can put together a Gentoo environment tailored for
this purpose that will run in memory, and I would be glad to share it
with the community if anyone else is interested.

N.

Not to sound rude, but if someone gives you a how-to and you don't like it (since making a router and a desktop environment are totally the same thing), you are welcome to come up with your own based on what you like instead of telling them to give you new instructions to suit your preferences.

~Seth

Oh, my bad. I did not mean it like that at all! I am more than capable of
putting it together using Gentoo instead of Debian (a little pedagogy
goes a long way). And if he would like, he can post the ISO on his
website alongside the different distros. This is what I was leaning
toward...

Please don't be offended.

N.

Not to mention the fact that this "router" will require support. The
build-before-buy people are silly. Let the smart router guys do their
thing and use their box accordingly. When it breaks, call to inform them
it broke and they will fix it.

Unless they deem that it's "outside of scope". Or they can't get anyone to
you inside of SLA[1]. Or they send someone incompetent. Or it's a problem
that's never happened before.

DIY projects are a nightmare to support.

*Everything* is a nightmare to support. A DIY project just means that
you're betting you're smarter than whoever the vendor sends to fix their
thing. Maybe it's a good bet, maybe it isn't.

I'm sure you've got plenty of horror stories of DIY project support; I've
got plenty of horror stories of vendor support. Perhaps we can get together
some day and have a story-off. <grin>

- Matt

[1] So you might get some SLA credits at some point in the future. So what?
It won't even cover your SLA payouts to your customers, let alone the lost
business and reputation.

Unless they deem that it's "outside of scope". Or they can't get anyone to
you inside of SLA[1]. Or they send someone incompetent. Or it's a problem
that's never happened before.

Amen!

*Everything* is a nightmare to support. A DIY project just means that
you're betting you're smarter than whoever the vendor sends to fix their
thing. Maybe it's a good bet, maybe it isn't.

Amen Again!

I'm sure you've got plenty of horror stories of DIY project support; I've
got plenty of horror stories of vendor support. Perhaps we can get together
some day and have a story-off. <grin>

Nightmares come in colours of green, teal, and purple!

Two things you want to do:

1) Split this into multiple boxes if you can. That makes maintaining
one component a lot easier, especially when you get to point 2, which is...

2) Redundancy/failover. Sure, it may be more expensive, but the first
time your HA failover changes a 2AM "Holy Crap" into an "Oh, bother" it
will be worth the price of admission....

Just subject-tag it so we can archive it, ok guys?

Cheers,
-- jr 'whacky weekend' a

Hi,

Here are my own benchmarks using the smallest packet size (sorry, no
Linux):
http://dev.bsdrp.net/benchs/BSD.network.performance.TenGig.png

My conclusion: building a line-rate gigabit router (or a firewall with a
few ipfw rules) is possible on a commodity server with FreeBSD without
problems. Building a 10-gigabit router (which means routing about 14Mpps)
is more complex at present.
Note: The packet generator used was the high-performance netmap pkt-gen,
which let me generate about 13Mpps on this same hardware (under FreeBSD),
but I'm not aware of forwarding tools that use netmap: there are only
packet generator and capture tools available.

The basic idea of RAMBOOT is typical in Embedded Linux development.

Linux makes use of a multi-stage boot process. One of the stages involves
using an initial ramdisk (initrd) to provide a base root filesystem which
can be used to locate and mount the system root, then continue the boot
process from there.

For an in-memory OS, instead of the boot process mounting a pre-loaded
storage device with the expected root filesystem (e.g. your installed HDD),
you modify it to:

1) Create and format a ramdisk.
2) Create the expected directory structure and system files for the root
filesystem on that ramdisk.

The root filesystem includes the /dev directory and appropriate device
nodes, the basic Linux filesystem and your init program.

The easy way to do that is just to have a TAR archive that you extract to
the ramdisk on boot; better yet, compress it (e.g. tar.gz) so that
the archive can be read from storage (e.g. USB flash) more quickly.
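The extract-to-ramdisk step can be sketched like this, using a temporary
directory in place of the real /dev/ram1 mount (all paths here are
illustrative, not the actual RAMBOOT layout):

```shell
# Demo of the rootfs-unpack idea, with /tmp standing in for the ramdisk.
set -e
ROOT=/tmp/ramboot_demo
rm -rf "$ROOT"
mkdir -p "$ROOT/stage/etc" "$ROOT/stage/dev" "$ROOT/stage/bin" "$ROOT/mnt"

# Stand-in for the compressed rootfs archive shipped on the bootflash.
echo "router1" > "$ROOT/stage/etc/hostname"
tar -czf "$ROOT/rootfs.tar.gz" -C "$ROOT/stage" .

# On a real system you would first do: mke2fs /dev/ram1 && mount
# /dev/ram1 /mnt, then extract onto the mounted ramdisk.
tar -xzf "$ROOT/rootfs.tar.gz" -C "$ROOT/mnt"
cat "$ROOT/mnt/etc/hostname"
```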

Today, the initramfs in Linux handles a lot more than simply mounting the
storage device. It performs hardware discovery and loads the appropriate
modules. As such, the Debian project has a dynamic build system for the
initramfs, run to build the initrd when a new kernel package is
installed; it's called "initramfs-tools".

You can manually build your own initramfs using the examples on the RAMBOOT
website, but the point of RAMBOOT is to make building an in-memory OS quick
and simple.

RAMBOOT instead adds configuration to initramfs-tools so that each time a
new initrd is generated, it includes the code needed for RAMBOOT.

The RAMBOOT setup adds handling of a new boot target called "ramboot" to
the kernel arguments. This allows the same kernel to be used for a normal
installation and remain unaffected, but when you add the argument
"boot=ramboot" as a kernel option to the bootloader, it triggers the
RAMBOOT process described above.
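The boot-target selection can be sketched as follows; the variable names
mirror initramfs-tools conventions, but this is a simplified stand-in,
not the actual ramboot script:

```shell
# Pick the boot target from the kernel command line.
CMDLINE="ro quiet boot=ramboot"   # stand-in for $(cat /proc/cmdline)
BOOT=local                        # default target when no boot= is given
for arg in $CMDLINE; do
    case "$arg" in
        boot=*) BOOT="${arg#boot=}" ;;
    esac
done
echo "$BOOT"   # init then runs scripts/${BOOT} from the initramfs
```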

Having a common kernel between your development environment and embedded
environment makes it much easier to test and verify functionality.

The other part of RAMBOOT is that it makes use of "Ubuntu Core". Ubuntu
Core is a stripped down minimal (and they really do mean minimal) root
filesystem for Embedded Linux development. It includes apt-get, though, so
you can install all the packages you need from Ubuntu on the running system.

RAMBOOT then has a development script to make a new root filesystem
archive with the packages you've installed as a baseline. This allows you
to boot a RAMBOOT system, install your desired packages and change system
configuration files as desired, then build a persistent image of that
install to be used for future boots.
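The re-imaging step can be sketched like this, with a stand-in root tree
and illustrative paths (the actual RAMBOOT script differs):

```shell
# Pack a (stand-in) root tree into a new rootfs archive, excluding
# pseudo-filesystems that must not be persisted.
set -e
ROOT=/tmp/ramboot_img_demo
rm -rf "$ROOT"
mkdir -p "$ROOT/root/etc" "$ROOT/root/proc" "$ROOT/root/sys"
echo "tuned" > "$ROOT/root/etc/config"      # config to keep
echo "runtime" > "$ROOT/root/proc/fake"     # runtime state to drop
tar -czf "$ROOT/rootfs-new.tar.gz" -C "$ROOT/root" \
    --exclude='./proc/*' --exclude='./sys/*' .
tar -tzf "$ROOT/rootfs-new.tar.gz"
```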

I also have the start of a script to remove unused kernel modules, and
other files (internationalization for example) which add to the OS
footprint.

You could build the root filesystem on your own (and compile all the
necessary packages) but using Ubuntu Core provides a solid base and allows
for the rapid addition of packages from the giant Ubuntu repository.

Lastly, I make use of SYSLINUX as a bootloader because my goal was to use a
USB stick as the bootflash on an Atom box. Unfortunately, the Atom BIOS
will only boot a USB device if it has a DOS boot partition, so GRUB was a
no-go. The upside is that since the USB uses SYSLINUX and is DOS
formatted, it's easily mounted in Windows or Mac OS X, allowing you to copy
new images or configuration to it easily.
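For illustration, a minimal syslinux.cfg entry for this kind of setup
might look like the following; the label, filenames, and ramdisk size
(in KiB) are assumptions, not taken from RAMBOOT:

```
DEFAULT ramboot
LABEL ramboot
    KERNEL vmlinuz
    APPEND initrd=initrd.img boot=ramboot ramdisk_size=2097152 ro quiet
```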

For the boot device I make use of the on-board vertical USB socket on the
system board (typical for most system boards these days) and a low-profile
USB stick. I find the Verbatim "Store 'n' Go" 8GB USB stick ideally suited
for this as it's less than a quarter-inch high after the USB adapter.

RAMBOOT as a project is in the very early stages, so you should be
comfortable with Linux before you build a system on it. And I really feel
it's more of an example than anything at this point.

There are several advantages though:

The most common point of failure on a Linux system is the storage device
(either HDD or SSD).
The biggest bottleneck in system performance is storage IO.
Using a ramdisk eliminates both these concerns (in fact, even an Atom
system has surprisingly great performance when run using a ramdisk).

The result is that you get a very reliable, high-performance system.

The other benefit of RAMBOOT is that the root filesystem is NOT persistent.
This means that, like a Cisco device, every boot of the system brings you
to a known working state, OS-wise. There are hundreds of system files in a
Linux system, any one of which could cause problems if modified. For
both security and availability reasons, a lot of effort is invested in
detecting and preventing changes to system files. With RAMBOOT the
problem is easily avoided.

A minimal system can fit within a 512MB ramdisk. But with RAM being so
cheap these days, I think even reserving up to 2GB of RAM for a ramdisk
would be fine (e.g. for a 4-8GB system).

Here is the hardware configuration I originally started RAMBOOT for in 2011
(wanted to avoid the cost of an HDD):

$326.36 (shipping included):
1U rack-mount case Supermicro CSE-502L-200B
Intel Atom D510 system board with dual Gigabit (Intel 82574L) Supermicro
MBD-X7SPA-H-O
2GB RAM
8GB low-profile USB flash drive (which will connect to the internal USB
port and be low enough to fit in the case) Verbatim "Store n Stay".

No HD; the system will boot off the 8GB flash into RAM and run the OS on a
ramdisk.

Using the RAMBOOT release that's currently up, I can build a custom Linux
in-memory OS in half a day. I can easily update packages for security
updates from the Ubuntu project and re-generate a new, updated, image in
less time than that. So the initial goal of being able to build something
useful quickly was satisfied, at least. My attention has now moved on to
building a configuration management system, similar to Vyatta or VyOS and
building a real distribution. I was going to call it "Carrier-grade Linux"
(cglinux.org), but given the momentum VyOS has I might try to help the VyOS
community instead of doing something new on my own.

For what it's worth, I'm actually working with the VyOS project to try and
incorporate some of the RAMBOOT ideas into VyOS as an install option for
in-memory only.

If you make use of RAMBOOT I would love to hear about it. :-)

Chipsets and drivers matter a lot in the 1G+ range.

I've had pretty good luck with the Intel stuff because they offload a lot
in hardware and make open drivers available to the community.

If you've sunk so much into the 10G link (or anything else, for that matter)
that you don't have a kilobuck to spare, you're probably undercapitalized to be
an ISP.

I take issue with this line of thought. Granted, a router is built
with custom ASICs and most network people understand IOS. However,
this is where the benefit of a multi-thousand-dollar router ends. Most
have limited RAM, which limits the size of your policies, how many
routes can be stored, and the like. With a computer with tens or
hundreds of gigs of RAM, this really isn't an issue. Routers also
have slow-ish processors (which is fine for pure routing since they
are custom chips), but if you want to do packet inspection, this can
slow things down quite a bit. You could argue that this is the same
with iptables or pf. However, if you just offload the packets and
analyze generally boring packets with Snort or Bro or whatever,
packets flow as fast as they would without analysis. If you have
multiple VPNs, this can start to slow down a router, whereas a computer
can generally keep up.

... And then there's the money issue. Sure, if you're buying a gig+
link, you should be able to afford a fully spec'd out router. However,
(in my experience) people don't order equipment with all features
enabled and when you find you need a feature, you have to put in a
request to buy it and then it takes a month (if you're lucky) for it
to be approved. This isn't the case if you use ipt/pf - if the feature
is there, it's there - use it.

And if a security flaw is found in a router, it might be fixed in the
next month... or not. With Linux/BSD, it'll be fixed within a few days
(at the most). And, if your support has expired on a router or the
router is EOL, you're screwed.

I think in the near future, processing packets with GPUs will become a
real thing, which will make massive real-time deep packet inspection at
10G+ practical.

Granted, your network people knowing IOS when they're hired is a big
win for just ordering Cisco. But, I don't see that as a show stopper.
Stating the scope of what a box is supposed to be used for and not
putting endless crap on it might be another win for an actual router.
However, this is a people/business thing and not a technical issue.

Also, I'm approaching this as more of a question of the best tool for
the job vs pure economics - a server is generally going to be cheaper,
but I generally find a server nicer/easier to configure than a router.

In talking about RAMBOOT I also realized the instructions are out of date
on the website.

The "ramboot" boot target script was updated since I created the initial
page to generate the correct fstab, and enable the root account, set a
hostname, etc. so you can actually use the OS until you create a new image.

I extracted the script from the initrd to make it easier to grab:

http://ramboot.org/download/RAMBOOT/RAMBOOT-pre0.2/SYSLINUX/initrd/scripts/ramboot

Essentially, by adding a new "ramboot" script to
"/usr/share/initramfs-tools/scripts" alongside "nfs" and "local", it
creates a new "boot=" target (since the init script looks for
"scripts/${BOOT}").

As mentioned on the website, the ramboot process needs a more complete
version of busybox (for tar archive support) and the mke2fs tool added to
"/usr/lib/initramfs-tools/bin/" so they will be available to the initrd.

Once you configure networking (see the "INSTALL/setup_network" script) you
can do an apt-get update and apt-get the packages you need from Ubuntu
12.04 LTS.

Example starting point:

apt-get install sudo
apt-get install nano
apt-get install ssh
apt-get install vlan
apt-get install bridge-utils

+1

Build-your-own routers are perfectly OK for a lab environment if you want to tinker with something, but I absolutely would not put an all-in-one box that I built myself in production. You end up combining some of the downsides of a hardware-based router with some of the downsides of a server (new attack vectors, another device that needs to be backed up, patched, and monitored, possibly getting a new collection of devices and drivers to play nicely with each other, etc).

Doing this also requires all of the people in your on-call rotation to be experienced sysadmins / server ops, in addition to being experienced network engineers / NOC ops. There are a lot of occasions with a server where 'just reboot it' can make a problem much worse.

Route servers running Linux or *BSD are another story. There are many situations where they can be extremely useful, but they are not all-in-one route server/RADIUS/VPN termination/web server/user shell boxes.

jms

On the topic of building a software router for an ISP, has anyone tried it
using OpenFlow? The idea is to have a Linux server run BGP and a hardware
switch to move the packets. The switch would be programmed by the Linux
server using the OpenFlow protocol.

I am looking at the HP 5400 zl switches as the hardware platform and
RouteFlow https://sites.google.com/site/routeflow/ to program the BGP rules.

One issue is that the HP switch will only process a limited number of
rules in hardware (about 4096 rules, I believe). Will this be enough to
cover most of the traffic of an FTTH ISP on the fast path?

Regards,

Baldur

You want to use the switch for what ? To connect last-mile customers ? For
L3 aggregation ? You want to run the switch as an edge router with limited
BGP ? What's the exact use case you are thinking about ?

Eugeniu

You could look into Noviflow!

F.

I need a solution for everything except the last-mile customers. The
customers are connected to a Zhone PON switch. From there they will arrive
at our core switch as Q-in-Q vlans, one vlan per customer. I need a router
that will do two full routing tables for our uplinks, a number of partial
routing tables for our IX peers, IPv6 support, IPv4 proxy arp support and
the ability to handle a large number of Q-in-Q vlans. And of course I will
need two for redundancy. The uplinks, the links to edge switches and many
of the IX peers are all 10 Gbit/s links.

IPv4 proxy arp is especially important given the state of IPv4 exhaustion.
Being a new ISP in the RIPE region, we only got 1024 IPs. When we run out
of that initial assignment, we have to buy IP-addresses at a steep price.
Therefore we cannot afford to give each home a full IPv4 subnet. They will
have to share a subnet with multiple other customers. This is achieved
through proxy arp on the switch.
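On a Linux box the equivalent would come down to a per-interface sysctl
plus host routes toward the customer VLANs; this is a hedged sketch with
assumed interface names and documentation addresses:

```
# Enable proxy ARP on the customer-facing interface (name is assumed):
net.ipv4.conf.eth1.proxy_arp = 1

# Then, per customer, a host route out the right Q-in-Q VLAN, e.g.:
#   ip route add 192.0.2.10/32 dev eth1.100
```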

We are an upstart and just buying the fancy Juniper switch times two would
burn half of my seed capital.

Like Nick Cameo, I have seriously considered going with a Linux solution.
I know I can build it. I just don't know if I can make it stable enough
or make it perform well enough.

I am looking into an OpenFlow solution as a middle ground. It allows me to
buy cheaper switches/routers. The servers will do the "thinking" but the
actual work of moving packets is still done in hardware on the switches.
OpenFlow supports controller fail over, so I will not go down with just one
server crash. Poor performance on the servers will not affect customer
traffic directly.

Regards,

Baldur