Linux router network cards

I'm looking around for network cards to build a Linux-based
router. It needs to support XDP and multiple queues, have good
in-kernel driver support, and be able to handle 10GbE with good
offloads for dealing with high packet-per-second rates.

What features should I be looking for to really optimize things for a
three-transit setup with full tables?

Something like the Intel XL710-QDA2 card maybe?

Hi Micah,

Take a look at the Mellanox ConnectX-5 series of cards. They handle DPDK, PVRDMA (basically SR-IOV that allows live migration between hosts), and can even process packets within the NIC for some models. They did a fantastic presentation at AusNOG 2019 that showed off a lot of the features. We tried some out with VMware and could get 20 Gbps throughput (limited by the 2x 10G NICs we had configured) to a VM running Linux with DPDK+VPP.

The slidedeck for the presentation is here:
https://www.ausnog.net/sites/default/files/ausnog-2019/presentations/1.9_Rhod_Brown_AusNOG2019.pdf

It's heavily targeting virtualised workloads but some of the feature sets apply to bare-metal uses too.

Regards,
Philip Loenneker | Senior Network Engineer | TasmaNet

Plus, Mellanox introduced switchdev support, which allows flow management to be offloaded to the hardware.

I wonder if they are going to get CUDA cores on the next version since they are owned by NVIDIA now. That would be a powerful little package.

Hi micah,

I think this was shared in the past and may be useful with regard to what performance you can expect: https://blog.apnic.net/2020/04/30/how-to-build-an-xdp-based-bgp-peering-router/ .

BR,

Marinos

Thanks for the reply.

Philip Loenneker <Philip.Loenneker@tasmanet.com.au> writes:

> Take a look at the Mellanox ConnectX-5 series of cards. They handle
> DPDK, PVRDMA (basically SR-IOV that allows live migration between
> hosts), and can even process packets within the NIC for some models.

From what I can tell, SR-IOV/PVRDMA aren't really useful for me in
building a router that won't be doing any virtualization.

If the card can do DPDK, can it do XDP?

> The slidedeck for the presentation is here:
> https://www.ausnog.net/sites/default/files/ausnog-2019/presentations/1.9_Rhod_Brown_AusNOG2019.pdf
>
> It's heavily targeting virtualised workloads but some of the feature sets apply to bare-metal uses too.

Yeah, this won't be a virtualized environment, just a router passing
packets, dropping them, handling BGP, and collecting flows.

Chelsio cards are probably what you are looking for.

https://www.chelsio.com/terminator-6-asic/

It's closer to an ASIC than a traditional NIC, as the router/firewall rules
are pushed directly into the hardware.

I don't know how well they work with Linux, but they seem to be compatible:
https://www.chelsio.com/linux/

You will need to mess around a bit and fiddle here and there. If you don't
mind using FreeBSD instead of Linux, you could achieve a smoother and more
integrated experience.

Jean

I use DANOS with Intel XL710 10G NICs in DPDK mode for Linux-based routing.

If you’re doing routing protocols, allocate 2 CPU cores to the control plane and then a CPU core per 10G/1G interface for the dataplane, plus an extra core for good measure. So for a 4 x 10G router taking in full routes, that's 2 cores for the control plane and 5 cores for the dataplane. Those cores should be Intel Xeon E5-2600 v3/v4 or newer, and the faster the clocks, the better.

Similar CPU core allocations if you choose TNSR.
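
As a rough sketch of Jared's sizing rule above (2 control-plane cores, one dataplane core per port, plus a spare), here is the arithmetic spelled out. This is purely illustrative; it is a rule of thumb, not anything DANOS or TNSR enforces.

/* Core-count rule of thumb: 2 cores for the control plane, one
 * dataplane core per 10G/1G interface, plus one spare. */
#include <stdio.h>

static int cores_needed(int interfaces)
{
    const int control_plane = 2;
    const int dataplane = interfaces + 1; /* one per port + one spare */
    return control_plane + dataplane;
}

int main(void)
{
    for (int ports = 2; ports <= 8; ports += 2)
        printf("%d x 10G: %d cores (2 control plane, %d dataplane)\n",
               ports, cores_needed(ports), ports + 1);
    return 0;
}

For the 4 x 10G example that works out to 7 cores total, matching the 2 + 5 split above.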

micah anderson <micah@riseup.net> writes:

> Thanks for the reply.
>
> Philip Loenneker <Philip.Loenneker@tasmanet.com.au> writes:
>
>> Take a look at the Mellanox ConnectX-5 series of cards. They handle
>> DPDK, PVRDMA (basically SR-IOV that allows live migration between
>> hosts), and can even process packets within the NIC for some models.
>
> From what I can tell, SR-IOV/PVRDMA aren't really useful for me in
> building a router that won't be doing any virtualization.
>
> If the card can do DPDK, can it do XDP?

The ConnectX-5 has excellent XDP support - it's what we used for the
XDP paper:

-Toke
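
For anyone who hasn't looked at XDP before, a minimal program is quite small. The sketch below (illustrative only, not taken from the paper referenced above) just bounds-checks the packet and passes everything on to the normal kernel stack; the interface name in the attach command is an example, but the ConnectX-5's mlx5 driver can run programs like this in native (driver) mode.

/* Minimal XDP sketch.
 * Build:  clang -O2 -g -target bpf -c xdp_min.c -o xdp_min.o
 * Attach in native/driver mode (interface name is just an example):
 *   ip link set dev eth0 xdpdrv obj xdp_min.o sec xdp
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_min(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    /* Drop anything too short to even hold an Ethernet header. */
    if (data + 14 > data_end)
        return XDP_DROP;

    return XDP_PASS; /* hand the packet to the normal kernel stack */
}

char _license[] SEC("license") = "GPL";

From there the usual next steps are returning XDP_DROP or XDP_TX based on parsed headers, or XDP_REDIRECT into a devmap/cpumap to spread load across queues.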

Hi Jared,

This project looks very interesting.

Can you share with us which software or package you use in DANOS for routing? Is it a kind of command wrapper on top of FRR?

Also, it seems stable, but I am sure you have already faced some minor or weird bugs. How is support handled with DANOS? Is it community driven?

Thanks for sharing

Jean

DANOS is a full network operating system (https://www.danosproject.org/) managed by the Linux Foundation, so it is open source. AT&T is the main contributor and consumer of it so far. It evolved from Vyatta, the Linux NOS that has been around, and has passed through a couple of companies' ownership, since the early 2000s. It uses FRR for the routing control plane, and the DANOS CLI wraps it and the other software packages (VPNs, DNS, DHCP, CGNAT, etc.) together into a single config file to manage. Support for the open-source version is very responsive via GitHub, the Atlassian issue tracker, and the Matrix chat room.

If you want a commercially supported version, you can go with DANOS Vyatta Edition from IP Infusion. It uses IPI’s routing engine for the control plane instead of FRR but is very similar.

In addition to Jared’s advice, I would recommend calculating the available PCI Express bus bandwidth for whatever platform one is using.

For instance, for the Intel X710-DA4, which in a maximal scenario could carry 80 Gbps of traffic, ensure it’s in at least a PCIe 3.0 x4 slot. And calculate the total number of PCIe 3.0 lanes (or PCIe 4.0 lanes on a very new system) that exist and are connected to the CPU. There is a big difference between some of the Ryzen and Threadripper options and Intel CPUs towards the lower end of the cost range.
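
To put rough numbers on that, here is a back-of-the-envelope sketch. It assumes PCIe 3.0's 8 GT/s per lane with 128b/130b encoding (roughly 7.88 Gbit/s usable per lane, per direction) and compares slot widths against 4 x 10GbE at line rate in one direction; for reference, the X710-DA4 itself is an x8 card.

/* Back-of-the-envelope PCIe slot bandwidth vs. 4 x 10GbE line rate.
 * Assumes PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, so roughly
 * 7.88 Gbit/s of usable bandwidth per lane, per direction. */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 8.0 * 128.0 / 130.0; /* ~7.88 Gbit/s per lane */
    const double nic_gbps  = 4 * 10.0;            /* X710-DA4, one direction */

    for (int lanes = 4; lanes <= 16; lanes *= 2) {
        double slot_gbps = lanes * lane_gbps;
        printf("PCIe 3.0 x%-2d: ~%5.1f Gbit/s per direction (%s for 4 x 10GbE)\n",
               lanes, slot_gbps,
               slot_gbps >= nic_gbps ? "enough" : "not quite enough");
    }
    return 0;
}

The same arithmetic applies per direction for receive and transmit, and real traffic mixes rarely hit line rate on every port at once.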

With recent Linux kernels, if you have an Intel 510- or 710-series two- or four-port card in a slot that can’t support its full capability, you’ll get a warning in dmesg at boot time.

And do not use an Intel CPU.

Intel only has 4x PCIe lanes that are shared out into whatever configuration they claim to have and are totally unsuitable for use in a computer that actually has to be able to do high-speed I/O.

❦ 24 October 2020 09:55 -06, Keith Medcalf:

> And do not use an Intel CPU.
>
> Intel only has 4x PCIe lanes that are shared out into whatever
> configuration they claim to have and are totally unsuitable for use in
> a computer that actually has to be able to do high-speed I/O.

That's likely to be incorrect. Intel CPUs usually have 48 lanes for the
Skylake generation. The 4-lane limitation only applies to what is
connected over DMI to the PCH, which is usually used for low-bandwidth
stuff (1G NIC, SATA, x1 PCIe slots). Look at your motherboard manual to
check how many lanes are assigned to each component.

Not true; Intel Xeon E5 v3/v4 CPUs have many more lanes than that. For example, this one has 40: https://ark.intel.com/content/www/us/en/ark/products/91754/intel-xeon-processor-e5-2680-v4-35m-cache-2-40-ghz.html

Put two in the system and that's 80 lanes of PCIe Gen3.

Even newer processors in the Bronze/Silver/Gold/Platinum lineup have more lanes.

If building a lower-end/low-cost router, this is absolutely a consideration: a single-socket, regular ATX form factor, with products in the range of $165 for a motherboard and $250-400 for a CPU.

Comparing the PCIe lanes available on an Intel Core i7 to something AMD Zen/Zen 2 based (Ryzen), the AMD has far more. Some of the Intel single-socket Core i5/i7 products have just enough PCIe lanes for their own onboard gigabit NIC and one PCIe 3.0 x16 GPU for gaming purposes.

It would absolutely be a consideration if trying to build something with 8 to 12 10GbE interfaces capable of handling bursty traffic, but not flows and traffic levels that would require line rate on all ports simultaneously.