L2 redundant VPN

Hi networking guys,

    I need some help :-). We are trying to find a reliable L2 VPN
solution for our department. The task is to connect two remote data
centers, each of them connected by two 1 Gbps lines (with link
aggregation). Only IP connectivity is available between the data
centers (so there is no possibility of building a circuit on top of
MPLS or anything like that). The basic problem is that high reliability
is required, so the solution has to be fully redundant.

The initial idea was two OpenVPN servers in each data center plus two
switches (HP E5800) joined into one logical switch via IRF. Link
failure detection is based on LACP packets exchanged between the two
data centers. The solution works; however, OpenVPN's performance is
really poor. The maximum we were able to get out of this configuration
was about 100 Mbps. We expect at least 500 Mbps (or more in the future).

We then thought about L2TP on some Cisco/HP (H3C) device; however,
there is little information about the performance of that solution,
and I am not sure how failure detection would work in a redundant
configuration.

Does anybody have experience with a similar solution, or at least any ideas?

Thanks a lot for your thoughts


Can you enable AES-NI on your OpenVPN servers? Any newer Intel Xeon
chipset should support it, but it is often disabled in the BIOS by default.
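A quick way to check, assuming a Linux host with OpenSSL installed (the
"aes" CPU flag indicates AES-NI support):

```shell
# Check whether the CPU advertises the AES-NI instruction set (Linux):
if grep -q -w aes /proc/cpuinfo; then
    echo "AES-NI available"
else
    echo "AES-NI not advertised (check BIOS settings)"
fi
# To see the speed difference, compare the two benchmark modes:
#   openssl speed -evp aes-128-cbc   (uses AES-NI when available)
#   openssl speed aes-128-cbc        (plain software implementation)
```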

There are more tuning tips at http://community.openvpn.net/openvpn/wiki/Gigabit_Networks_Linux
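For illustration, the kinds of knobs that page covers look like this in
an OpenVPN config file (the values here are starting points to
benchmark, not tuned numbers):

```
tun-mtu 9000      # larger tunnel MTU, only if the transport path allows it
fragment 0        # disable OpenVPN-internal fragmentation
mssfix 0          # disable TCP MSS clamping
sndbuf 393216     # larger UDP send buffer
rcvbuf 393216     # larger UDP receive buffer
```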

Alternatively, just disable encryption with "--cipher none" if you only care about the L2 bridging and not the encryption aspect. You should get a huge performance boost through the tunnel, and it would be the same thing as dropping a dedicated circuit in there.
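A minimal sketch of what that looks like in the config on both ends
(the remote address is a placeholder; "cipher none" and "auth none"
trade all privacy and integrity protection for speed):

```
dev tap0           # L2 (bridged) interface
remote 192.0.2.2   # placeholder: the other site's public address
cipher none        # no payload encryption
auth none          # no HMAC packet authentication
```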

Of course, encryption is generally a Good Thing(tm), and the AES-NI stuff is phenomenal, but it's not necessarily required when you're just trying to get a link set up between two sites and you were considering MPLS anyway.

- Pete

Just throwing out another option --
if you can do 9K MTUs over the provider's IP connection (or even 4K
for that matter), you might want to look at deploying an MPLS overlay.
Imagine having a pair of MPLS PEs at each datacenter, so four in
total. Then link those in a mesh using GRE tunnels over the existing
IP transport. Run MPLS over these four boxes and build L2 pseudowires
across. Here's a really basic config of this:
https://w.ntwk.cc/working-on-atompls/. For lab testing, a pair of
3725s on 12.4T will do the trick.
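As a rough sketch of one PE in that design (IOS-style syntax; all
addresses and interface names here are placeholders, see the linked
config for a working example): a GRE tunnel toward a remote PE carries
MPLS, and the attachment circuit is cross-connected into an EoMPLS
pseudowire.

```
interface Tunnel0
 ip address 10.0.0.1 255.255.255.252
 mpls ip
 tunnel source Loopback0
 tunnel destination 198.51.100.2
!
interface GigabitEthernet0/1
 description attachment circuit toward the DC switch
 xconnect 203.0.113.2 100 encapsulation mpls
```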


> Run MPLS over these four boxes and build L2 pseudowires across

Using link bundling and one router at each end gives faster convergence
and is cheaper. You can do L2TPv3 if you can't have MPLS.
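For the L2TPv3 variant, the single-router-per-site setup is roughly
this shape (IOS-style sketch; peer address and VC ID are placeholders):

```
pseudowire-class pw-l2tpv3
 encapsulation l2tpv3
 ip local interface Loopback0
!
interface GigabitEthernet0/1
 description attachment circuit toward the DC switch
 xconnect 198.51.100.2 200 pw-class pw-l2tpv3
```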