Validating multi-path in production?

Hello all.
Over time we’ve run into both bugs and human error, in our own gear and in our partner networks’ gear, specifically affecting multi-path forwarding at pretty much every layer: multi-chassis LAG, ECMP, and BGP multipath. (Yes, I am a corner-case magnet. Lucky me.)

Some of these issues were fairly obvious when they happened, but some were really hard to pin down.

We’ve found that typical network monitoring tools (Observium & Smokeping, not to mention plain old ping and traceroute) can’t really detect a hashing-related or multi-path-related problem: either the packets get through or they don’t.

Can anyone recommend either tools or techniques to validate that multi-path forwarding either is, or isn’t, working correctly in a production network? I’m looking to add something to our test suite for when we make changes to critical network gear. Almost all the scenarios I want to test only involve two paths, if that helps.

The best I’ve come up with so far is to have two test systems (typically VMs) that use adjacent IP addresses and adjacent MAC addresses, and test both inbound and outbound to/from those, blindly trusting/hoping that hashing algorithms will probably exercise both paths.

Some of the problems we’ve seen show that merely looking at interface counters is insufficient, so I’m trying to find an explicit proof, not an implicit one.

Any suggestions? Surely other vendors and/or admins have screwed this up in subtle ways enough times that this knowledge exists? (My Google-fu is usually pretty good, but I’m striking out - maybe I’m using the wrong terms.)

-Adam

LAG - Micro BFD (RFC 7130) provides per-constituent liveness. MLAG is much more complicated (there’s a proposal in the IETF, but it isn’t progressing), so LACP is pretty much the only option.
ECMP could use good old single-hop BFD per neighbor pair.
Practically - if you introduce enough flows with one of the hash keys changing monotonically, eventually you’d exercise every path available (there’s a rough sketch of that kind of sweep at the end of this message);
on its own that won’t give you end-to-end proof, so it’s usually integrated with a form of sFlow/NetFlow to provide “proof of transit”.
In-band telemetry (choose your poison) does provide the IDs of the devices a packet has traversed, as well as, in some cases, proof of transit (POT).
Finally - there are public Microsoft presentations on how we use IP-in-IP encapsulation to traverse a particular path on wide-radix ECMP fabrics.
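
For the flow sweep, roughly something like this (the far-end address, port range, and payload are placeholders; it assumes a listener on the other test box recording what arrives):

```python
#!/usr/bin/env python3
# Sweep UDP source ports so consecutive probes present different 5-tuples
# to the hashing stage. Addresses/ports are placeholders, not real ones.
import socket
import time

FAR_END = ("192.0.2.10", 9000)      # assumed listener on the far-end test VM
SOURCE_PORTS = range(33000, 33256)  # 256 source ports -> 256 distinct hash inputs

for sport in SOURCE_PORTS:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", sport))                                # pin the changing hash key
    s.sendto(f"probe sport={sport}".encode(), FAR_END)
    s.close()
    time.sleep(0.01)                                   # pace the probes slightly
```

Which member each source port lands on is hash- and vendor-specific, so this only gives statistical coverage; the far end (or your sFlow/NetFlow/telemetry) still has to tell you which probes actually made it.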

Cheers,

Jeff

Add RFC 5837 to your RFPs.

If the goal is to test that traffic is being distributed across multiple links based on traffic headers, then you can definitely roll your own. I think the problem is orchestrating it (feeding your topology data into the tool, running the tool, getting the results out, interpreting the results, etc.).

A couple of public examples:
https://github.com/facebookarchive/UdpPinger
https://www.youtube.com/watch?v=PN-4JKjCAT0

If you do roll your own, you need to tailor the tests to your topology and your equipment. For example, you can have two VMs as you mentioned, one at each end of the network. Then, if your network uses a 5-tuple hash for ECMP inside the core, you could send many flows between the two VMs, rotating the source port, to ensure all links in a LAG or all ECMP paths are used.
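
As a rough sketch of the receive side of that kind of test (the listen port and expected sweep range are placeholders and need to match whatever your sender does), the far-end VM just records which probe source ports it actually saw, so you can diff that against what was sent and spot flows that silently disappeared:

```python
#!/usr/bin/env python3
# Far-end listener for a source-port sweep: report which expected source
# ports never arrived. Numbers are illustrative and must match the sender.
import socket

LISTEN_PORT = 9000
EXPECTED_SPORTS = set(range(33000, 33256))
IDLE_TIMEOUT = 30.0                        # give up once probes dry up

seen = set()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", LISTEN_PORT))
sock.settimeout(IDLE_TIMEOUT)
try:
    while seen != EXPECTED_SPORTS:
        _, (_src, sport) = sock.recvfrom(2048)
        if sport in EXPECTED_SPORTS:
            seen.add(sport)
except socket.timeout:
    pass

missing = sorted(EXPECTED_SPORTS - seen)
if missing:
    print(f"{len(missing)} probe flows never arrived; first few sports: {missing[:10]}")
else:
    print("all probe flows arrived")
```

A missing flow only tells you that some path is black-holing traffic, not which one; mapping source ports back to physical members is where the per-device hashing knowledge comes in.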

It’s tricky to know the hashing algorithm for every device type in your network, and for each traffic type on each device type, if you have a multi-vendor network. And if your network carries a mix of IPv4, IPv6, PPP, MPLS L3 VPNs, MPLS L2 VPNs, GRE, GTP, IPsec, etc., the number of permutations of tests you need to run, and the result sets you need to parse, grows very rapidly.

Cheers,
James.

The problem I’m looking to solve is the logical opposite, I think: I want to demonstrate that no links are malfunctioning in such a way that packets on a certain path are getting silently dropped. Which has some “proving a negative” aspects to it, unfortunately.

I think the only way I can demonstrate it is to determine that every single multi-path/hashed-member link is working, which is… hard. Especially if I need to deal with the combinatoric explosion - I think I can skip that part.

-Adam

If your ECMP hashing algorithm considers L4 data, I can recommend giving the TCP mode of the standard Linux MTR package a try. While the destination port remains constant (IIRC it defaults to TCP/80), each iteration will use a different TCP source port, thereby introducing enough entropy to see whether you get packet loss on a given number of links in an ECMP-heavy forwarding path. This always worked wonders for me back in the day when hunting down a broken port in a pair of 5x10G LACP bundles (i.e. 10 different possible paths), or when trying to find the rotten switching fabric in a chassis from a vendor with less-than-stellar debugging capabilities. Do keep in mind that you need to keep MTR running for a longer period of time to get a statistically significant amount of data before concluding anything from the percentages; say, 10 minutes is better than 1 minute.
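
If you’d rather script that trick than eyeball MTR output, the same source-port entropy fits in a few lines of Python (this is not MTR, just the same principle; the target address and port are placeholders, and you need something on the far side that actually answers on TCP/80):

```python
#!/usr/bin/env python3
# Rotate the TCP source port so each connection attempt can hash onto a
# different ECMP/LAG member; count attempts that vanish (black holes).
import socket

TARGET = ("192.0.2.10", 80)         # placeholder: something answering on TCP/80
SOURCE_PORTS = range(34000, 34200)  # 200 different source ports
TIMEOUT = 2.0

failures = 0
for sport in SOURCE_PORTS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.settimeout(TIMEOUT)
    try:
        s.bind(("", sport))         # fix the source port for this attempt
        s.connect(TARGET)           # SYN/SYN-ACK is enough to prove the path
    except socket.timeout:
        failures += 1               # nothing came back: likely a black-holed path
    except ConnectionRefusedError:
        pass                        # got an RST back, so the path itself works
    except OSError:
        failures += 1               # anything else: treat as a failed probe
    finally:
        s.close()

total = len(SOURCE_PORTS)
print(f"{failures}/{total} attempts failed ({100.0 * failures / total:.1f}%)")
```

As with MTR, let it run long enough that the percentages mean something.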

Best regards,
Martijn

It sounds like you want something like this:

https://github.com/facebookarchive/fbtracert

We have an internal tool that works on generally similar principles; it works pretty well.

(I have no relationship with Facebook; I just always remember their presos on UDPinger and FBTracert from my first NANOG meeting for whatever reason. :-) )
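
For anyone who just wants the gist of what fbtracert does without adopting the whole tool, here’s a rough Scapy sketch of the idea (needs root for raw sockets; the destination, source-port range, and TTL ceiling are arbitrary):

```python
#!/usr/bin/env python3
# Traceroute the same destination once per source port and print each
# flow's per-hop responders (fbtracert-like idea, not the real tool).
# Needs root for raw sockets; addresses/ports/TTLs are placeholders.
from scapy.all import IP, UDP, sr1   # pip install scapy

DEST = "192.0.2.10"
SOURCE_PORTS = range(33434, 33442)   # 8 flows = up to 8 distinct paths
MAX_TTL = 12

for sport in SOURCE_PORTS:
    hops = []
    for ttl in range(1, MAX_TTL + 1):
        reply = sr1(IP(dst=DEST, ttl=ttl) / UDP(sport=sport, dport=33434),
                    timeout=1, verbose=0)
        if reply is None:
            hops.append("*")             # silent hop, or a dropped probe
        else:
            hops.append(reply.src)
            if reply.src == DEST:
                break                    # reached the destination
    print(f"sport {sport}: {' -> '.join(hops)}")
```

Flows whose traces die at the same intermediate hop, while their siblings make it all the way, point straight at the suspect member link.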

Not sure if this has changed, but the last time I looked into it, micro BFD for LAGs was only supported and functional on point-to-point Ethernet links.

In cases where you are running the LAG across a LAN, it did not apply.

We gave up running BFD on LAGs over LANs because of this issue.

Mark.