RFC2544 Testing Equipment

Greetings all,
  
Looking for a good test set. Primary use will be testing L2 circuits
(it'll technically be VPLS, but the test set will just see L2). Being able
to test routed L3 would also be useful. Most of the sets I've seen are
two-sided: a "reflector" at the remote side, and the test set in hand run
by the technician.
  
Looking to test up to 1Gb/s at various packet sizes, measuring packet loss,
jitter, etc. Primarily copper, but if it had some form of optical port, I
wouldn't complain. Outputting a report that we can provide to the customer
would be useful, but isn't mandatory. It doesn't need anything fancy, like
MPLS awareness, VLAN IDs, etc.

   Nick Olsen
Sr. Network Engineer
Florida High Speed Internet
(321) 205-1100 x106

When we had to do this once in a blue moon, we just bought a pair of old Agilent FrameScopes off eBay. They worked great, but we had issues getting reporting out of them. They had RJ45 and SFP ports on them.

JW, have you moved on to EtherSAM? That's what I'd be looking for myself.

Viavi, VeEX and EXFO all do products in this space; Viavi/JDSU and VeEX
do quite low-cost handhelds with a limited feature set (with reporting
to USB sticks et al.), while EXFO's handheld is a bit chunkier but a bit
more capable.

I quite liked the VeEX MX100e+ and Viavi SmartClass Ethernet units;
they'll both do RFC2544. Having said that, you probably want to be
testing Y.1564 (which those boxes will both do) if you're doing turn-up
testing. Viavi and EXFO both have their own "flavours" which make things
cleverer/easier if you have a basic environment, but I've found myself
using the standards-based versions most of the time really. Lots of
options for reflectors - all the vendors have them in various guises. If
you have a VPLS setup, you'd probably go from a 1U box next to your
VPLS box, through the VPLS pipe, to the endpoint.

We ended up buying a pair of EXFO FTB-1s, but we're doing RFC2544 etc. at
10G, so it's a slightly different kettle of fish.

James

JDSU make some nice ones that we use to qualify cell tower backhaul. Not cheap, though.

I could recommend the Accedian MetroNID for that purpose. Copper + SFP. L2 and L3 testing.
Pawel

If you are just testing forwarding at layer 2 and have no budget, you
can use free software and a laptop (which covers your copper requirement).
I've been writing Etherate (https://github.com/jwbensley/Etherate) and we
use that as well as hardware testers.

Cheers,
James.

Cool. Seems you're using AF_PACKET, which actually makes it unique.
iperf/netperf etc. use UDP or TCP sockets, so UDP performance is just
abysmal; you can't saturate a 1GE link with any reliability, so
measuring, for example, packet loss is not possible at all.
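
To make that concrete, here is a rough sketch of the AF_PACKET approach:
a raw socket that hands complete Ethernet frames straight to the driver,
skipping the UDP/IP stack entirely. The interface name, MAC addresses and
EtherType are placeholders, not taken from Etherate or any other tool:

/* Sketch only: blast minimum-size Ethernet frames via an AF_PACKET
 * raw socket. Needs CAP_NET_RAW / root. "eth0" and the frame
 * contents are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_ll sll = {0};
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = if_nametoindex("eth0");  /* placeholder test port */

    /* A 64-byte frame: broadcast dst MAC, locally administered src MAC,
     * an experimental EtherType (0x88B5), rest zero padding. */
    unsigned char frame[64] = {0};
    memset(frame, 0xff, 6);                     /* dst MAC: broadcast */
    frame[6] = 0x02; frame[11] = 0x01;          /* src MAC: 02:00:00:00:00:01 */
    frame[12] = 0x88; frame[13] = 0xb5;         /* EtherType: local experimental */

    for (long i = 0; i < 1000000; i++)          /* ~1M frames, no pacing */
        sendto(fd, frame, sizeof(frame), 0,
               (struct sockaddr *)&sll, sizeof(sll));

    close(fd);
    return 0;
}

Pair that with a receiver counting frames and sequence numbers on the far
side and you can measure loss directly, which a UDP socket pushed to line
rate can't give you reliably.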

I've been meaning to write an AF_PACKET-based UDP sender/receiver and
have gotten pretty far with a friend of mine on a Rust version; we can
congest 1GE (at minimum-size frames) on Linux reliably and actually
tell if you're lossy. It has a server/client design, where the client
sends JSON-based messages over a control channel asking the server to
receive or send, and specifying exactly what.
Alas, we're only 80% there, and we seem to struggle to find time to
polish it for an initial release.

We definitely need a tool like iperf which performs at least up to 1GE,
and AF_PACKET can do that; a UDP socket cannot. Alas, 10GE is still a
pipe dream for anything as portable as iperf, as you'd need to use DPDK,
netmap or equivalent, which take the NIC away from the kernel stack.
There are quite a few options for that use case, but no good option for
the use case where you want at least 1GE but the NIC has to stay a
normal kernel interface.

Hi Saku,

Yeah, AF_PACKET sockets are used, and you really need to be on a 4.x
kernel for better performance (update your NIC firmware etc.). The
problem with Etherate is that it uses Ethernet for both the test data and
the control data, and since Ethernet is not lossless, it does some strange
(read: lame) things, like sending some control or data frames three times
to try to ensure the other side receives them when there is frame loss.

Yeah, 1G with large frames is doable. 10G with large frames is also
doable with a fast CPU. Etherate is single-threaded though, so you'll
not get anywhere near 10G with 64-byte frames in Etherate. I have
started writing a multi-threaded version which will use TCP sockets to
exchange control data but still use AF_PACKET sockets for data-plane
traffic.

10G with 64-byte packets should be achievable (still writing it, so not
100% confirmed yet) when using the PACKET_MMAP Tx/Rx rings in
AF_PACKET, which is what the new, aptly named EtherateMT
(multi-threaded) uses. One can then use multiple threads (each on a
different CPU core), each with its own Tx or Rx ring buffer, to
push packets to the NIC, and we can use RSS on the NIC and assign each
NIC Tx queue to a separate core as well for processing NET_TX and NET_RX
IRQs.
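
For anyone who hasn't played with PACKET_MMAP, the ring setup looks
roughly like this. This is a cut-down sketch, not EtherateMT's actual
code; the ring geometry and interface name are just example values:

/* Sketch: create an AF_PACKET socket with a mmap'd PACKET_TX_RING.
 * Error handling trimmed; sizes and "eth0" are illustrative. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    struct tpacket_req req = {
        .tp_block_size = 4096,   /* one page per block (example) */
        .tp_block_nr   = 256,
        .tp_frame_size = 2048,   /* two frames per block */
        .tp_frame_nr   = 512,    /* block_nr * (block_size / frame_size) */
    };
    if (setsockopt(fd, SOL_PACKET, PACKET_TX_RING, &req, sizeof(req)) < 0) {
        perror("PACKET_TX_RING"); return 1;
    }

    /* Map the ring; frames are written here, not passed per-syscall. */
    size_t ring_len = (size_t)req.tp_block_size * req.tp_block_nr;
    void *ring = mmap(NULL, ring_len, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    /* Bind to an interface (name is illustrative). */
    struct sockaddr_ll sll = {0};
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ALL);
    sll.sll_ifindex  = if_nametoindex("eth0");
    if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
        perror("bind"); return 1;
    }

    /* Workflow from here: write frames into slots marked
     * TP_STATUS_AVAILABLE, flag them TP_STATUS_SEND_REQUEST, then
     * kick the kernel once with: send(fd, NULL, 0, 0); */

    munmap(ring, ring_len);
    close(fd);
    return 0;
}

Each worker thread gets its own socket and ring like this, walks the ring
flagging frames, and flushes a whole batch with a single send(), so there
is no per-packet sendto() syscall.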

So it might take 12 or 16 cores, but it should be doable in EtherateMT
while keeping the iperf-like portability, whereas DPDK can do this on a
single core (pkt-gen and MoonGen etc.). However, EtherateMT would
ideally use only kernel-native features (no 3rd-party libraries
required, and no custom kernel compilation to enable optional modules).

Yeah, Rust seems cool; it's on my "to-learn" list along with Go and
seven thousand other things, so I'm writing in C for now.

Cheers,
James.

We used VeEX for a while and had our CO techs run around with hand-held VeEX testers and run tests from them to a VeEX loopback device... I config'd MPLS pseudowires between them. We don't really do this anymore... we now roll out Accedian MetroNIDs and MetroNodes, which have a lot of this RFC2544 and Y.1731 (Accedian PAA) functionality built in...

-Aaron

> Looking to test up to 1Gb/s at various packet sizes, measuring packet loss,
> jitter, etc. Primarily copper, but if it had some form of optical port, I
> wouldn't complain. Outputting a report that we can provide to the customer
> would be useful, but isn't mandatory. It doesn't need anything fancy, like
> MPLS awareness, VLAN IDs, etc.
>
> ...
>
> If you have a VPLS setup, you'd probably go from a 1U box next to your
> VPLS box, through the VPLS pipe, to the endpoint.

If you are using VPLS then you need to send 1Gbps of broadcast traffic
and see how that cripples your network, and send 1Gbps of BPDUs and ARP
requests/responses etc. to see how that ruins everything, as your
customer will loop it at some point. Also check how your PEs cope and
whether storm-control or similar is working.
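
As a sketch of what I mean (placeholder interface and addresses, and
obviously only point this at a lab or test VPLS instance), a broadcast
ARP request flooder is only a few lines on top of an AF_PACKET socket:

/* Sketch: flood broadcast ARP requests out of a test port to see how
 * a VPLS instance and storm-control cope. Interface, MAC and IPs
 * (TEST-NET-1 range) are placeholders. Needs CAP_NET_RAW / root. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ARP));
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_ll sll = {0};
    sll.sll_family   = AF_PACKET;
    sll.sll_protocol = htons(ETH_P_ARP);
    sll.sll_ifindex  = if_nametoindex("eth0");   /* placeholder port */

    /* Ethernet header + ARP request, built by hand. */
    unsigned char f[60] = {0};
    unsigned char src[6] = {0x02,0,0,0,0,0x01};  /* fake src MAC */
    memset(f, 0xff, 6);                          /* dst MAC: broadcast */
    memcpy(f + 6, src, 6);
    f[12] = 0x08; f[13] = 0x06;                  /* EtherType: ARP */
    f[14] = 0x00; f[15] = 0x01;                  /* htype: Ethernet */
    f[16] = 0x08; f[17] = 0x00;                  /* ptype: IPv4 */
    f[18] = 6;    f[19] = 4;                     /* hlen, plen */
    f[20] = 0x00; f[21] = 0x01;                  /* oper: request */
    memcpy(f + 22, src, 6);                      /* sender MAC */
    unsigned char spa[4] = {192, 0, 2, 1};       /* sender IP */
    memcpy(f + 28, spa, 4);
    /* target MAC (offset 32) left zero */
    unsigned char tpa[4] = {192, 0, 2, 2};       /* target IP */
    memcpy(f + 38, tpa, 4);

    while (1)                                    /* flood until killed */
        sendto(fd, f, sizeof(f), 0,
               (struct sockaddr *)&sll, sizeof(sll));
    return 0;
}

Every PE in the instance has to flood those frames to every site, so it
exercises exactly the replication path a customer loop would hit.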

We had an issue with a VPLS instance where different models of edge PE
had their core-facing interfaces built in different ways; some had a
physical interface configured facing the core, some a sub-interface,
some an SVI/BVI/BDI, etc. It turned out that device X won't tunnel PPP
packets over VPLS/pseudowires when the core-facing interface is an
SVI, and model Y will, but not when using EVCs.

People usually test TCP/UDP over IPv4, which doesn't tell you much
about what your equipment/service can or can't do, or how it will
fail.

Cheers,
James.