Date: Tue, 25 Jun 2002 20:02:56 -0700
From: Alan Sato
What are some tools to test bandwidth performance? I've used
iperf, but are there other tools or ways to generate traffic
for testing purposes to see a link's maximum capacity?
Especially anything greater than 100 Mb/s.
pchar
Not terribly accurate on certain links, but I think that's the
nature of the beast. Jitter makes bandwidth measurements a
tedious and error-prone process.
Eddy
I've found IPERF to work quite well. TTCP is also great. For a commercial
solution, you may want to look for products from companies such as IXIA.
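For the curious, here's roughly what iperf/TTCP are doing under the hood: a
memory-to-memory TCP blast, with the byte count and elapsed time turned into
Mbit/s. A rough Python sketch follows (the port and block size are arbitrary
choices of mine; an interpreted sender like this will top out well below what
the real tools can push, so treat it as illustration, not a replacement):

# Minimal TCP throughput test, in the spirit of ttcp/iperf (illustrative only).
# Run "python tcptest.py server" on one box and
# "python tcptest.py client <host>" on the other.
import socket, sys, time

PORT = 5001              # arbitrary placeholder port
BLOCK = 64 * 1024        # send/receive in 64 KB chunks
DURATION = 10            # seconds to transmit

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    s.listen(1)
    conn, addr = s.accept()
    total, start = 0, time.time()
    while True:
        data = conn.recv(BLOCK)
        if not data:
            break
        total += len(data)
    elapsed = time.time() - start
    print("received %d bytes in %.2f s = %.2f Mbit/s"
          % (total, elapsed, total * 8 / elapsed / 1e6))

def client(host):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, PORT))
    buf = b"x" * BLOCK
    total, start = 0, time.time()
    while time.time() - start < DURATION:
        s.sendall(buf)
        total += len(buf)
    s.close()
    elapsed = time.time() - start
    print("sent %d bytes in %.2f s = %.2f Mbit/s"
          % (total, elapsed, total * 8 / elapsed / 1e6))

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])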
On Tue, Jun 25, 2002 at 08:02:56PM -0700, Alan Sato mooed:
> What are some tools to test bandwidth performance? I've used
> iperf, but are there other tools or ways to generate traffic for
> testing purposes to see a link's maximum capacity? Especially anything
> greater than 100 Mb/s.
CAIDA lists them better than I can:
http://www.caida.org/tools/taxonomy/performance.xml
Though I'll editorialize: for more than 100 Mbit/s, you're moderately out
of luck. Not many of the tools work really well above 100 Mbit/s (or they
haven't been evaluated at those rates...). But play with 'em on a fast
workstation; it'd be interesting to hear what you find.
-Dave
Whatever happened to using NTP between sites? It works well if you have
reference clocks at each site, and sorta works with stratum 1/2 servers and
no local clock.
I think they are talking about generating an OC-3's worth
of traffic. While you could fill it all up with NTP packets
as one method, I do not believe it will create the desired result.
But yes, if you are wanting to measure
latency across your network or a circuit, NTP, when properly
synchronized, can be quite a useful tool.
- jared
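To put a number on that: even a one-shot SNTP exchange gives you both the
round-trip delay and the clock offset, which is the raw material NTP's
filtering works from. A minimal Python sketch (the server name is a
placeholder of mine; ntpd with a drift file and peer filtering will give far
better numbers than a single probe):

# One-shot SNTP probe: round-trip delay and clock offset to an NTP server.
# Illustrative only; ntpd/ntpq give you filtered, long-term statistics.
import socket, struct, time

NTP_EPOCH = 2208988800          # seconds between 1900-01-01 and 1970-01-01
SERVER = "ntp.example.net"      # placeholder; use your own stratum 1/2 server

def sntp_probe(server):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    pkt = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client), rest zero
    t1 = time.time()
    s.sendto(pkt, (server, 123))
    data, _ = s.recvfrom(512)
    t4 = time.time()
    # Receive (t2) and transmit (t3) timestamps from the server reply.
    sec2, frac2, sec3, frac3 = struct.unpack("!4I", data[32:48])
    t2 = sec2 - NTP_EPOCH + frac2 / 2.0**32
    t3 = sec3 - NTP_EPOCH + frac3 / 2.0**32
    delay = (t4 - t1) - (t3 - t2)            # round-trip path delay
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # local clock offset
    return delay, offset

if __name__ == "__main__":
    d, o = sntp_probe(SERVER)
    print("delay %.3f ms, offset %.3f ms" % (d * 1000, o * 1000))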
That will give you latency for a specific fixed packet size, which may not be
at all correlated with actual bandwidth. You basically end up having to
assume that any jitter in the RTT is based on queues in the routers along
the way and from that extrapolate what the bandwidth was.
And of course, the first time you hit a provider whose traffic engineering
moves NTP packets to the front of the queue to minimize jitter, the
measurement becomes useless for estimating bandwidth...
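To make the extrapolation concrete: the pchar/pathchar-style trick is to
compare the *minimum* RTT at two probe sizes, attribute the difference to
serialization time, and invert that for a bandwidth figure. A toy Python
sketch (the RTT samples are invented for illustration, and it assumes a
symmetric echo-style probe such as ping, where the reply carries the same
payload back):

# Toy pathchar-style estimate: attribute the change in *minimum* RTT between
# two probe sizes to serialization delay on the path, and invert for bandwidth.
# The sample RTTs below are invented for illustration.
rtt_small_ms = [1.21, 1.19, 1.25, 1.18, 1.30]   # RTTs for  64-byte probes (ms)
rtt_large_ms = [1.92, 1.88, 1.95, 1.87, 2.10]   # RTTs for 1500-byte probes (ms)

size_small, size_large = 64, 1500               # probe sizes in bytes

extra_bits = (size_large - size_small) * 8
extra_seconds = (min(rtt_large_ms) - min(rtt_small_ms)) / 1000.0

# An echo-style probe is serialized in both directions, so halve the extra
# time; queueing noise is (hopefully) removed by taking the minimums.
bw_bps = extra_bits / (extra_seconds / 2)
print("rough path serialization bandwidth: %.1f Mbit/s" % (bw_bps / 1e6))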
I'm not sure what you are saying regarding filling up an OC-3 with NTP, but
I do know you can calculate simple latency measurements from normal NTP,
provided it's strategically analyzed from remote sources and correctly
configured, i.e. all stratum 1 or stratum 2, with drift.
I won't say I'm the expert at this; I'm the bb wiretap guy :), not the NTP
latency guy. But I've been around a bit.
-M
> What are some tools to test bandwidth performance? I've used iperf, but
> are there other tools or ways to generate traffic for testing purposes to
> see a link's maximum capacity? Especially anything greater than 100 Mb/s.
Realistically, you will need commercial hardware/software to do this
properly. Smartbits and Shomiti are two examples (Shomiti is less than
user-friendly, but it can do almost anything).
Hi All - Just to prove to the list's management that I am a techie too, I
submit the following -
On Wed, Jun 26, 2002 at 06:18:00AM -0700, todd glassey mooed:
> Oh and use something like a SNIFFER to generate the traffic. Most of what
> we know of as commercial computers cannot generate more than 70% to 80% of
> capacity on whatever network they are on, because of driver overhead, OS
> latency, etc. It was funny, but I remember testing FDDI on a UnixWare-based
> platform and watching the driver suck 79% of the system into the floor.
Btw, if you've got a bit of time on your hands, the Click router
components have some extremely-low-overhead drivers (for specific
ethernet cards under Linux). They can generate traffic at pretty
impressive rates. They used them for testing DOS traffic for a while.
http://pdos.lcs.mit.edu/click/
(Most of the driver overhead you see is interrupt latency; click
uses an optimized polling style to really cram things through). Also,
the new FreeBSD polling patches should make it so you can get more
throughput from your drivers when doing tests. I understand there are
similar things for Linux.
-Dave
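The interrupt-latency point is easy to put numbers on: at minimum frame size,
filling the link means one packet (and, naively, one interrupt) every few
microseconds, which is exactly what polling drivers avoid. A back-of-the-
envelope Python sketch (standard Ethernet framing overheads; the OC-3 line
just uses its roughly 149 Mb/s payload rate for scale, ignoring SONET
framing):

# Back-of-the-envelope: packets per second needed to fill a link at a given
# frame size. At minimum-size frames this is also the interrupt rate if the
# NIC interrupts once per packet, which is why polling/interrupt-coalescing
# drivers (Click, the FreeBSD polling patches) matter for traffic generation.
def pps(link_bps, frame_bytes):
    # 802.3 on-the-wire cost per frame: preamble+SFD (8) + frame + IFG (12)
    wire_bits = (frame_bytes + 8 + 12) * 8
    return link_bps / wire_bits

for name, rate in [("100baseT", 100e6), ("OC-3 (~149 Mb/s payload)", 149e6),
                   ("GigE", 1e9)]:
    print("%-26s %8.0f pps at 64B   %7.0f pps at 1500B"
          % (name, pps(rate, 64), pps(rate, 1500)))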
On Wed, Jun 26, 2002 at 06:18:00AM -0700, todd glassey mooed:
I have never been referred to as bovine before. I usually describe myself as
a small polar bear... Hmmm.
> > Oh and use something like a SNIFFER to generate the traffic. Most of what
> > we know of as commercial computers cannot generate more than 70% to 80% of
> > capacity on whatever network they are on, because of driver overhead, OS
> > latency, etc. It was funny, but I remember testing FDDI on a UnixWare-based
> > platform and watching the driver suck 79% of the system into the floor.
>
> Btw, if you've got a bit of time on your hands, the Click router
> components have some extremely-low-overhead drivers (for specific
> ethernet cards under Linux).
Good point.
> They can generate traffic at pretty
> impressive rates. They used them for testing DOS traffic for a while.
Still, there are very few parametric engines that will generate more than
100 Mb/s of traffic continuously.
> (Most of the driver overhead you see is interrupt latency;
That depends on which OS you are running; whatever encapsulation or other
packaging/unpackaging is done in the driver also accounts for a substantial
amount of the compute model. If those services are done mostly in hardware,
the systems I have played with will give you up to about 80% of capacity. And
on Ethernet that is not collision-free (i.e. not run as full duplex), you then
have to deal with the line characteristics as well, so with both engines
competing to flood the net you may actually get less than 80% total
performance...
> click uses an optimized polling style to really cram things through). Also,
> the new FreeBSD polling patches should make it so you can get more
> throughput from your drivers when doing tests. I understand there are
> similar things for Linux.
The Linux Router Project has similar features.
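If you want to see how close a particular host actually gets to that 70-80%
figure, the crudest test is to blast UDP as fast as the stack allows and
compare the offered load against line rate. A Python sketch (the target
address is a placeholder from the TEST-NET documentation range; an
interpreted sender adds its own overhead on top of the driver/OS costs being
discussed, so treat the result as a floor, and only point it at a link you
own):

# Crude UDP traffic generator: send as fast as the socket layer allows and
# report the achieved offered load. Target address is a placeholder; expect
# the result to fall well short of line rate on a commodity host.
import socket, time

TARGET = ("192.0.2.1", 9)     # placeholder: TEST-NET address, discard port
PAYLOAD = b"\0" * 1470        # keep under a typical 1500-byte Ethernet MTU
DURATION = 10                 # seconds

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent_bytes = 0
start = time.time()
while time.time() - start < DURATION:
    s.sendto(PAYLOAD, TARGET)
    sent_bytes += len(PAYLOAD)
elapsed = time.time() - start
print("offered %.1f Mbit/s (%.0f packets/s)"
      % (sent_bytes * 8 / elapsed / 1e6, sent_bytes / len(PAYLOAD) / elapsed))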