Iperf or Iperf like test points?

Initial caveat: If this is off topic please just let me know now and please suggest a good forum for this type of question/discussion.

The main premise of the question/discussion is the ability to establish/utilize random (or not-so-random) test points (like looking glasses) scattered across the planet to gauge real-time bandwidth performance on larger broadband connections.

Second caveat: I have done a bit of research trying to figure this out before posting and have had no luck to this point. If this topic has already been covered in depth or otherwise, please advise.

With that said, the problem that I am facing is that there are no consistently reliable tools that NetOps (or end users for that matter) can use to truly evaluate bandwidth performance on large pipes.

Ex: All of the test sites that I have tried from a 100M/FD-attached Linux box, riding a GigE backbone to multiple GigE transit lines, typically yield BW test results in the 3-7 Mbps range. Yet when I run Iperf across the backbone I get more reasonable results of between 80-90 Mbps TCP.

The extent of the problem is that I hand off 10M - GigE connections to my end users and they want a way to test them that is 'Off-Net'. My on-net test platforms give them great results; however, since they are on-net, the end users dismiss the results (thinking they are fixed, I guess).

To date I have not found a reasonable method of accomplishing this.

Now, with the understanding that bandwidth testing at these rates (or any rate, for that matter) can prove to be a complete waste of bandwidth simply to give someone a warm and fuzzy, I am willing to let that one go on a case-by-case basis simply to make the end user happy.

That being said, is anyone on this list aware of such a federation of Iperf nodes across the net, connected at GigE or better, to accomplish this goal? If not, I would be willing to start one and give up a server or two and some of my bandwidth to help others out who are probably experiencing (or have experienced) this type of problem in the past.

This issue is just burning up a lot of my tech support's time trying to educate the end users. I just feel that a cooperative effort that yields more accurate and consistent results may be a better way to approach this.

Thank you in advance.

The main premise of the question/discussion is the ability to
establish/utilize random (or not-so-random) test points (like looking
glasses) scattered across the planet to gauge real-time bandwidth
performance on larger broadband connections.

If you are going to do this wouldn't a tool like
pathchirp be a better idea?
http://www.spin.rice.edu/Software/pathChirp/

--Michael Dillon

With that said, the problem that I am facing is that there are no
consistently reliable tools that NetOps (or end users for that
matter) can use to truly evaluate bandwidth performance on large pipes.

Ex: All of the test sites that I have tried from a 100M/FD-attached
Linux box, riding a GigE backbone to multiple GigE transit lines,
typically yield BW test results in the 3-7 Mbps range. Yet when I
run Iperf across the backbone I get more reasonable results of between
80-90 Mbps TCP.

  So your limiting factor is likely not the network. I typically
use the 'udp' iperf test when we've been involved with customers
that don't believe they're getting the right bw.

  Here's why:

  1) Some host TCP stacks are old/broken
  2) If there is loss, TCP will be butt-slow; the UDP results
immediately show you the loss.
  3) It removes the system memory/disk from the list of things
to be concerned with. Many people try to test with FTP or HTTP and
get poor performance; we've been able to consistently show them our
network performs correctly.
  4) bandwidth test sites don't have a farm of fe/ge connected hosts
lying around waiting to be hammered to death.
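A minimal sketch of that UDP workflow, assuming iperf2 syntax (the host name is a placeholder; substitute the actual far-end test box and the customer's contracted rate):

```shell
# On the far-end test host: run iperf in server mode, accepting UDP.
iperf -s -u

# From the customer side: send a 10-second UDP stream at the contracted
# rate (here 90 Mbit/s). The server-side report shows loss and jitter
# directly, taking the host TCP stack, disk, and application out of
# the list of suspects.
iperf -c test-host.example.net -u -b 90M -t 10
```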

The extent of the problem is that I hand off 10M - GigE connections
to my end users and they want a way to test them that is 'Off-Net'. My
on-net test platforms give them great results; however, since they are
on-net, the end users dismiss the results (thinking they are fixed, I guess).

  We've not had trouble with customers understanding that we can
only control our network.

To date I have not found a reasonable method of accomplishing this.

That being said, is anyone on this list aware of such a federation of
Iperf nodes across the net, connected at GigE or better, to accomplish
this goal? If not, I would be willing to start one and give up a
server or two and some of my bandwidth to help others out who are
probably experiencing (or have experienced) this type of problem in the
past.

  We have hosts scattered around our network that we use for
iperf testing with customers when there are troubles. Most are
fe connected, but some are ge. We'd rather not see the short
bursty 100m flows across our network unless we're aware of them, as
they can easily throw off some of the stats and also look like a DoS.

This issue is just burning up a lot of my tech support's time trying
to educate the end users. I just feel that a cooperative effort that
yields more accurate and consistent results may be a better way to
approach this.

  We've seen the same issue. People just don't get it and
we've spent a lot of time educating customers.

  - jared

There's one more to add to the list, and it is typically the most
common problem for paths over the wide area: the TCP window size must be
adjusted to match the bandwidth*delay product. e.g. if you have
a 20 msec RTT and a 32KByte window, you won't be able to do any better
than ~13 Mbit/sec.
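
As a quick sanity check of that arithmetic (using the 32 KByte window and 20 msec RTT from the example above), the window-limited throughput cap is simply window/RTT:

```shell
# TCP throughput is capped at window/RTT when the window is the
# bottleneck, regardless of how fat the pipe is.
awk 'BEGIN {
    window_bytes = 32 * 1024      # 32 KByte TCP window
    rtt_seconds  = 0.020          # 20 msec round-trip time
    mbps = window_bytes * 8 / rtt_seconds / 1e6
    printf "max throughput: %.1f Mbit/s\n", mbps
}'
```

which confirms the ~13 Mbit/sec ceiling; to fill a 100 Mbit/s path at that RTT, the window would need to be on the order of 250 KBytes.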