Hi all -
In April, Merit received funding from the Resource Allocation Committee
(RAC) to develop and evolve a prototype ISP Statistics Collection And
Reporting Facility (NetSCARF) package. The basic idea is to make it dead
easy for ISPs to collect and report data about their part of the Internet.
The end result will hopefully be widely available Internet performance data,
much like what Merit produced during the operation of the NSFNET.
The NetSCARF code is easy to install, runs automatically, and consists of
four separate programs. Every fifteen minutes the collection program
queries all network nodes in parallel. (This is especially important for
large networks, where the skew from polling nodes one at a time can make
it hard to correlate data.) Nightly these raw statistics get pre-processed
(cooked). The cooked data is delivered to CGI scripts using what we think
is the first public domain implementation of the OpStats (RFC 1856)
client/server model. Finally, ISP performance reports (based on the cooked
data) are displayed on the web via the CGI scripts. We plan to make source
and pre-compiled executables available for each of these.
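To give a feel for the parallel fan-out, here is a rough sketch of the
idea in Python. This is purely illustrative and is not the NetSCARF
collection program; the node names, community string, and OIDs are
placeholders, and it shells out to the stock snmpget command-line tool
rather than speaking SNMP directly.

    # Illustrative only -- not the NetSCARF collection program.
    # Poll every node at once so all samples land in the same
    # fifteen-minute window (no serial skew).
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    NODES = ["node1.example.net", "node2.example.net"]   # placeholder list
    OIDS = ["1.3.6.1.2.1.1.3.0",        # sysUpTime.0
            "1.3.6.1.2.1.2.2.1.11.1"]   # ifInUcastPkts.1

    def poll(node):
        # One SNMPv1 GET per node via the snmpget command-line tool.
        out = subprocess.run(["snmpget", "-v1", "-c", "public", node] + OIDS,
                             capture_output=True, text=True, timeout=30)
        return node, out.stdout

    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for node, answer in pool.map(poll, NODES):
            print(node, answer.strip())
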
SNMP Version 1 is used to query the network nodes, although the code
includes support for the User Security (USEC) model for SNMPv2. (The
Routing Arbiter project will use the DES encryption facility, but this
capability will be disabled for the public release due to export
restrictions.)
During the operation of the NSFNET we found there were really only three
graphs that were widely seen as useful. The first cut of the code includes
only these three most popular graphs: System UpTime, Interface UpTime, and
the McD's chart (total packets served by the network).
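For the curious, here is a toy sketch of one way to turn raw interface
packet counters into a per-interval "packets served" total. Again, this is
illustrative only and not the actual cooking code; the sample values are
made up and 32-bit counter rollover is handled naively.

    # Illustrative only -- not the NetSCARF cooking code.
    WRAP = 2**32   # SNMPv1 Counter objects are 32 bits wide and roll over

    def delta(old, new):
        # Allow for at most one counter rollover between samples.
        return new - old if new >= old else new - old + WRAP

    # {(node, ifIndex): (ifInUcastPkts, ifOutUcastPkts)} from two polls
    sample_1 = {("node1", 1): (123456, 98765)}
    sample_2 = {("node1", 1): (133456, 99765)}

    total = sum(delta(old, new)
                for key in sample_1
                for old, new in zip(sample_1[key], sample_2[key]))
    print("packets served this interval:", total)
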
Alpha Testers?