Here is something that has been troubling my mind lately.
The basic question is: how do I turn an in and an out into
an aggregate? Here is a scenario: I measure usage on a
customer's T1 at my router port. Every five minutes I sample
the port and compute the average bps, both in and out, for the
last five minutes. Now I want a single number that
represents the aggregate usage across that port, but I cannot
simply add the two ins and outs (as I am doing now) because
they were both averages over a period of time. Doing this
produces some strange effects which can show nice trends but
also lie about the real usage. Perhaps I simply lack the
math experience to do this correctly, but I can't see a good
way to do it.
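One way to sidestep the averaging question, if you have access to the raw interface octet counters (e.g. SNMP ifInOctets/ifOutOctets), is to sum the byte deltas for both directions over the same interval and convert the total to bps. A minimal sketch, assuming 32-bit counters sampled every five minutes (the function name and parameters are mine, for illustration):

```python
def aggregate_bps(in_prev, in_now, out_prev, out_now, interval_s=300):
    """Aggregate bps across a port from two successive counter samples."""
    WRAP = 2**32  # ifInOctets/ifOutOctets are 32-bit and wrap around
    d_in = (in_now - in_prev) % WRAP    # octets received in the interval
    d_out = (out_now - out_prev) % WRAP  # octets sent in the interval
    return (d_in + d_out) * 8 / interval_s  # octets -> bits, per second
```

If the in and out samples cover exactly the same window, this is arithmetically the same as adding the two per-direction averages, so the strange effects may come from samples that don't line up in time, or from counter wraps between polls, rather than from the addition itself.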
If you're doing this for accounting (i.e. money collection) purposes, I see
cause for customers to be very alarmed. I've seen a lot of attacks lately
that were ping floods or other such traffic, frequently originating from
RFC 1918 addresses. I know that if my upstream charged by the packet, I'd be
all over them to deduct from my bill EVERY packet I'm not interested in.
That could cause providers a lot of trouble.
Even charging for packets a site transmits is potentially at risk in such
cases, if the systems send back ICMP unreachables or other rejection packets.