The userbase has also increased by several orders of magnitude beyond
that.
1% "bad" traffic at 100 users: 1
1% "bad" traffic at 1,000,000 users: 10,000
-JFO
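[To make that scaling point concrete, here is a minimal Python sketch.
The 1% rate and the user counts are the illustrative figures from the
message above, not measurements of any real network; it just shows how
a constant failure rate turns into a rapidly growing absolute count.]

# Illustrative only: a fixed "bad traffic" rate applied to growing
# user bases, using the example figures from this thread.

BAD_RATE = 0.01  # 1% of users/traffic is "bad"

for users in (100, 10_000, 1_000_000, 100_000_000):
    affected = int(users * BAD_RATE)
    print(f"{users:>11,} users at {BAD_RATE:.0%} bad traffic "
          f"-> {affected:>9,} affected")

[The rate never changes, but the absolute count does, which is the
sense in which a big enough problem becomes a different problem.]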
Correct... Measuring reliability in terms of the raw count of failures
around you is not a valid method of measurement. One must measure the
success rate.
Does anyone really believe that they are more likely to encounter a timeout
or connection drop today than 5, 10, 15, or even 20 years ago?
I think not. Generally, when you click on a valid link, you get the page.
Sometimes servers are slow, but rarely do you run into network issues
these days. Sure, they still occur, but they are much less frequent
than they used to be.
Owen
And this raises an absolutely excellent stand-alone point that seems to
me to be something that netops types should be keeping uppermost in
their minds:
When a problem gets big enough, it's a *different* problem,
not just a bigger problem.
An analogy to this (actually, a result of it) is the difference between
the office policies in a 5-person company and a 500-person one, or a
town of 10,000 and a city of 400,000.
This seems to bear on almost everything I've seen said in this thread
(which is the award winner for the year so far...)
Cheers,
-- jra