RE: Forecasts, why won't anyone believe them?

[In the message entitled "RE: Forecasts, why won't anyone believe them?" on Jan 16, 19:49, Roeland Meyer writes:]

The assumption is wrong. A server motherboard and CPU draw the same power
regardless of what they are doing. Laptops and the like are different, and you
actually pay extra for that design. Most circuits these days are NMOS
technology. Only in CMOS does the power go up with the frequency. Peripheral
usage, like disk drives, is also constant, since the largest power draw goes
to keeping them spinning. The seek mechanics are trivial. Floppy drives and
CD-ROM drives are different, but most servers do not keep those spinning
constantly. Ergo, for all intents and purposes, servers are a constant power
draw. They can be rated.

Uh - I think you will find that disk drives actually have between two and
"many" modes in how they consume power.

The largest amount of power is typically drawn on startup (but this depends
on the drive, of course). Power also goes up radically on seeks, because
they are not trivial operations on fast-seek drives.
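To make the multi-mode point concrete, here is a minimal sketch of a drive's average draw as a duty-cycle-weighted sum over its modes. The wattage figures are illustrative assumptions for the sake of the example, not measured values for any real drive:

```python
# Illustrative per-mode power draw for a hypothetical server disk drive.
# These wattages are assumptions for this example, not any drive's specs.
MODE_WATTS = {
    "spinup": 25.0,  # startup surge (brief, but the largest draw)
    "seek":   12.0,  # actuator working hard on fast-seek drives
    "idle":    7.0,  # platters spinning, heads parked
}

def average_watts(duty_cycle):
    """Weighted-average power given the fraction of time in each mode."""
    assert abs(sum(duty_cycle.values()) - 1.0) < 1e-9
    return sum(MODE_WATTS[mode] * frac for mode, frac in duty_cycle.items())

# A drive seeking 20% of the time draws noticeably more than an idle one:
busy  = average_watts({"spinup": 0.0, "seek": 0.2, "idle": 0.8})
quiet = average_watts({"spinup": 0.0, "seek": 0.0, "idle": 1.0})
print(busy, quiet)  # roughly 8.0 vs 7.0 watts
```

The point of the sketch is simply that the average moves with the workload: change the seek fraction and the "constant" draw changes with it.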

Other modes include write-behind and cache prefill, which may draw
additional power while active. Some drives also require more power
during certain rare operations (such as the initial servo write).

Someone else has already pointed out that all modern equipment uses CMOS,
and the externally clocked components have several modes (during refresh, for
example, DRAM draws *much* more power than when idle).
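The CMOS point follows from the standard dynamic-power relation P ≈ α·C·V²·f, which scales linearly with clock frequency. A minimal sketch, using illustrative values (the activity factor, capacitance, and voltage below are assumptions, not any specific chip's figures):

```python
def cmos_dynamic_power(alpha, cap_farads, volts, freq_hz):
    """Dynamic power of switching CMOS logic: P = alpha * C * V^2 * f."""
    return alpha * cap_farads * volts**2 * freq_hz

# Assumed example values: activity factor 0.1, 1 nF effective switched
# capacitance, 3.3 V supply.
p_slow = cmos_dynamic_power(0.1, 1e-9, 3.3, 66e6)   # at 66 MHz
p_fast = cmos_dynamic_power(0.1, 1e-9, 3.3, 200e6)  # at 200 MHz
print(p_fast / p_slow)  # same ratio as the clocks, ~3x
```

The ratio depends only on frequency, which is exactly why power tracks what the clocked parts are doing.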

So, server power most definitely is a function of what the machine is doing
and how hard it is doing it. It's almost never constant (except when a
UNIX server is sitting in the idle loop).

What's more amusing, however, is trying to measure it on the AC side. Since
most power supplies are switch-mode, with poor power factors, standard
off-the-shelf technology will be anywhere from wildly wrong to totally
misleading.
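The power-factor problem can be put in numbers: a naive meter that multiplies RMS volts by RMS amps reports apparent power (VA), which overstates the real power (W) drawn by a supply with a poor power factor. A minimal sketch, with an assumed power factor of 0.6 for an uncorrected switch-mode supply (the voltage and current values are also illustrative assumptions):

```python
def apparent_power_va(v_rms, i_rms):
    """What a naive RMS volt-amp meter reports."""
    return v_rms * i_rms

def real_power_w(v_rms, i_rms, power_factor):
    """Actual power delivered, accounting for the power factor."""
    return v_rms * i_rms * power_factor

# Assumed example values: 120 V RMS, 2 A RMS, power factor 0.6.
va    = apparent_power_va(120.0, 2.0)     # 240 VA on the meter
watts = real_power_w(120.0, 2.0, 0.6)     # 144 W actually consumed
print(va, watts)
```

With these numbers the naive reading is two thirds higher than the real draw, which is the "wildly wrong" case in practice.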