# Energy Efficiency - Data Centers

I was reasoning from the analogy that an incandescent bulb is less efficient than an LED bulb because it generates more heat - more of the electricity goes into the infrared spectrum than into the useful visible spectrum. Similar to the way an electric motor is more efficient than a combustion engine.

The energy is overwhelmingly disposed of as heat; even the useful work ends up as heat eventually. The amount of energy leaving a DC in fiber cables, etc. is perhaps a millionth of one percent.

Even in your lightbulb example, if the light is used inside a room, it gets turned back into heat once it hits the walls.

So in a closed system, it’s all heat.

Now, power is lost before it can be used for compute/routing, mostly in power conversions, of which there are many in most DCs. Companies like Facebook and Amazon have done a lot of work to remove excess power conversion steps, chasing better PUE (Power Usage Effectiveness) and getting more electricity to the computers before losing it as excess heat in voltage conversions. There's still room for improvement here, and the power wasted this way goes directly to heat without doing any other useful work.
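
For reference, PUE is just the ratio of total facility power to the power that actually reaches the IT equipment. A minimal sketch of that arithmetic (the figures are made-up illustrations, not measurements from any real facility):

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; real facilities sit above that."""
    return total_facility_watts / it_equipment_watts

# Hypothetical example: 1.5 MW at the utility meter, of which 1.2 MW
# reaches the servers and network gear; the rest goes to cooling,
# UPS/transformer conversion losses, lighting, etc.
print(pue(1_500_000, 1_200_000))  # -> 1.25
```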

Source: I have a C-20 HVAC license and own and operate 2 datacenters.

-Ben.

No doubt. Not trying to repeal the second law of thermodynamics.

I visited Boltzmann's grave in Vienna and this equation was on it: S = k log W. Would not want to disturb his sleep.

Still, you should not look at how much heat you get, but at how much
utility you get - which, for a lighting source, would be measured in
lumens within the visible spectrum.

If you put 300 watts of electricity into a computer server, you
will get somewhere between 290 and 299 watts of heat from the server
itself. The second largest power output will be the kinetic energy
of the air the fans in the server push; I'm guesstimating that to
be somewhere between 1 and 10 watts (hence my uncertainty about the
direct heat output above). Then you get maybe 0.1 watts of sound
energy (noise) and other vibrations in the rack. And finally, less
than 0.01 watts of light in the network fibers from the server
(assuming dual 40G or dual 100G network connections, i.e. 8 lasers).
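
To make that budget explicit, here is the same breakdown as a short Python tally, using the upper ends of the guesses above; the numbers are the guesstimates restated, not measurements:

```python
input_power_w = 300.0

# Upper-end guesses for the non-heat outputs of the server.
non_heat_outputs_w = {
    "kinetic energy of moved air": 10.0,   # top of the 1-10 W range
    "sound and other vibrations":   0.1,
    "light into network fibers":    0.01,  # dual 40G/100G optics
}

non_heat_w = sum(non_heat_outputs_w.values())
direct_heat_w = input_power_w - non_heat_w

print(f"direct heat:     {direct_heat_w:.2f} W")             # ~289.89 W
print(f"non-heat, total: {non_heat_w:.2f} W")                # ~10.11 W
print(f"non-heat share:  {non_heat_w / input_power_w:.1%}")  # ~3.4%
```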

Every microwatt of electricity put into the server to toggle bits,
keep bits at their current value, or transport bits within and
between the CPU, RAM, motherboard, disks, and so on, will turn into
heat *before* leaving the server. The only exception is the light
put into the network fibers, and that will be less than 10 milliwatts
for a server.

All inefficiencies in power supplies, power regulators, fans, and
other components in the server will become heat, within the server.

So your estimate of 60% heat, i.e. 40% *non*-heat, is off by at
least a factor of ten. And the majority of the kinetic energy of
the air pushed by the server will have turned into heat after just
a few meters...

So, if you look at how much heat is given off by a server compared
to how much power is put into it, then it is 99.99% inefficient.

But that's just the wrong way to look at it.

In a lighting source, you can measure the amount of visible light
given off in watts. In an engine (electrical, combustion, or
otherwise), you can measure the mechanical power output in watts.
So in those cases, efficiency can be measured in percent, as the
input and the output are measured in the same units (watts).

But often a light source is better measured in lumens, not watts.
Sometimes the torque, measured in newton-meters, is more relevant
for an engine. Or the thrust, measured in newtons, for a rocket
engine. Then, dividing the output (lm, Nm, N) by the input (W) does
not give a percentage.
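
As a concrete illustration of such a ratio, luminous efficacy in lumens per watt (rough, typical figures for illustration, not measurements of specific bulbs):

```python
def efficacy_lm_per_w(lumens: float, watts: float) -> float:
    """Luminous efficacy: visible-light output per electrical watt in."""
    return lumens / watts

# Roughly typical figures for ~800 lumen bulbs:
print(efficacy_lm_per_w(800, 60))  # incandescent: ~13 lm/W
print(efficacy_lm_per_w(800, 9))   # LED:          ~89 lm/W
```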

Similarly, the relevant output of a computer is not measured in
watts, but in FLOPS, database transactions/second, or web pages
served per hour.
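
The corresponding figure of merit for a server would be something like performance per watt; a sketch with placeholder numbers only:

```python
def flops_per_watt(flops_sustained: float, watts_at_wall: float) -> float:
    """Useful computational throughput per watt of electrical input."""
    return flops_sustained / watts_at_wall

# Hypothetical server: 10 TFLOPS sustained at 300 W at the wall.
print(f"{flops_per_watt(10e12, 300):.2e} FLOPS/W")  # ~3.33e+10
```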

Basically, the only time the amount of heat given off by a computer
is relevant is when you are designing and dimensioning the cooling
system. And then the answer is always "exactly as much as the power
you put *into* the computer".
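
That sizing arithmetic can be written down directly; a minimal sketch using standard sea-level air properties (the 10 kW rack and 12 K temperature rise are hypothetical):

```python
AIR_DENSITY = 1.2         # kg/m^3, sea-level air
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

def heat_load_btu_per_hr(it_load_w: float) -> float:
    """Cooling load equals the IT load; 1 W = 3.412 BTU/hr."""
    return it_load_w * 3.412

def airflow_m3_per_s(it_load_w: float, delta_t_k: float) -> float:
    """Airflow needed to carry the heat away at a given air temperature rise."""
    return it_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

print(f"{heat_load_btu_per_hr(10_000):.0f} BTU/hr")        # ~34120 BTU/hr
print(f"{airflow_m3_per_s(10_000, 12):.2f} m^3/s of air")  # ~0.69 m^3/s
```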

/Bellman