Slightly OT: Calculating HVAC requirements for server rooms

Hello All,

I am curious what formulas/equations folks use to figure out required
cooling for small datacenters in offices.

The variables I am using are the size of the room, the total amount of power
available for use, and the lighting.

Specifically, I am using the guide posted at:
http://www.openxtra.co.uk/articles/calculating-heat-load

Any other recommendations or suggestions from those folks that have done
this before?

Thank You in advance.

Cheers,
Mike

Mike Lyon wrote:

I am curious what formulas/equations folks use to figure out required
cooling for small datacenters in offices.

You also have to take into account the environment surrounding the data room. At my wife's work, the room is separated from the metal roof above by only a false ceiling, while the rest of the spaces surrounding the room are climate controlled. They had to significantly upsize to account for the heat load through that ceiling.

Specifically, I am using the guide posted at:
http://www.openxtra.co.uk/articles/calculating-heat-load

"Before you decide on an air conditioning unit you should commission an audit from a suitably qualified air conditioning equipment specialist or installer."

Translation: Hire a f***ing professional.

And that's exactly what you need to do. Qualified HVAC installers (with specific data center experience) will know far more than we "network types" will ever want to know about cooling. They do this for a living, and thus know all the tiny details and odd edge cases to look for (like looking above the drop ceiling -- that's what it's called, btw -- and seeing what's up there long before pencil meets paper; not that anyone uses paper anymore).

You also have to take into account the environment surrounding the data room. At my wife's work, the room is separated from the metal roof above by only a false ceiling, while the rest of the spaces surrounding the room are climate controlled. They had to significantly upsize to account for the heat load through that ceiling.

Unless you are pulling air through the plenum (that space above the drop ceiling), the air up there shouldn't matter much -- there should be plenum returns up there to begin with venting the air to the surrounding plenum(s) (i.e. the rest of the office, hallway, neighboring office, etc.) However, I've seen more than enough office setups where the "engineers" planning the space completely ignore the plenum. In my current office building the static pressure pushes the bathroom doors open by almost 2". And they placed our server room directly under the building air handlers -- meaning all the air on the 3rd floor eventually passes through the plenum above my servers. (also, the sprinkler system riser room is in there.)

Bottom line, again, ask a professional. NANOG is a bunch of network geeks (in theory.) I'd be surprised if there's even one licensed HVAC "geek" on the list. ('tho I'm sure many may *know* an HVAC engineer.)

--Ricky

Ricky Beam wrote:

Unless you are pulling air through the plenum (that space above the drop
ceiling), the air up there shouldn't matter much -- there should be
plenum returns up there to begin with venting the air to the surrounding
plenum(s) (i.e. the rest of the office, hallway, neighboring office,
etc.) However, I've seen more than enough office setups where the
"engineers" planning the space completely ignore the plenum. In my
current office building the static pressure pushes the bathroom doors
open by almost 2". And they placed our server room directly under the
building air handlers -- meaning all the air on the 3rd floor eventually
passes through the plenum above my servers. (also, the sprinkler system
riser room is in there.)

The space above the drop ceiling is only a plenum if it's used as air
handling space, as opposed to ducting the returns everywhere. If it's not an
air handling space, it's not a plenum, it's just where spiders might be.
It's easier to throw grated panels in all over the place for returns in
large systems.

Now, back on topic, plus nifty graphics explaining the difference:

http://en.wikipedia.org/wiki/Plenum_cable

Bottom line, again, ask a professional. NANOG is a bunch of network
geeks (in theory.) I'd be surprised if there's even one licensed HVAC
"geek" on the list. ('tho I'm sure many may *know* an HVAC engineer.)

But yes, please, don't learn how to make your own system from what we
say here. HVAC systems are their own world. You wouldn't want an HVAC
guy designing your network just because he's seen a lot of server rooms,
would you?

~Seth

While all the below is true, I would put forward that many of us
networking types, especially those who operate their own datacenters,
generally know how to do an approximation. After all, if you don't have
an idea of magnitude, if you haven't done your homework, your
conversation with that professional will not go well. So it is
appropriate for someone being tasked with researching cooling for a
datacenter to learn how to do these approximations.

My $0.73. (inflation's a bear.)

-Wayne

Even an approximation is hard to make. One might think the simple math of "how much power is fed into the room" would do, but it ignores numerous factors that greatly affect the answer. I can rattle off example after example, but it's unnecessary. You'll need professionals to install the hardware, so there's no point not calling them in for a consult.

(And the elephant in the room no one has mentioned is "air flow". Cooling capacity is only half the equation. Air flow *volume* is just as important.)
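
As a rough illustration of both halves of that equation -- a back-of-the-envelope sketch only, using the standard sea-level sensible-heat approximation for air (BTU/hr ~= 1.08 * CFM * delta-T in degrees F) and an assumed 20 F temperature rise; real numbers depend on altitude, mixing, and equipment delta-T:

    # Back-of-the-envelope airflow check. BTU/hr ~= 1.08 * CFM * delta_T(F) is
    # the usual sea-level sensible-heat approximation for air. The 20 F rise
    # below is an assumption for illustration, not a design value.

    def required_cfm(heat_load_watts, delta_t_f=20.0):
        btu_per_hr = heat_load_watts * 3.412       # electrical load -> heat
        return btu_per_hr / (1.08 * delta_t_f)     # airflow needed to carry it away

    print(round(required_cfm(10_000)))  # ~1580 CFM for a 10 kW load at a 20 F rise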

--Ricky

Ricky,

You shouldn't even *have* a drop ceiling in a modern computer room.
You want the room to be as tall as practical so that the air from the
hot aisles has somewhere to go on its way back to the HVAC, other than
back through and around the cabinets.

Personally, I've found that there's a pretty wide disparity between
HVAC professionals that are capable of hooking up a CRAC and turning
it on versus HVAC professionals that understand the holistic picture
including hot aisles, cold aisles, humidity control and flow. I
wouldn't want to call in a professional without first understanding
the problem well enough to assess whether I was getting a competent
answer.

Regards,
Bill Herrin

William Herrin wrote:

You shouldn't even *have* a drop ceiling in a modern computer room.
You want the room to be as tall as practical so that the air from the
hot aisles has somewhere to go on its way back to the HVAC, other than
back through and around the cabinets.

I love my 30' ceiling. Even with all the things that are wrong with our HVAC setup, the servers survive due to that ceiling.

Personally, I've found that there's a pretty wide disparity between
HVAC professionals that are capable of hooking up a CRAC and turning
it on versus HVAC professionals that understand the holistic picture
including hot aisles, cold aisles, humidity control and flow. I
wouldn't want to call in a professional without first understanding
the problem well enough to assess whether I was getting a competent
answer.

I had this issue. They hooked up the redundant systems, but didn't bother with much else. The return feed is below the units pulling ambient air, and the cold air is injected 15+ feet above the aisle behind the servers, intermixing with the hot air as it rises up the wall.

At least it works, but it could be better and changes will need to be made before I can reach 50% capacity in the racks.

Jack

And when he writes "professional", that might mean someone with more
expertise than your average commercial HVAC shop. I've seen it countless
times where general contractors do a great job on almost every aspect of the
building but fail miserably when it comes to setting aside space for network
equipment and remembering to cool that gear in the IDF rooms. Those
experiences have put me on the offensive when working with such folk -- you
literally have to stake out your ground and remind them of the cooling needs
every step along the way.

Frank

Calculating heat load in a datacenter is pretty easy. That's not the hard part.

Some comments:

I am curious what formulas/equations folks use to figure out required
cooling for small datacenters in offices.

The simplest equation to use assumes that you know how much power is going into the room.

Btu/hr = watts * 3.412

This further assumes that a typical IT load is very inefficient (which it is), meaning that, for every watt that goes to a computer / server / router, essentially all of it is converted to heat (we assume 100% for design purposes).

So, if you have a datacenter consuming 100 kW, you'd need 341,200 BTU/hr of cooling, or about 28 tons of HVAC. Of course, there are other issues (like leakage, windows, doors, humans, lights), but these tend to be a little bit of line noise in a modern datacenter. Also, outside environment (is this Quebec or is this Cuba), insulation, design delivery temperature, humidity requirements -- all play a part.
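
As a minimal sketch of that conversion (assuming, as above, that every watt of IT load ends up as heat, and using the standard 12,000 BTU/hr per ton of refrigeration):

    # Minimal sketch: watts in -> BTU/hr -> tons of cooling.
    # Assumes the entire IT load becomes heat (the 100% design assumption above)
    # and uses the standard 12,000 BTU/hr per ton of refrigeration.

    def cooling_required(it_load_watts):
        btu_per_hr = it_load_watts * 3.412
        tons = btu_per_hr / 12_000.0
        return btu_per_hr, tons

    btu, tons = cooling_required(100_000)            # the 100 kW example above
    print(f"{btu:,.0f} BTU/hr ~= {tons:.0f} tons")   # 341,200 BTU/hr ~= 28 tons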

Translation: Hire a f***ing professional.

And that's exactly what you need to do. Qualified HVAC installers

Two comments on this... first of all, the last thing you want is an HVAC 'installer' to design your HVAC system in a datacenter. Secondly, if you find an HVAC engineer who *really* knows datacenter dynamics, that could be a help. But, frankly, there aren't a lot of them.

If you need some help with this, let me know. There are a significant number of questions that need to be asked to give a qualified answer. The cooling capacity question is secondary to the delivery and extraction method.

I also submit that any good datacenter operator, who has had years of experience of trial and error, years of engineers who say they know something and don't, and had scores of contractors who say they know something and don't, is in a much better position to talk about this than a PE who designs comfort cooling systems.

"Question everything, assume nothing, discuss all, and resolve quickly."

-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --

warning-- it's sunday. pontification alert.

"Ricky Beam" <jfbeam@gmail.com> writes:

... approximation

Even an approximation is hard to make. One might think the simple math of
"how much power is fed into the room" would do, but it ignores numerous
factors that greatly affect the answer. I can rattle off example after
example, but it's unnecessary. You'll need professionals to install the
hardware, so there's no point not calling them in for a consult.

"Always listen to experts. They'll tell you what can't be done and why.
Then do it." --Robert Heinlein

after 1993 i stopped letting HVAC people do design work for me, either at
home or at work. i've had to sign several waiver letters promising not to
sue an HVAC company if the system they built to my spec failed to perform.
astoundingly, not everyone "in the business" knows the difference between
conduction, convection, and radiation. consult, sure, but with suspicion.

same thing for power, structural, security, insurance, finance, network,
hardware, software, and legal people, many of whom have never questioned
their own assumptions nor those of their certification boards, state and
county governments, or teachers/mentors. they don't have to live with the
results ... but i do ... thus my willingness to dive deep.

YMMV.

Below are a few snippets from recent posts, in no particular order, that had
me saying to myself "doesn't anyone remember an interesting alternative I
thought had come up on NANOG a few years ago?" Well maybe it was some other
list, but it is not really worth going back and looking.

It isn't quite true, or totally wise, but you can EASILY ignore HOT and COLD
aisle systems, and you could even have adjacent cabinets in any row blowing
opposite directions and randomly facing across the aisle to a cabinet in the
next row that could be facing EITHER WAY, and you need not care! (don't
really do it, BUT YOU SAFELY COULD)

No raised floors are needed unless you need them for cables, and relatively low
ceilings and ladder racks / cable trays with massive wads of cables that would
normally block required air flow are no problem at all.

And NO, it isn't chilled water waiting to leak and destroy your equipment.

Oh, and at maybe only 30 kW capacity per cabinet (with an extra very REAL 50%
reserve capacity) -- is that enough for you...?? And no problem with random
cabinets or multiple whole rows with no equipment yet or even just turned
off until they pay their bill. Nothing freezes, and nothing roasts. Just
works.

All the pieces I snipped below are about trying to get the heat from the
cabinets to the CRAC and the cold air back with as little mixing as
possible. The more you mix, the more total air you have to circulate and the
lower your efficiency goes.
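
To put a rough number on that last point (same back-of-the-envelope air approximation as earlier, with assumed round figures only): a 100 kW load is about 341,200 BTU/hr. At a 20 F difference between return and supply air, that's roughly 341,200 / (1.08 * 20) ~= 15,800 CFM; if mixing knocks the difference down to 10 F, the same load needs about 31,600 CFM -- twice the air moved for the same heat.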

*** start of snips ***

(And the elephant in the room no one has mentioned is "air flow". Cooling
capacity is only half the equation. Air flow *volume* is just as
important.)

You shouldn't even *have* a drop ceiling in a modern computer room.
You want the room to be as tall as practical so that the air from the
hot aisles has somewhere to go on its way back to the HVAC, other than
back through and around the cabinets.

I love my 30' ceiling. Even with all the things that are wrong with our
HVAC setup, the servers survive due to that ceiling.

... versus HVAC professionals that understand the holistic picture
including hot aisles, cold aisles, humidity control and flow. I
wouldn't want to call in a professional without first understanding
the problem well enough to assess whether I was getting a competent
answer.

The return feed is below the units pulling
ambient air, and the cold air is injected 15+ feet above the aisle behind
the servers, intermixing with the hot air as it rises up the wall.

At least it works, but it could be better and changes will need to be
made before I can reach 50% capacity in the racks.

*** end of snips ***

A few simplifying ground rules. An existing "dumb-ass" grade CRAC system can
be used to control humidity and to do any required fresh air changes, etc.

With cabinets of electronics, we are ONLY talking about a sensible load with
the stuff below.

We DO NOT WANT or need to deal with humidity with the system I'm referring
to. It is designed to JUST remove sensible heat VERY VERY efficiently.
Its refrigerant lines, both supply and return, are almost at room
temperature. No sweating, no dripping, no insulation really needed!

Piping from the mechanical room is designed for low pressure drop for
efficiency, but this refrigerant, R744, does a lot of cooling with very
small pipes.

The regular refrigeration systems we are used to have oil circulating with
the refrigerant and special care is needed to ensure its return to the
compressor. Also, the traditional system needs a thermostatic expansion
valve (TXV) so the evaporator has liquid refrigerant until almost the end of
the coil, but no liquid is returned to damage the compressor. A typical TXV
may have an MOPD (minimum operating pressure differential) of 100 PSI. In some
cool outside weather conditions, you artificially throttle fans or even
bypass some of the condenser coils to keep your head pressure up to keep
that 100 PSI MOPD so you get adequate liquid flow through the TXV and don't
starve the evaporator (which cuts its capacity and can lead to icing that
can progress across the face of some coil designs and then blocks air flow).
But keeping that 100 PSI in mild weather is inefficient, too.

Anyway, with NO oil circulation, and NO compressor in the loop at all and NO
TXV needed, you have something that is very close to a two pipe steam
heating system with a condensate return pump for the boiler. Typical of many
buildings and even some larger homes especially some years ago.

Here, however, the "BOILER" is a finned coil on the back of each of your
cabinets. This coil is fed from a local manifold via ball valves, excess
flow safety shutoff valves, and flexible metal hoses. The finned coil
equipped rear door also has its own fans. It gets raw undiluted hot air
exiting your equipment, passes it over coils loaded with a liquid
refrigerant just below room temperature and AT a system pressure such that any
additional heat added will just boil off some of the liquid, which will be
entrained as bubbles in the slurry. The cabinet air exiting the coil-door's
fans is AT room temperature. You have enough excess liquid R744 flowing that
you could handle a 50% overload beyond rated capacity. This excess liquid
flow costs almost nothing as the piping is all designed for low pressure
drop and it is a low head pressure centrifugal pump that keeps each
evaporator swept with enough liquid for 50% overload and yet there are NO
ISSUES at all with partial or no load cabinets. No super cold spots, etc.
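
For a very rough feel of the numbers (an illustration only: the ~150 kJ/kg latent heat figure is an assumed round value for CO2 near room temperature and should be checked against real property tables before any design work):

    # Illustrative only: approximate pumped liquid R744 (CO2) flow per cabinet.
    # latent_kj_per_kg ~150 is an assumed round figure for CO2 near room
    # temperature; overfeed=1.5 mirrors the 50% excess liquid described above.

    def r744_flow_kg_per_s(cabinet_kw, latent_kj_per_kg=150.0, overfeed=1.5):
        return cabinet_kw * overfeed / latent_kj_per_kg

    print(round(r744_flow_kg_per_s(30.0), 2))   # ~0.3 kg/s for a 30 kW cabinet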

Instead of the "radiator" in your house, the condenser here looks like a
classic shell and tube heat exchanger where the returning refrigerant gas
and liquid slurry is simply dumped into the shell and any entrained liquid
drops to the sump at the bottom or even to a separate "receiver" tank below
if so equipped, and the returning gas simply is condensed on the outside
surfaces of the tubes in the tube bundle.

These tubes themselves can be cooled by chilled water that may exist in many
big buildings, or may be DX coils for any traditional refrigeration system
that comfortably can be modulated to run over a wide range of loads and
always keep the R744 just below room temperature.

There is even a simpler system these folks make that doesn't even have the
R744 pump, but depends on the "shell and tube" condenser and receiver to be
physically far enough ABOVE the cabinets that gravity adequately feeds the
cabinets (those with no heat load will simply have their coils stay full of
liquid) and only gas goes back to the condenser. This is a great way to have
your building's chilled water system do the work for you and yet you can keep
water piping offset to the next room and so OUT of your datacenter (could be
on the floor above if that next room isn't high enough). This is still the
classic 2 pipe steam system but all gravity return with no condensate pump
needed.

So why isn't everyone doing this? TOO DARN EXPENSIVE! It needs competition.
Europeans selling in the USA. "NOT INVENTED HERE"?

And, of course, R744 is simply CO2!

At a little over 80 F, liquid CO2 gets to 1000 PSI. The piping for this needs
to be done by folks a bit better trained than the average refrigeration guy.
Some would be fine, but many need a little training. Thick walled copper
could be used, but I think all this datacenter CO2 stuff is being done in
welded stainless steel and any unions would be bolted flanged connections.

Properly trained datacenter staff can valve off a door assembly and slowly VENT
any remaining CO2, then remove whatever leaking or faulty door assembly needs
to be changed. Normal chassis fans and if needed a floor fan in front of the
cabinet will move enough heat out of the cabinet so it keeps running and the
inlet air for a few adjacent cabinets is now a few degrees hotter but their
exit air is ALL at normal room temperature. The rest of the room isn't
impacted at all. The replacement door gets connected; the refrigerant valve,
cracked at one end, lets in R744 to sweep out any air through a vent you open at
the other end of the coil (unlike regular refrigerants, this is LEGAL!!!);
vent plenty of excess, no problem (you MUST NOT ADD AIR to the big system).
Seal it up, open both supply and return valves fully, and you are back in
business. Your equipment in that rack could have had its "cooling door"
non-functional for weeks if necessary with NO PROBLEM (assuming the others
nearby were all working).

A real-world, high-reliability configuration would have several systems running
in parallel. Perhaps no two adjacent rows on the same system. Simple valving
could allow sections of one normally isolated system to be shared with
another or a built-in spare in an emergency. There are none of the oil-return
issues you get with many parallel compressors. You simply DO need to be sure
each running section has enough liquid CO2 to work, and that could easily be
taken from an overfilled other system or from the LARGE on-site refrigerated,
bulk thermos-like storage tank (typically running just under 300 PSI) that you
probably should have available anyway. It might also do double duty as a CO2 fire
suppression flooding system, but CO2 flooding can kill people if done wrong.

Maybe the onsite soft drink system in the cafeteria already has a bulk
delivered CO2 system that could be tapped in an emergency for some CO2.

Clickable brochures and such on the lower half of the following page will
be of interest:

  http://www.trox.co.uk/aitcs/products/CO2OLrac/index.html

They are in the USA, too.