cooling door

page 10 and 11 of <http://www.panduit.com/products/brochures/105309.pdf> says
there's a way to move 20kW of heat away from a rack if your normal CRAC is
moving 10kW (it depends on that basic air flow), permitting six blade servers
in a rack. panduit licensed this tech from IBM a couple of years ago. i am
intrigued by the possible drop in total energy cost per delivered kW, though
in practice most datacenters can't get enough utility and backup power to run
at this density. if "cooling doors" were to take off, we'd see data centers
partitioned off and converted to cubicles.
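A rough sanity check on the heat numbers (a back-of-envelope sketch; the 10 C water temperature rise is my assumption, not from the Panduit brochure):

```python
# Back-of-envelope: chilled-water flow needed to carry 20 kW away
# from a rack, assuming the water warms 10 C passing through the door.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

heat_w = 20_000.0   # heat load to remove, watts
delta_t = 10.0      # assumed water temperature rise, kelvin

flow_kg_s = heat_w / (SPECIFIC_HEAT_WATER * delta_t)
flow_l_min = flow_kg_s * 60.0  # 1 kg of water is ~1 liter

print(f"{flow_kg_s:.2f} kg/s = {flow_l_min:.1f} L/min")
```

roughly half a liter per second, i.e. garden-hose territory, not fire-hose.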

While the chilled water door will provide higher equipment
density per rack, it relies on water piping back to a "Cooling
Distribution Unit" (CDU) which is in the corner sitting by your
CRAC/CRAH units. Whether this is actually more efficient
depends quite a bit on the (omitted) specifications for that
unit... I know it would have to be quite a bit more efficient
before many folks would: 1) introduce another cooling system
(with all the necessary redundancy), and 2) put pressurized
water in the immediate vicinity of any computer equipment.

/John

What could possibly go wrong? :-)
If it leaks, you get the added benefits of conductive and evaporative cooling.

Can someone please, pretty please with sugar on top, explain the point
behind high power density?

Raw real estate is cheap (basically, nearly free). Increasing power
density per sqft will *not* decrease cost, beyond 100W/sqft, the real
estate costs are a tiny portion of total cost. Moving enough air to cool
400 (or, in your case, 2000) watts per square foot is *hard*.

I've started to recently price things as "cost per square amp". (That is,
1A power, conditioned, delivered to the customer rack and cooled). Space
is really irrelevant - to me, as colo provider, whether I have 100A going
into a single rack or 5 racks, is irrelevant. In fact, my *costs*
(including real estate) are likely to be lower when the load is spread
over 5 racks. Similarly, to a customer, all they care about is getting
their gear online, and couldn't care less whether it needs to be in 1 rack or
in 5 racks.
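The "square amp" framing is easy to put numbers on (illustrative sketch; the 208V feed and ~25 sqft of floor per rack position, aisles included, are my assumptions):

```python
# Why per-amp beats per-sqft as a cost unit: the same 100A load
# produces wildly different W/sqft depending on how it's spread out,
# while the cooling plant sees the same total heat either way.
VOLTS = 208.0          # assumed feed voltage
SQFT_PER_RACK = 25.0   # assumed footprint incl. aisle share

amps = 100.0
watts = amps * VOLTS   # 20,800 W of load (and, eventually, heat)

# ~5.9 tons of cooling (20.8 kW / 3.517 kW per ton) regardless of layout.
for racks in (1, 5):
    density = watts / (racks * SQFT_PER_RACK)
    print(f"{racks} rack(s): {density:.0f} W/sqft")
```

one rack lands at ~832 W/sqft; five racks at ~166 W/sqft, which ordinary raised-floor air handling can actually deliver.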

To rephrase vijay, "what is the problem being solved"?

[not speaking as mlc anything]

Can someone please, pretty please with sugar on top, explain the point
behind high power density?

maybe.

Raw real estate is cheap (basically, nearly free).

not in downtown palo alto. now, you could argue that downtown palo alto
is a silly place for an internet exchange. or you could note that conditions
giving rise to high and diverse longhaul and metro fiber density, also give
rise to high real estate costs.

Increasing power density per sqft will *not* decrease cost, beyond
100W/sqft, the real estate costs are a tiny portion of total cost. Moving
enough air to cool 400 (or, in your case, 2000) watts per square foot is
*hard*.

if you do it the old way, which is like you said, moving air, that's always
true. but, i'm not convinced that we're going to keep doing it the old way.

I've started to recently price things as "cost per square amp". (That is,
1A power, conditioned, delivered to the customer rack and cooled). Space
is really irrelevant - to me, as colo provider, whether I have 100A going
into a single rack or 5 racks, is irrelevant. In fact, my *costs*
(including real estate) are likely to be lower when the load is spread
over 5 racks. Similarly, to a customer, all they care about is getting
their gear online, and couldn't care less whether it needs to be in 1 rack or
in 5 racks.

To rephrase vijay, "what is the problem being solved"?

if you find me 300Ksqft along the caltrain fiber corridor in the peninsula
where i can get 10MW of power and have enough land around it for 10MW worth
of genset, and the price per sqft is low enough that i can charge by the
watt and floor space be damned and still come out even or ahead, then please
do send me the address.

Can someone please, pretty please with sugar on top, explain
the point behind high power density?

It allows you to market your operation as a "data center". If
you spread it out to reduce power density, then the logical
conclusion is to use multiple physical locations. At that point
you are no longer centralized.

In any case, a lot of people are now questioning the traditional
data center model from various angles. The time is ripe for a
paradigm change. My theory is that the new paradigm will be centrally
managed, because there is only so much expertise to go around. But
the racks will be physically distributed, in virtually every office
building, because some things need to be close to local users. The
high speed fibre in Metro Area Networks will tie it all together
with the result that for many applications, it won't matter where
the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop
trend will make it much easier to place an application without
worrying about the exact locations of the physical servers.

Back in the old days, small ISPs set up PoPs by finding a closet
in the back room of a local store to set up modem banks. In the 21st
century folks will be looking for corporate data centers with room
for a rack or two of multicore CPUs running XEN, and Opensolaris
SANs running ZFS/raidz providing iSCSI targets to the XEN VMs.

--Michael Dillon

jcurran@mail.com (John Curran) writes:

While the chilled water door will provide higher equipment
density per rack, it relies on water piping back to a "Cooling
Distribution Unit" (CDU) which is in the corner sitting by your
CRAC/CRAH units.

it just has to sit near the chilled water that moves the heat to
the roof. that usually means CRAC-adjacency but other arrangements
are possible.

I know it would have to be quite a bit more efficient before
many folks would: 1) introduce another cooling system
(with all the necessary redundancy), and 2) put pressurized
water in the immediate vicinity of any computer equipment.

the pressure differential between the pipe and atmospheric isn't
that much. nowhere near steam or hydraulic pressures. if it gave
me ~1500W/sqft in a dense urban neighborhood i'd want to learn more.

When one of the many CRAC units decides to fail in an air-cooled
environment, another one starts up and everything is fine. The
nominal worst case leaves the failed CRAC unit as a potential air
pressure leakage source for the raised-floor and/or ductwork, but
that's about it.

Chilled water to the rack implies multiple CDUs with a colorful
hose and valve system within the computer room (effectively a
miniature version of the facility chilled water loop). Trying to
eliminate potential failure modes in that setup will be quite the
adventure, which depending on your availability target may be
a non-issue or a great reason to consider moving to new space.

/John

John Curran wrote:

Chilled water to the rack implies multiple CDUs with a colorful
hose and valve system within the computer room (effectively a
miniature version of the facility chilled water loop). Trying to
eliminate potential failure modes in that setup will be quite the
adventure, which depending on your availability target may be
a non-issue or a great reason to consider moving to new space.

Actually it wouldn't have to be pressurized at all if you located a large tank containing chilled water above and to the side, with a no-kink, straight line to the tank. N+1 chiller units could feed the tank.

Thermo-siphoning would occur (though usually done with a cold line at the bottom and a return, warmed line at the top of the cooling device) as the warm water rises to the chilled tank and more chilled water flows down to the intake.

You would of course have to figure out how to monitor/cut off/contain any leaks. Advantage is that cooling would continue up to the limit of the BTUs stored in the chilled water tank, even in the absence of power.
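A rough sketch of the ride-through such a tank buys (tank size and usable temperature rise are hypothetical numbers, not from Patrick's post):

```python
# How long a chilled-water tank can keep absorbing heat with no
# power to the chillers, via thermosiphon alone.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

tank_kg = 10_000.0   # assumed tank: 10,000 L of water
usable_dt = 8.0      # assumed usable warm-up before cooling fails, K
heat_w = 20_000.0    # rack heat load, watts

stored_j = tank_kg * SPECIFIC_HEAT_WATER * usable_dt
hours = stored_j / heat_w / 3600.0
print(f"{hours:.1f} hours of ride-through")
```

call it four-plus hours for one hot rack, which is comfortably longer than typical genset start-up and transfer times.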

Cordially

Patrick Giagnocavo
patrick@zill.net

Can someone please, pretty please with sugar on top, explain the point
behind high power density?

More equipment in your existing space means more revenue and more profit.

Raw real estate is cheap (basically, nearly free). Increasing power
density per sqft will *not* decrease cost, beyond 100W/sqft, the real
estate costs are a tiny portion of total cost. Moving enough air to cool
400 (or, in your case, 2000) watts per square foot is *hard*.

It depends on where you are located, but I understand what you are saying. However, the space is the cheap part. Installing the electrical power, switchgear, ATS gear, Gensets, UPS units, power distribution, cable/fiber distribution, connectivity to the datacenter, core and distribution routers/switches are all basically stepped incremental costs. If you can leverage the existing floor infrastructure then you maximize the return on your investment.

I've started to recently price things as "cost per square amp". (That is,
1A power, conditioned, delivered to the customer rack and cooled). Space
is really irrelevant - to me, as colo provider, whether I have 100A going
into a single rack or 5 racks, is irrelevant. In fact, my *costs*
(including real estate) are likely to be lower when the load is spread
over 5 racks. Similarly, to a customer, all they care about is getting
their gear online, and couldn't care less whether it needs to be in 1 rack or
in 5 racks.

I don't disagree with what you have written above, but if you can get 100A into all 5 racks (and cool it!), then you have five times the revenue with the same fixed infrastructure costs (with the exception of a bit more power, GenSet, UPS and cooling, but the rest of my costs stay the same.)

To rephrase vijay, "what is the problem being solved"?

For us in our datacenters, the problem being solved is getting as much return out of our investment as possible.

-Robert

Tellurian Networks - Global Hosting Solutions Since 1995
http://www.tellurian.com | 888-TELLURIAN | 973-300-9211
"Well done is better than well said." - Benjamin Franklin

> Can someone please, pretty please with sugar on top, explain the point
> behind high power density?

Customers are being sold blade servers on the basis that "it's much
more efficient to put all your eggs in one basket" without being told
about the power or cooling requirements, or how few datacenters
really want to, or are able to, support customers installing 15
racks of blade servers in one spot with 4x 230V/30A circuits
each. (Yes, I had that request.)
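For scale, that request adds up as follows (the 80% continuous-load derating is the standard NEC-style rule; the per-rack arithmetic is mine):

```python
# Heat load implied by 15 racks, each fed by 4x 230V/30A circuits.
volts, breaker_a = 230.0, 30.0
derate = 0.8  # 80% continuous-load limit on each breaker

per_circuit_w = volts * breaker_a * derate   # 5,520 W usable per circuit
per_rack_w = 4 * per_circuit_w               # ~22 kW per rack
total_kw = 15 * per_rack_w / 1000.0          # ~331 kW in one spot

print(f"{per_rack_w / 1000:.1f} kW/rack, {total_kw:.0f} kW total")
```

a third of a megawatt of heat in fifteen rack positions, which is why the answer is usually "no".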

Customers don't want to pay for the space. They forget that they still
have to pay for the power and that that charge also includes a fee for
the added load on the UPS as well as the AC to get rid of the heat.

While there are advantages to blade servers, a fair number of sales
are to gullable users who don't know what they're getting into, not
those who really know how to get the most out of them. They get sold
on the idea of using blade servers, stick them into S&D, Equinix, and
others and suddenly find out that they can only fit 2 in a rack
because of the per-rack wattage limit and end up having to buy the
space anyway. (Wether it's extra racks or extra sq ft or meters, it's
the same problem.)

Under current rules for most 3rd party datacenters, one of the
principal stated advantages, that of much greater density, is
effectively canceled out.

> Increasing power density per sqft will *not* decrease cost, beyond
> 100W/sqft, the real estate costs are a tiny portion of total cost. Moving
> enough air to cool 400 (or, in your case, 2000) watts per square foot is
> *hard*.

(Remind me to strap myself to the floor to keep from becoming airborne
by the hurricane force winds while I'm working in your datacenter.)

Not convinced of the first point but experience is limited there. For
the second, I think the practical upper bound for my purposes is
probably between 150 and 200 watts per sq foot. (Getting much harder
once you cross the 150 watt mark.) Beyond that, it gets quite
difficult to supply enough cool air to the cabinet to keep the
equipment happy unless you can guarantee a static load and custom
design for that specific load. (And we all know that will never
happen.) And don't even talk to me about enclosed cabinets at that
point.
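The hurricane joke has real numbers behind it. Using the standard sensible-heat rule of thumb (BTU/hr = 1.08 x CFM x dT in F) with an assumed 20 F supply/return split:

```python
# Airflow needed to carry a given heat load, per the sensible-heat
# rule of thumb: BTU/hr = 1.08 * CFM * delta_T(F).
W_TO_BTU_HR = 3.412
delta_t_f = 20.0  # assumed supply/return temperature split

def cfm_needed(watts):
    return watts * W_TO_BTU_HR / (1.08 * delta_t_f)

for kw in (2, 10):
    print(f"{kw} kW rack needs ~{cfm_needed(kw * 1000):.0f} CFM")
```

a 10 kW rack wants on the order of 1,600 CFM of cold supply air, all of it somehow arriving at the intakes and not short-cycling over the top.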

if you do it the old way, which is like you said, moving air, that's always
true. but, i'm not convinced that we're going to keep doing it the old way.

One thing I've learned over the various succession of datacenter /
computer room builds and expansions that I've been involved in is that
if you ask the same engineer about the right way to do cooling in
medium and large scale datacenters (15k sq ft and up), you'll probably
get a different opinion every time you ask the question. There are
several theories of how best to handle this and *none* of them are
right. No one has figured out an ideal solution and I'm not convinced
an ideal solution exists. So we go with what we know works. As people
experiment, what works changes. The problem is that retrofitting is a
bear. (When's the last time you were able to get a $350k PO approved
to update cooling to the datacenter? If you can't show a direct ROI,
the money people don't like you. And on a more practical line, how
many datacenters have you seen where it is physically impossible to
remove the CRAC equipment for replacement without first tearing out
entire rows of racks or even building walls?)

Anyway, my thoughts on the matter.

-Wayne

Perhaps this is apropos:

      Linkname: Slashdot | Iceland Woos Data Centers As Power Costs Soar
           URL: http://hardware.slashdot.org/hardware/08/03/29/2331218.shtml

I have not yet found a way to split the ~10kw power/cooling
demand of a T1600 across 5 racks. Yes, when I want to put
a pair of them into an exchange point, I can lease 10 racks,
put T1600s in two of them, and leave the other 8 empty; but
that hasn't helped either me the customer or the exchange
point provider; they've had to burn more real estate for empty
racks that can never be filled, I'm paying for floor space in my
cage that I'm probably going to end up using for storage rather
than just have it go to waste, and we still have the problem of
two very hot spots that need relatively 'point' cooling solutions.

There are very specific cases where high density power and
cooling cannot simply be spread out over more space; thus,
research into areas like this is still very valuable.

Matt

I have a need for a 1U that will just act as a backup (higher MX) mailserver
and, occasionally, deliver some large .iso images at under 10Mbit/sec ....
:-) And I'm sure that there are other technically savvy users just like me
that could help you out with this "surplus" space! :-)

see http://www.vix.com/personalcolo/ for some places to host that backup MX.
(note, i have no business affiliation with any of the entities listed there.)

Matthew Petach wrote:

Can someone please, pretty please with sugar on top, explain the point
behind high power density?

Raw real estate is cheap (basically, nearly free). Increasing power
density per sqft will *not* decrease cost, beyond 100W/sqft, the real
estate costs are a tiny portion of total cost. Moving enough air to cool
400 (or, in your case, 2000) watts per square foot is *hard*.

I've started to recently price things as "cost per square amp". (That is,
1A power, conditioned, delivered to the customer rack and cooled). Space
is really irrelevant - to me, as colo provider, whether I have 100A going
into a single rack or 5 racks, is irrelevant. In fact, my *costs*
(including real estate) are likely to be lower when the load is spread
over 5 racks. Similarly, to a customer, all they care about is getting
their gear online, and couldn't care less whether it needs to be in 1 rack or
in 5 racks.

To rephrase vijay, "what is the problem being solved"?

I have not yet found a way to split the ~10kw power/cooling
demand of a T1600 across 5 racks. Yes, when I want to put
a pair of them into an exchange point, I can lease 10 racks,
put T1600s in two of them, and leave the other 8 empty; but
that hasn't helped either me the customer or the exchange
point provider; they've had to burn more real estate for empty
racks that can never be filled, I'm paying for floor space in my
cage that I'm probably going to end up using for storage rather
than just have it go to waste, and we still have the problem of
two very hot spots that need relatively 'point' cooling solutions.

There are very specific cases where high density power and
cooling cannot simply be spread out over more space; thus,
research into areas like this is still very valuable.

The problem with "point" heating is often that the hot point is then the *intake* for other equipment. If you spread your two T1600s into 10 racks (i.e. skip 2, drop one, skip 4, drop 1, leaving two at the end) your hot point problem is much less of a concern.

If you bought 10 racks... not in a row, but SURROUNDING (in each of the rows opposite the cabinets)... Say 12 (a = vacant, b,c = T1600)

aaaa
abca
aaaa

You would be doing everyone in your datacenter a service by a) not thinking linearly and b) providing adequate sq ft space to dissipate your heat.
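Deepak's layout in numbers (the ~25 sqft per rack position including aisle share is my assumption):

```python
# Heat density for two 10 kW routers: packed side by side vs.
# spread across a 12-position block of floor (a = vacant).
SQFT_PER_POSITION = 25.0  # assumed, incl. aisle share
heat_w = 2 * 10_000.0     # two T1600s

packed = heat_w / (2 * SQFT_PER_POSITION)    # ~400 W/sqft
spread = heat_w / (12 * SQFT_PER_POSITION)   # ~67 W/sqft
print(f"packed: {packed:.0f} W/sqft, spread: {spread:.0f} W/sqft")
```

spreading the same 20 kW over the block brings the local density back under the ~100 W/sqft that conventional air handling copes with.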

Deepak

Alex's point is that 5x density does not mean the infrastructure costs
less than 5x as much. Past a certain point, the rate of return
drops below 1.

We're so stuck thinking that costs are primarily related to square feet, but
with powering and cooling costs being the primary factors, we may be better
off thinking in terms of costs in relation to amps.

Frank