Colocation in the US.

Who's getting more than 10kW per cabinet and metered power from their colo provider?

Can you possibly email me off-list w/ your provider's name... I'm looking for a DR site.

Rob

Robert Sherrard wrote:

> Who's getting more than 10kW per cabinet and metered power from their colo provider?

I had a data center tour on Sunday where they said that the way they provide space is by power requirements. You state your power requirements, they give you enough rack/cabinet space to *properly* house gear that consumes that much power. If your gear is particularly compact, then you will end up with more space than strictly necessary.

It's a good way of looking at the problem, since the flipside of power consumption is the cooling problem. Too many servers packed in a small space (rack or cabinet) becomes a big cooling problem.
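
As a rough sketch of that allocation rule (the 200 W/SF design density below is purely illustrative, not any particular provider's figure):

# Toy version of "space is sold by power": the facility hands you enough
# floor area to keep its cooling design density honest, whatever your gear's size.
# The 200 W/SF design density is an illustrative assumption, not a quoted figure.

def allocated_sf(load_kw: float, design_w_per_sf: float = 200.0) -> float:
    """Square feet of floor the facility must allot for a given IT load."""
    return load_kw * 1000.0 / design_w_per_sf

print(allocated_sf(10))   # a single 10 kW rack still "costs" 50 SF of floor
print(allocated_sf(40))   # a 40 kW deployment needs 200 SF, however compact the gear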

jc

> I had a data center tour on Sunday where they said that the way they provide space is by power requirements. You state your power requirements, they give you enough rack/cabinet space to *properly* house gear that consumes that much power.

"properly" is open for debate here. It just mean their facility isn't built to handle the power-per-square-foot loads that were being asked about.

It's possible to have a facility built to properly power and cool 10kW+ per rack. Just that most colo facilities aren't built to that level.

> It's a good way of looking at the problem, since the flipside of power consumption is the cooling problem. Too many servers packed in a small space (rack or cabinet) becomes a big cooling problem.

Problem yes, but one that is capable of being engineered around (who'd have ever thought we could get 1000Mb/s through cat5, after all!)

drais@atlasta.net (david raistrick) writes:

> I had a data center tour on Sunday where they said that the way they
> provide space is by power requirements. You state your power
> requirements, they give you enough rack/cabinet space to *properly*
> house gear that consumes that much power.

"properly" is open for debate here. ... It's possible to have a
facility built to properly power and cool 10kW+ per rack. Just that most
colo facilties aren't built to that level.

i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R
you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R
by requiring a lot of aisleway around every set of racks (~200sf per 4R
cage) to get it down to 200W/SF, or you'd compromise on W/R. i suspect
that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
averaged over their facility. (this makes for a nice-sounding W/R number.)
i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
everything in the rack is liquid cooled or unless the forced air is
bottom->top and the cabinet is completely enclosed and the doors are never
opened while the power is on.
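
a quick sanity check on those figures (all numbers are the ones quoted above; nothing here is measured, and the helper is just for illustration):

# Back-of-the-envelope check of the W/SF figures above.

def watts_per_sf(kw_per_rack: float, sf_per_rack: float) -> float:
    """Heat density the cooling plant has to handle, in watts per square foot."""
    return kw_per_rack * 1000.0 / sf_per_rack

print(watts_per_sf(10, 30))   # ~333 W/SF: 10 kW/rack on a ~30 SF/rack footprint
print(watts_per_sf(10, 50))   # 200 W/SF: same racks averaged over ~50 SF each
print(4 * 10 * 1000 / 200)    # 200 W/SF: four 10 kW racks spread over a ~200 SF cage
print(watts_per_sf(6, 30))    # 200 W/SF: or compromise on W/R at ~6 kW/rack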

you can pay over here, or you can pay over there, but TANSTAAFL. for my
own purposes, this means averaging ~6kW/R with some hotter and some
colder, and cooling at ~200W/SF (which is ~30SF/R). the thing that's
burning me right now is that for every watt i deliver, i've got to burn a
watt in the mechanical to cool it all. i still want the rackmount
server/router/switch industry to move to liquid which is about 70% more
efficient (in the mechanical) than air as a cooling medium.
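
the trade-off above, sketched with the same figures (the watt-for-watt mechanical overhead and the ~70% liquid efficiency are the claims in this post, not measurements):

# Rough model of "for every watt i deliver, i've got to burn a watt in the
# mechanical to cool it all", using only the figures quoted above.

it_kw_per_rack = 6.0                  # ~6 kW/rack average
print(it_kw_per_rack * 1000 / 200)    # 30 SF/rack needed to stay at ~200 W/SF

air_overhead = 1.0                    # ~1 W of mechanical per delivered W (as stated)
total_air = it_kw_per_rack * (1 + air_overhead)
print(total_air)                      # 12.0 kW drawn per rack, air-cooled

# If liquid is ~70% more efficient in the mechanical (the post's figure),
# the same heat takes roughly 1/1.7 of the mechanical power.
total_liquid = it_kw_per_rack * (1 + air_overhead / 1.7)
print(round(total_liquid, 1))         # ~9.5 kW per rack under that assumption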

> It's a good way of looking at the problem, since the flipside of power
> consumption is the cooling problem. Too many servers packed in a small
> space (rack or cabinet) becomes a big cooling problem.

> Problem yes, but one that is capable of being engineered around (who'd
> have ever thought we could get 1000Mb/s through cat5, after all!)

i think we're going to see a more Feynman-like circuit design where we're
not dumping electrons every time we change states, and before that we'll
see a standardized gozinta/gozoutta liquid cooling hookup for rackmount
equipment, and before that we're already seeing Intel and AMD in a
watts-per-computron race. all of that would happen before we'd air-cool
more than 200W/SF in the average datacenter, unless Eneco's chip works out
in which case all bets are off in a whole lotta ways.

Paul brings up a good point. How long before we call a colo provider
to provision a rack, power, bandwidth and a to/from connection in each
rack to their water cooler on the roof?

-Mike

I think the better questions are: when will customers be willing to pay for it? and how much? :-)

tv

Vendor S? :-)

tv

Speaking as the operator of at least one datacenter that was originally built to water-cool mainframes... Water is not hard to deal with, but it has its own discipline, especially when you are dealing with lots of it (flow rates, algicide, etc.). And there aren't lots of great manifolds to allow customer (joe end-user) serviceable connections (like, how many folks do you want screwing with DC power supplies/feeds without some serious insurance?).
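
To put numbers on the "flow rates" part, here's a rough sketch of how much water has to move per rack (the 10 kW load and 10 C temperature rise are illustrative assumptions, not anyone's design point):

# Rough water flow needed to carry away a rack's heat: Q = P / (rho * cp * dT).
# The 10 kW load and 10 C temperature rise are illustrative assumptions.

RHO_WATER = 997.0      # kg/m^3
CP_WATER = 4186.0      # J/(kg*K)

def flow_lpm(heat_watts: float, delta_t_k: float) -> float:
    """Litres per minute of water to absorb heat_watts at a delta_t_k rise."""
    kg_per_s = heat_watts / (CP_WATER * delta_t_k)
    return kg_per_s / RHO_WATER * 1000.0 * 60.0

print(round(flow_lpm(10_000, 10), 1))   # ~14.4 L/min for a 10 kW rack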

Once some standardization comes to this, and valves are built to detect leaks, etc... things will be good.

DJ

Mike Lyon wrote:

In the long run, I think this is going to solve a lot of problems, as cooling the equipment with a water medium is more effective than trying to pull the heat off of everything with air. But standardization is going to take a bit.
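
For a sense of why water wins as the medium, the textbook comparison of volumetric heat capacities (standard room-temperature values, nothing specific to any product):

# Why a water medium beats air: volumetric heat capacity, J per cubic metre per kelvin.

water = 997.0 * 4186.0    # density (kg/m^3) * specific heat (J/(kg*K))
air = 1.2 * 1005.0

print(round(water / air))  # ~3460: each unit volume of water carries ~3500x the heat of air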

I think if someone finds a workable non-conductive cooling fluid, that
would probably be the best thing. I fear the first time someone is
working near their power outlets and water starts squirting, flooding
and electrocuting everyone and everything.

-Mike

http://en.wikipedia.org/wiki/Mineral_oil

Brandon Galbraith wrote:

Paul Vixie wrote:

> i'm spec'ing datacenter space at the moment, so this is topical. at 10kW/R
> you'd either cool ~333W/SF at ~30sf/R, or you'd dramatically increase sf/R
> by requiring a lot of aisleway around every set of racks (~200sf per 4R
> cage) to get it down to 200W/SF, or you'd compromise on W/R. i suspect
> that the folks offering 10kW/R are making it up elsewhere, like 50sf/R
> averaged over their facility. (this makes for a nice-sounding W/R number.)
> i know how to cool 200W/SF but i do not know how to cool 333W/SF unless
> everything in the rack is liquid cooled or unless the forced air is
> bottom->top and the cabinet is completely enclosed and the doors are never
> opened while the power is on.

If you have water for the racks:
http://www.knuerr.com/web/en/index_e.html?products/miracel/cooltherm/cooltherm.html~mainFrame
(there are other vendors too, of course)

The CRAY bid for the DARPA contract also has some interesting
cooling solutions as I recall, but that is a longer way out.

How about CO2?

tv

> If you have water for the racks:

we've all gotta have water for the chillers. (compressors pull too much power,
gotta use cooling towers outside.)

> http://www.knuerr.com/web/en/index_e.html?products/miracel/cooltherm/cooltherm.html~mainFrame

i love knuerr's stuff. and with mainframes or blade servers or any other
specialized equipment that has to come all the way down when it's maintained,
it's a fine solution. but if you need a tech to work on the rack for an
hour, because the rack is full of general purpose 1U's, and you can't do it
because you can't leave the door open that long, then internal heat exchangers
are the wrong solution.

knuerr also makes what they call a "CPU cooler" which adds a top-to-bottom
liquid manifold system for cold and return water, and offers connections to
multiple devices in the rack. by collecting the heat directly through paste
and aluminum and liquid, and not depending on moving-air, huge efficiency
gains are possible. and you can dispatch a tech for hours on end without
having to power off anything in the rack except whatever's being serviced.
note that by "CPU" they mean "rackmount server" in nanog terminology. CPU's
are not the only source of heat, by a long shot. knuerr's stuff is expensive
and there's no standard for it so you need knuerr-compatible servers so far.

i envision a stage in the development of 19-inch rack mount stuff, where in
addition to console (serial for me, KVM for everybody else), power, ethernet,
and IPMI or ILO or whatever, there are two new standard connectors on the
back of every server, and we've all got boxes of standard pigtails to connect
them to the rack. one will be cold water, the other will be return water.
note that when i rang this bell at MFN in 2001, there was no standard nor any
hope of a standard. today there's still no standard but there IS hope for one.

> (there are other vendors too, of course)

somehow we've got standards for power, ethernet, serial, and KVM. we need
a standard for cold and return water. then server vendors can use conduction
and direct transfer rather than forced air and convection. between all the
fans in the boxes and all the motors in the chillers and condensers and
compressors, we probably cause 60% of datacenter related carbon for cooling.
with just cooling towers and pumps it ought to be more like 15%. maybe
google will decide that a 50% savings on their power bill (or 50% more
computes per hydroelectric dam) is worth sinking some leverage into this.
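
a quick check that the quoted percentages really do land near a 50% bill reduction (the 60% and 15% figures are this post's assertions, not measurements):

# Sanity check on the 60% -> 15% -> "roughly 50% savings" chain above.

it_watts = 1.0                        # normalise the compute load to 1 W

total_today = it_watts / (1 - 0.60)   # cooling is ~60% of total draw today
total_liquid = it_watts / (1 - 0.15)  # cooling is ~15% with just towers and pumps

print(round(1 - total_liquid / total_today, 2))   # ~0.53, i.e. about the 50% quoted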

> http://www.spraycool.com/technology/index.asp

that's just creepy. safe, i'm sure, but i must be old, because it's creepy.

How long before we rediscover the smokestack? After all, a colo is an
industrial facility. A cellar beneath, a tall stack on top, and let
physics do the rest.
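
Out of curiosity, the free draft such a stack would supply is easy to estimate from buoyancy alone (the height and temperatures below are illustrative guesses, not anyone's design):

# Rough natural-draft ("stack effect") estimate for the smokestack idea.

G = 9.81             # gravity, m/s^2
R_AIR = 287.05       # specific gas constant of dry air, J/(kg*K)
P_ATM = 101_325.0    # sea-level pressure, Pa

def air_density(temp_k: float) -> float:
    """Ideal-gas density of air at atmospheric pressure."""
    return P_ATM / (R_AIR * temp_k)

stack_height_m = 60.0     # roughly a 200-foot stack
t_outside = 293.15        # 20 C ambient
t_exhaust = 313.15        # 40 C hot-aisle exhaust

draft_pa = (air_density(t_outside) - air_density(t_exhaust)) * G * stack_height_m
print(round(draft_pa, 1))  # ~45 Pa of free draft, modest next to typical CRAC fan static pressure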

Anyway, "RJ45 for Water" is a cracking idea. I wouldn't be surprised
if there aren't already standardised pipe connectors in use elsewhere
- perhaps the folks on NAWOG (North American Water Operators Group)
could help? Or alt.plumbers.pipe? But seriously folks, if the plumbers
don't have that, then other people who use a lot of flexible pipework
might. Medical, automotive, or aerospace come to mind.

All I can think of about that link is a voice saying "Genius - or Madman?"

> How long before we rediscover the smokestack? After all, a colo is an
> industrial facility. A cellar beneath, a tall stack on top, and let physics
> do the rest.

odd that you should say that. when building out in a warehouse with 28 foot
ceilings, i've just spec'd raised floor (which i usually hate, but it's safe
if you screw all the tiles down) with horizontal cold air input, and return
air to be taken from the ceiling level. i agree that it would be lovely to
just vent the hot air straight out and pull all new air rather than just
make up air from some kind of ground-level outside source... but then i'd
have to run the dehumidifier on a 100% duty cycle. so it's 20% make up air
like usual. but i agree, use the physics. convected air can gather speed,
and i'd rather pull it down than suck it up. woefully do i recall the times
i've built out under t-bar. hot aisles, cold aisles. gack.

Anyway, "RJ45 for Water" is a cracking idea. I wouldn't be surprised if
there aren't already standardised pipe connectors in use elsewhere - perhaps
the folks on NAWOG (North American Water Operators Group) could help? Or
alt.plumbers.pipe? But seriously folks, if the plumbers don't have that,
then other people who use a lot of flexible pipework might. Medical,
automotive, or aerospace come to mind.

the wonderful thing about standards is, there are so many to choose from.
knuerr didn't invent the fittings they're using, but, i'll betcha they aren't
the same as the fittings used by any of their competitors. not yet anyway.

> All I can think of about that link is a voice saying "Genius - or Madman?"

this thread was off topic until you said that.

Seriously - all those big old mills that got turned into posh
apartments for the CEO's son. Eight floors of data centre and a 200
foot high stack, and usually an undercroft as the cold-source. And
usually loads of conduit everywhere for the cat5 and power. (In the UK
a lot of them are next to a canal, but I doubt greens would let you
get away with dumping hot water.)

Obviously convection is the best way, and I've gotten away with it a few times myself, but the usual answer to your "why not" question is "Fire codes." Convection drives the intensity and spread of fires. Which is what furnace chimneys are for. Thus all the controls on plenum spaces. But when you can get away with it, it's great.

                         -Bill

Please excuse the brevity of this message; I typed it on my pager. I could be more loquacious, but then I'd crash my car.

http://en.wikipedia.org/wiki/Fluorinert

/John