rack power question

Hopefully this classifies as on-topic...

I am discussing with some investors the possible setup of new datacenter space.

Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.

Right now we are in spreadsheet mode evaluating different scenarios.

Are there cases where more than 6000W per rack would be needed?

(We are not worried about cooling due to the special circumstances of the space.)

Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?

Cordially

Patrick Giagnocavo
patrick@zill.net

Obviously we want to be able to fill a rack, and in order to do so, we
need to provide enough power to each rack.

Which is the hardest part of designing and running a datacenter.

Are there cases where more than 6000W per rack would be needed?

Yes. We are seeing 10kW racks, and requests for 15 to 20kW are starting
to come in. Think blades.

(We are not worried about cooling due to the special circumstances of
the space.)

You've already lost.

Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?

These days, there is not (or should not be) a connection between rack
pricing and what you charge for power.

As for how much, ask HP or IBM or whoever how many blades they can shove
in 42U.

Ooh. Please share. You're the first case I've seen in a *long* time where
getting the BTUs *out* of the rack wasn't more of a challenge than getting
the watts *into* the rack. Once you're talking about "datacenter"-sized
spaces, those BTUs add up, and there are plenty of current spaces where the
limit isn't the power feed into the building, it's the room on the roof for
the chillers....

As you recognize, it's not an engineering question; it's an economic question. Notice how Google's space/power philosophy changed between the time it was
leveraging other people's space/power and now, when it owns its own space/power.

Existing equipment could exceed 20kW in a rack, and some folks are
planning for equipment exceeding 30kW in a rack.

But things get more interesting when you look at the total economics
of a data center. 8kW/rack is the new "average," but that includes
a lot of assumptions. If someone else is paying, I want it and more.
If I'm paying for it, I discover I can get by with less.

Date: Sat, 22 Mar 2008 22:02:49 -0400
From: Patrick Giagnocavo

Hopefully this classifies as on-topic...

I am discussing with some investors the possible setup of new
datacenter space.

You might also try the isp-colo.com list.

Are there cases where more than 6000W per rack would be needed?

It depends how one differentiates between "want" and "need".

(We are not worried about cooling due to the special circumstances
of the space.)

ixp.aq? :wink:

Would someone pay extra for > 7KW in a rack?

They should. If they need more than 6kW, their alternative is to pay
for a second rack, which hardly would be free.

What would be the maximum you could ever see yourself needing in
order to power all 42U ?

1. For colo, think 1U dual-core servers with 3-4 HDD;
2. For routers, Google: juniper t640 kw.

HTH,
Eddy

Hopefully this classifies as on-topic...

I am discussing with some investors the possible setup of new datacenter space.

Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.

Right now we are in spreadsheet mode evaluating different scenarios.

Are there cases where more than 6000W per rack would be needed?

10kW per rack (~400 watts/sq ft) is a common design point being used
by the large-scale colocation/REIT players. It's quite possible to exceed
that with blade servers or high-density storage (Hitachi, EMC, etc.), but it would
take an unusual business model today to exceed it on every rack.

(We are not worried about cooling due to the special circumstances of the space.)

So, even presuming an abundance of cold air right outside the facility,
you are still going to have to move the equipment-generated heat to chillers
or cooling towers. It is quite likely that your HVAC plant could be
your effective limit on the ability to add power drops.

Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?

Again, you can find single-rack, 30"-deep storage arrays/controllers
that will exceed 20kW, but the hope is that you've got a cabinet or
two of less-dense equipment surrounding them. The best thing to do is
find someone in the particular market segment you're aiming for and
ask them for some averages and trends, since it's going to vary widely
depending on whether you're serving web hosting, enterprise data centers, or content behemoths.

/John

This greatly depends on what you want to do with the space. If you're selling co-lo space by the square-footage footprint, then your requirements will be much lower. If you expect a large percentage of it to be leased out to enterprises, then you should expect the customers to use every last U in a cabinet before leasing the next cabinet in the row, i.e., your power usage will be immense.

I did something similar about two years ago. We were moving a customer from one facility to another, and we mapped out each cabinet, including server models. I looked up maximum power consumption for each model, including startup consumption. The heaviest-loaded cabinet specced out at 12,000W, and that cabinet was full of old 1U servers. New 1U servers are the worst-case scenario by far, so 12kW is rather low, IMHO. Some industry analysts estimate that the power requirements for high-density applications scale as high as 40kW.

http://www.servertechblog.com/pages/2007/01/cabinet_level_p.html

There are a few things to remember. Code only permits you to load a circuit to 80% of its maximum rated capacity; the remaining 20% is the safety margin required by the NEC. Knowing this, the 12kW specified above requires 7x 20A 120V circuits or 5x 30A 120V circuits.
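As a quick sanity check (my own sketch, not part of the original message), those circuit counts fall straight out of the 80% rule:

# Rough check of the 80% continuous-load rule and the circuit counts above.
# Illustrative only; the numbers match the 12kW cabinet example.
import math

def circuits_needed(load_watts, volts, breaker_amps, derate=0.80):
    # Load each breaker to no more than `derate` of its rating.
    usable_watts = volts * breaker_amps * derate
    return math.ceil(load_watts / usable_watts)

print(circuits_needed(12000, 120, 20))   # -> 7 (20A 120V circuits)
print(circuits_needed(12000, 120, 30))   # -> 5 (30A 120V circuits)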

You can get 20A and 30A horizontal PDUs for both 120V and 240V, and there are also 208V options. You can also get up to 40A vertical PDUs. One word of caution about the vertical PDUs: if your cabinets aren't deep enough in the rear (think J Lo), the power cabling will get in the way of the rails and other server cabling. There are other options, but they are less common.

Also remember that many of the larger servers (such as the Dell 6850s or 6950s) are 240V and will require a pair of dedicated circuits (20A or 30A).

I would also recommend that you look into in-row power distribution cabinets like the Liebert FDC. This means shorter home-runs for the large number of circuits you'll be putting in (saving you a bundle in copper too). It also means less under-floor wiring to work around, making future changes much easier. Changes in distribution cabinets are also much easier, safer and less prone to accidents/mistakes than they are in distribution panels.

Grounding is a topic that is worthy of its own book. Consult an electrician used to working with data centers. Don't overlook this critical thing. Standby power sources fall into this topic as well. How many 3-phase generators are you going to need to keep your UPSs hot?

I'm curious what your cooling plans are. I would encourage you to consider geothermal cooling though. The efficiencies that geothermal brings to the table are worth your time to investigate.

Best of luck,
  Justin

Patrick Giagnocavo wrote:

What is the purpose of the datacenter: computing, datacom/telco, or both? AC or DC power feeds, or both? Backup power or naked? Dual feed from the power company with a transfer switch, or power with generator backup? Are you dual-feeding the racks? Do you require NEBS-compliant racks to make it through a shake and bake (a seismic event)?

Many centers run 50-volt DC, multi-hundred-amp battery and inverter systems. For AC, higher-voltage three-phase (220V or 440V, 60-cycle) is more efficient. Power delivery can be at 10-12kV, with step-down transformers in the facility. If the building feed is for the entire facility, then you need home runs from the main power panels to your power backup and protection circuits.

When I worked for a certain large fiber-backbone-based provider of circuits and colo, each amount of rack space came with so much DC and AC power, and to get more you would pay extra. All of the colo power had multi-hour battery backup, and a generator would kick in. They had a DC distribution plant with AC inverters.

Your largest issue may be grounding and the ground plan for the building and for your datacenter within it. If the building does not have a good ground plane (and most do not), you may have to retrofit new grounding pads by digging outside the building or through the floor. You need to measure the potential to determine whether there are any ground loops in the building, i.e., you want the ground to be the same for all parts of the building. You need to put power and transient monitoring equipment on your power sources to verify that no power spikes or large EMI are coming into the building. No carpet in the room, or you can have bad ESD blowing out your equipment. Ground the cabinets and have grounding straps for anyone working on the equipment. Check that the power conditioning equipment can handle brownout conditions as well as actual power outages.

In certain areas of the country, lightning protection may have to be enhanced for the building. Without adequate high-voltage and current shunts and filters, your equipment can be wiped out on a regular basis.

You want to locate the datacenter below ground but not in the basement, and in the interior of the building for better lightning and storm protection. You do not want it in a hundred-year flood plain, or you may need to seal it against incoming water. You do not want to locate it near or below building plumbing. You will want non-reversible drains that can drain water out but not back up and flood the facility. You do not want to locate it close to the elevators, building HVAC, or other sources of large EM spikes. You may want to add EMI shielding for the room to reduce EMI either leaving or entering the room.

If the power requirements are large enough, you will need a chiller system to adequately cool the room. This puts water in your datacenter, which is also not a good idea.

During power outages you need to continue to power the HVAC and building controls, or your facility can go down.

You will need to review structural plans for the building to see if the floors can handle the extra load or if the floors need to be reinforced.

For security you want reinforced concrete walls and floors for the room. If the current floors and walls are inadequate, you may need to build a room within the room. If the walls are standard concrete block and steel, you can run reinforcing rods and concrete into the blocks. You want steel doors with magnetic locks that can withstand sledgehammers and people driving into them. Add video surveillance, biometric readers, and other sensors for your security systems.

Before you can do any construction of course you will need to get the appropriate city and county permits and permission from the building owners.

If you engineer the facility correctly, it will take significant investment, amortized over a 5-, 10- or 15-year investment period.

IMHO make sure you really want to do this.

Good luck

John (ISDN) Lee

I am discussing with some investors the possible setup of new datacenter space.

Obviously we want to be able to fill a rack, and in order to do so, we need to provide enough power to each rack.

Right now we are in spreadsheet mode evaluating different scenarios.

Are there cases where more than 6000W per rack would be needed?

Is this just for servers, or could there be network gear in the racks as well? We normally deploy our 6509s with 6000W AC power supplies these days, and I do have some that can draw close to or over 3000W on a continuous basis. A fully populated 6513 with power-hungry blades could eat 6000W.

It's been a while since I've tumbled the numbers, but I think a 42U rack full of 1U servers or blade servers could chew through 6000W and still be hungry. Are you also taking into account a worst-case situation, i.e. everything in the rack powering on at the same time, such as after a power outage?

(We are not worried about cooling due to the special circumstances of the space.)

Would someone pay extra for > 7KW in a rack? What would be the maximum you could ever see yourself needing in order to power all 42U ?

I don't know what you mean by 'extra', but I'd imagine that if someone needs 7KW or more in a rack, then they'd be prepared to pay for the amount of juice they use. This also means deploying a metering/monitoring solution so you can track how much juice your colo customers use and bill them accordingly.

Power consumption, both direct (by the equipment itself) and indirect (cooling required to dissipate the heat generated by said equipment) is a big issue in data center environments these days. Cooling might not be an issue in your setup, but it is a big headache for most large enterprise/data center operators.

jms

Easily. The HP blades I have right now are 14 servers in 10U, drawing 6,000-7,000W. The breaker on it needs to be rated for over 10,000W (30 x 208 x 1.73 for a 30A 3-phase circuit).

With our 1U servers, we're able to get about 12 or so in a rack on a 20A 208V single-phase circuit (my exact budget numbers are behind a VPN I don't feel like firing up... :wink:), plus a pair of switches.

At 370W (peak), I'd need 15,540W to power 42 of them, and 27,972W at the breaker (I prefer 75 to 80% of breakered capacity vs. the NEC's 85%). That works out to something like 4x 30A 208V single-phase circuits and 1x 20A, so 29,120W at the breaker.
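(A quick back-of-envelope sketch of my own, not part of David's message, reproducing the figures above:)

import math

# 30A 208V 3-phase feed for the blade chassis: V x I x sqrt(3)
print(round(208 * 30 * math.sqrt(3)))    # ~10,800W breakered

# 42 x 1U servers at 370W peak
print(42 * 370)                          # 15,540W of peak draw

# 4x 30A plus 1x 20A single-phase 208V circuits
print(4 * 30 * 208 + 20 * 208)           # 29,120W at the breaker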

That's a lot of hot air. :wink:

...david

In a message written on Sat, Mar 22, 2008 at 10:02:49PM -0400, Patrick Giagnocavo wrote:

Are there cases where more than 6000W per rack would be needed?

For a router/switch data points (this is NANOG, after all):

The CRS-1 in 16 slot or fabric chassis configuration takes a full
rack and needs ~11,000W.

6509-E's take dual 6000W power supplies. They are 15U, and I have
seen 3 of them stacked in a 48U cabinet (obviously doesn't work in
a 42U rack). That's 18,000W draw in a single cabinet.

I'm afraid 6000W is on the low end, by today's standards, and some
of the new 1RU multi-system chassis or blade servers can make these
numbers look puny. For instance:

http://www.themis.com/prod/hardware/res-12dcx.htm

Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W
* 42 = 25,200.

What do you expect your customers to bring? How long do you expect
your data center to last? Not that long ago people were building
5000W/rack data centers; often those places today have large empty
spaces, but are at their power and/or cooling limits.

Leo Bicknell wrote:

Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W
* 42 = 25,200.
  

Supermicro has the "1U Twin" which is 980W for two dual-slot machines in 1U form factor;
http://www.supermicro.com/products/system/1U/6015/SYS-6015TW-TB.cfm

If you can accommodate that, it should be pretty safe for anything else.

Pete

There comes a point where you can't physically transfer the energy using air
any more - not unless you want to break the laws of physics, Captain (couldn't
resist, sorry) - to your DX system: gas, then water, then in-rack (expensive)
cooling with water and CO2. Sooner or later we will sink the whole room in oil,
much like they used to do with Crays.

Alternatively we might need to fit the engineers with crampons, climbing
ropes and ice axes to stop them being blown over by the 70 mph winds in your
datacenter as we try to shift the volumes of air necessary to transfer the
energy back to the HVAC for heat-pump exchange to remote chillers on the
roof.

In my humble experience, the problems are 1> Heat, 2> Backup UPS, 3> Backup
Generators, 4> LV/HV Supply to building.

While you will be very constrained by 4 in terms of upgrades unless you spend
a lot of money, the practicalities of 1, 2 & 3 mean that you will
have spent a significant amount of money before getting to the point where you need
to worry about 4.

Given you are not worried about 1, I wonder about the scale of the
application or your comprehension of the problem.

The bigger trick is planning for upgrades of a live site where you need to
increase Air con, UPS and Generators.

Economically, that 10,000W of electricity has to be paid for in addition to
any charge for the rack space - plus margined, credit-risked and cash-flowed.
The relative charge for the electricity consumption has less to do with our
ability to deliver and cool it in a single rack than with the cost of having
four racks in a 2,500W-per-rack datacenter and paying for the same amount of
electricity. Is the racking charge really the significant expense any more?

For the sake of argument: 4 racks at £2,500 p.a. in a 2,500W-per-rack datacenter, or 1
rack at £10,000 p.a. in a 10,000W-per-rack datacenter - which would you rather have?
Is the cost of delivering (and cooling) 10,000W to a rack more or less than
400% of the cost of delivering 2,500W per rack? I submit that it is more
than 400%. What about the hardware - per MIP / CPU horsepower, am I paying
more or less in a conventional 1U pizza-box format or a high-density blade
format? I submit the blades cost more in capex and there is no opex saving.
What is the point of having a high-density server solution if I can only half
fill the rack?

I think the problem is that people (customers) on the whole don't understand the
problem: they can grasp the concept of paying for physical space, but
can't wrap their heads around the more abstract concept of the electricity
consumed by what you put in that space, and of paying for it, to come up with a
TCO for comparisons. So they simply see the entire hosting bill and
conclude they have to stuff as many processors as possible into the rack
space, and if that is a problem, it is one for the colo facility to deliver at
the same price.

I do find myself increasingly feeling that the current market direction is
simply stupid and has had far too much input from sales and marketing people.

Let alone the question of whether the customer's business is efficient in terms of
the amount of CPU compute power required to generate $1 of sales/revenue.

Just because some colo customers have cr*ppy business models - delivering
marginal benefit for very high compute overheads, and unable to pay
for things in a manner that reflects their worth because they are incapable
of extracting the value from them - do we really have to drag the entire
industry down to the lowest common denominator of f*ckwit?

Surely we should be asking exactly what is driving the demand for high-density
computing, in which market sectors, and whether this is actually the best
technical solution to the problem. I don't care if IBM, HP, etc.
want to keep selling new shiny boxes each year because they are telling us
we need them - do we really? ...?

Kind Regards

Ben

Leo Bicknell wrote:
>
> Dual quad-core Xeons in a 1RU form factor. 600W power supply. 600W
> * 42 = 25,200.
>
Supermicro has the "1U Twin" which is 980W for two dual-slot
machines in 1U form factor;
http://www.supermicro.com/products/system/1U/6015/SYS-6015TW-TB.cfm

If you can accommodate that, it should be pretty safe for
anything else.

My desktop has a 680 Watt power supply, but according to a meter I once
connected, it is only running at 350 to 400 Watts. So if a server has a
980W power supply, does the rack power need to be designed to handle
multiples of such a beast, even though the server may not come close
(because it may not be fully loaded with drives or whatever)? Wouldn't it
be better to do actual measurements to see what the real draw might be?

This depends on who's providing the power. If it's your power and your servers, you can "know" that your 980W supplies are really only using 600W, be happy, and plan accordingly if you upgrade later.

If you're providing the power, but it's someone else's gear, you better have good communication when it comes to power requirements/utilization, because what happens when they install more drives/processors next month, and those systems that were using 600W suddenly are using 800W each?

When providing/planning UPS power, if you sell a 120V 20A circuit, do you budget 120V 20A of UPS power for that customer, or 16A (80%), or even slightly more than 20A (figuring worst case, they're going to overload their circuit at some point) when deciding how full that UPS is?
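(A toy illustration of that budgeting trade-off; the 100kW UPS and the 24A "worst case" figure are assumptions of mine, not anything from the thread:)

# How many sold 120V/20A circuits "fit" on a UPS under different budgeting rules.
ups_watts = 100000                    # assumed UPS capacity, purely for illustration
for budgeted_amps in (16, 20, 24):    # 80% of rating, full rating, worst-case overload
    per_circuit_watts = 120 * budgeted_amps
    print(budgeted_amps, "A budgeted ->", ups_watts // per_circuit_watts, "circuits")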

The startup draw can be quite a bit more. Before all those fancy power-saving features kick in, some of the servers we have can draw quite a bit on initial bootup as they spin the fans at 100% and spin up disks, etc.
I also find the efficiency of boards really varies. In our spam-scanning cluster we used some "low end" RS480 boards by ECS (AMD Socket 939) that ran so cool that on the bench you would touch the various heat sinks and wonder if the board was powered up. Compare that to some of our Tyan 939 "server boards," which could blister your finger if you touched the heat sink too long.

         ---Mike

Yes, if you perform the measurements both at peak cpu load
and during power-up (quite a bit of well-known gear maxes
out its power draw only during the power-on sequence).

Also, you're still going to want to size the power drop so that
the measured load won't exceed 80% capacity due to code.

/John

jcurran@mail.com (John Curran) writes:

Also, you're still going to want to size the power drop so that
the measured load won't exceed 80% capacity due to code.

that's true of output breakers, panel busbars, and wire. on the other
hand, transformers (e.g., 480->208 or 12K->480) are rated at 100%, as
are input breakers and of course generators.

Surely we should be asking exactly what is driving the demand for
high-density computing, in which market sectors, and whether
this is actually the best technical solution to the
problem. I don't care if IBM, HP, etc. want to keep
selling new shiny boxes each year because they are telling us
we need them - do we really? ...?

Perhaps not. But until projects like <http://www.lesswatts.org/>
show some major success stories, people will keep demanding
big blade servers.

Given that power and HVAC are such key issues in building
big datacenters, and that fiber to the office is now a reality
virtually everywhere, one wonders why someone doesn't start
building out distributed data centers. Essentially, you put
mini data centers in every office building, possibly by
outsourcing the enterprise data centers. Then, you have a
more tractable power and HVAC problem. You still need to
scale things, but since each data center is roughly comparable
in size, it is a lot easier than trying to build out one
big data center.

If you move all the entreprise services onto virtual servers
then you can free up space for colo/hosting services.

You can even still sell to bulk customers, because few will
complain that they have to deliver equipment to three
data centers, one two blocks west, and another three blocks
north. X racks spread over 3 locations will work for everyone
except people who need the physical proximity for clustering-type
applications.

--Michael Dillon

Ben Butler wrote:

There comes a point where you can't physically transfer the energy using air
any more - not unless you want to break the laws of physics, Captain (couldn't
resist, sorry) - to your DX system: gas, then water, then in-rack (expensive)
cooling with water and CO2. Sooner or later we will sink the whole room in oil,
much like they used to do with Crays.

The problem there is actually the thermal gradient involved. The fact of the matter is that you're using ~15C air to keep equipment cooled to ~30C. Your car is probably in the low 20% range as far as thermal efficiency goes, generates on the order of 200kW, and has an engine compartment enclosing a volume of roughly half a rack... All that waste heat is removed by air; the difference is that it runs at around 250C, with some hot spots approaching 900C.

Increase the width of the thermal gradient and you can pull much more heat out of the rack without moving more air.
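(A rough illustration of that point, assuming a nominal 10kW rack and standard air properties; the numbers are my own, not from the original message:)

# Airflow needed to carry a given heat load drops as the supply/return
# delta-T widens: Q = m_dot * cp * dT. Assumes cp ~1005 J/kg.K and air
# density ~1.2 kg/m^3.
def cfm_for_load(watts, delta_t_c, cp=1005.0, rho=1.2):
    m_dot = watts / (cp * delta_t_c)   # kg/s of air required
    return (m_dot / rho) * 2118.88     # m^3/s -> cubic feet per minute

for dt in (10, 15, 25):
    print(dt, "C delta-T ->", round(cfm_for_load(10000, dt)), "CFM for a 10kW rack")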

Fifteen years ago I would have told you that gallium arsenide would be a lot more common in general-purpose semiconductors for precisely this reason, but silicon has proved superior along a number of other dimensions.