Raspberry Pi - high density

So I just crunched the numbers. How many pies could I cram in a rack?

Check my numbers?

48U rack budget
6513 is 15U, so 48 - 15 = 33U remaining for pie
6513 max of 576 copper ports

Pi dimensions:

3.37" long (5 front to back)
2.21" wide (6 wide)
0.83" high
25 per U (rounding down for Ethernet cable space, etc.) x 33U = 825 Pis
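
A quick sanity check on that density math, as a few lines of Python (just the figures above; nothing about power or cabling):

# Rack-density sketch using the figures above.
usable_u = 48 - 15            # 48U rack minus 15U for the 6513
per_u_raw = 5 * 6             # 5 front-to-back x 6 wide per U
per_u = 25                    # rounded down for Ethernet cable space etc.

print(f"raw per U: {per_u_raw}, assumed per U: {per_u}")
print(f"total Pis: {per_u * usable_u}")   # 25 x 33 = 825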

Cable management and heat would probably kill this before it ever reached completion, but lol...

The problem is, I can get more processing power and RAM out of two 10RU blade chassis while needing only 64 10G ports...

32 blades x 256GB RAM each = 8,192GB (~8.2TB)
32 x 16 cores x 2.4GHz = 1,228GHz
(not based on current highest possible, just using reasonable specs)

You'd need only 4 QFX5100s, which would cost less than a populated 6513 and give lower latency. Power and cooling would be lower too.

RPi = 900MHz and 1GB RAM. So to equal the two chassis, you'll need:

1,228 / 0.9 = 1,364 Pis for compute (the main performance aspect of a supercomputer), meaning double the physical space required compared to the chassis option.
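
The same comparison as a rough Python sketch (summed clock speed is obviously a crude proxy, and the ~24U figure assumes 1U per QFX5100):

# Crude aggregate-clock comparison from the numbers above.
blades = 32
blade_ghz = int(blades * 16 * 2.4)   # 32 blades x 16 cores x 2.4GHz, truncated as above
blade_ram_gb = blades * 256          # 32 x 256GB

pis_needed = blade_ghz / 0.9         # 900MHz per Pi

print(f"blade aggregate: {blade_ghz}GHz, {blade_ram_gb}GB RAM")
print(f"Pis to match:    {pis_needed:.0f}")
print(f"rack space at 25 Pis/U: {pis_needed / 25:.0f}U vs ~24U for the chassis + switches")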

So yes, infeasible indeed.

Regards,

Tim Raphael

From the work that I've done in the past with clusters, your need for
bandwidth is usually not the biggest issue. When you work with "big data",
let's say 500 million data points, most mathematicians would condense it
all down into averages, standard deviations, probabilities, etc., which are
then much smaller to save on your hard disks and to perform data analysis
with, as well as to transfer from master to nodes and vice-versa. So for
one project at a time, your biggest concern is CPU clock, RAM, interrupts,
etc. If you want to run all of the Big 10's academic projects on one big
cluster, for example, then networking might become an issue solely due to
volume.
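
As a toy illustration of that condensation step, a small Python sketch (scaled down from 500 million points so it runs quickly):

# Condense raw samples into summary stats before storing or shipping them around.
import random, statistics

raw = [random.gauss(0, 1) for _ in range(500_000)]   # stand-in for the 500M points

summary = {
    "n": len(raw),
    "mean": statistics.fmean(raw),
    "stdev": statistics.stdev(raw),
}
# A handful of numbers instead of the full dataset: cheap to store on disk
# and cheap to move between master and nodes.
print(summary)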

The more data you transfer, the longer it would take to perform any
meaningful analysis on it, so really your bottleneck is TFLOPS rather than
packets per second. With Facebook it's the opposite, it's mostly pictures
and videos of cats coming in and out of the server with lots of reads and
writes on their storage. In that case, switching tbps of traffic is how
they make money.

A good example is creating a Docker container with your application and
deploying a cluster with CoreOS. You save all that capex and spend by the
hour. I believe Azure and EC2 already have support for CoreOS.

For another list I just estimated how many M.2 SSD modules one could
cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
heat and connection routing aren't problems), at 500GB/each that's
20TB in a standard 3.5" case.
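
A rough packing sketch of that estimate in Python; the 3.5" case and M.2 2280 dimensions and the 2.5mm stacking pitch are my assumptions, and it ignores connectors and airflow:

# Count M.2 2280 cards lying flat and stacked inside a 3.5" drive envelope.
# Assumed dimensions in mm: 3.5" case ~146 x 101.6 x 26.1, M.2 2280 card 80 x 22.
case_l, case_w, case_h = 146.0, 101.6, 26.1
card_l, card_w, pitch = 80.0, 22.0, 2.5   # ~2.5mm vertical pitch per card

across = int(case_w // card_w)   # side by side: 4
deep = int(case_l // card_l)     # front to back: 1 (leaves ~66mm for connectors/routing)
layers = int(case_h // pitch)    # stacked layers: 10

total = across * deep * layers
print(total, "cards ->", total * 0.5, "TB at 500GB each")   # ~40 cards, ~20TB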

It's getting weird out there.

I think the next logical step in servers would be to remove the traditional
hard drive cages and put in SSD module slots that can be hot swapped. Imagine
inserting small SSD modules on the front side of the server and connecting
them directly via PCIe to the motherboard. No more bottlenecks, and a
software RAID of some sort would actually make a lot more sense than the
current controller-based solutions.

> So I just crunched the numbers. How many pies could I cram in a rack?

> For another list I just estimated how many M.2 SSD modules one could
> cram into a 3.5" disk case. Around 40 w/ some room to spare (assuming
> heat and connection routing aren't problems), at 500GB/each that's
> 20TB in a standard 3.5" case.

I could see liquid cooling such a device. Insert the whole thing into oil.
How many PCIe slots are allowed in the standards?

> It's getting weird out there.

Try to project your mind forward another decade with capability/cost like this:

I hope humanity's last act will be to educate the spambots past their current
puerile contemplation of adolescent fantasies and into contemplating Faust.

At least some vendors are already doing that. The Dell R730xd will take up
to 4 PCIe SSDs in regular hard drive bays -
http://www.dell.com/us/business/p/poweredge-r730xd/pd
Nick

This feels like it should be a Friday thread. :)

If you’re really going for density:

- At 0.83 inches high you could go 2x per U (depends on your mounting system and how much space it burns)
- I’d expect you could get at least 7 wide if not 8 with the right micro-USB power connector
- In most datacenter racks I’ve seen you could get at least 8 deep even with cable breathing room

So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get truly creative about how you stack them you could probably beat that without too much effort.
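
The same estimate in a couple of lines of Python, reusing the 33U figure from the original post:

# Per-U density under the tighter packing assumptions above.
usable_u = 33   # 48U minus 15U for the 6513, from the original post
for wide, deep, high in [(7, 8, 2), (8, 8, 2)]:
    per_u = wide * deep * high
    print(f"{wide}x{deep}x{high} = {per_u}/U -> {per_u * usable_u} Pis in {usable_u}U")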

This doesn’t solve for cooling, but I think even at these numbers you could probably make it work with nice, tight cabling.

-c

> Pi dimensions:
>
> 3.37" long (5 front to back)
> 2.21" wide (6 wide)
> 0.83" high
> 25 per U (rounding down for Ethernet cable space, etc.) x 33U = 825 Pis

The Parallella board is about the same size and has interesting
properties all by itself. In addition to Ethernet it also brings out a
lot of pins.

There are also various and sundry quad-core ARM boards in the same form factor.

> Cable management and heat would probably kill this before it ever reached completion, but lol…

> This feels like it should be a Friday thread. :)
>
> If you’re really going for density:
>
> - At 0.83 inches high you could go 2x per U (depends on your mounting system and how much space it burns)
> - I’d expect you could get at least 7 wide if not 8 with the right micro-USB power connector
> - In most datacenter racks I’ve seen you could get at least 8 deep even with cable breathing room
>
> So somewhere between 7x8x2 = 112 and 8x8x2 = 128 per U. And if you get truly creative about how you stack them you could probably beat that without too much effort.
>
> This doesn’t solve for cooling, but I think even at these numbers you could probably make it work with nice, tight cabling.

Dip them all in a vat of oil.

> Pi dimensions:
>
> 3.37" long (5 front to back)
> 2.21" wide (6 wide)
> 0.83" high
> 25 per U (rounding down for Ethernet cable space, etc.) x 33U = 825 Pis

You butt up against major power/heat issues here in a single rack, not
that it's impossible. From what I could find, the rPi2 requires .5A
min. The few SSD specs that I could find required something like .8 -
1.6A. Assuming that part of the .5A is for driving an SSD, 1A/Pi would be
an optimistic requirement. So 825-1600 amps in a single rack. It's
not crazy to throw 120 amps into a rack for higher density, but you would
need room to put a PDU every 2U or so if you were running 30 amp
circuits.

That's before switching infrastructure. You'll also need airflow since
that's not built into the pi. I've seen guys do this with mac minis
and they end up needing to push everything back in the rack 4 inches
to put 3 or 4 fans with 19in blades on the front door to make the
airflow data center ready.

So to start, you'd probably need to take a row out of the front of the
rack for fans and a row out of the back for power.

Cooling isn't really an issue since you can cool anything that you can
blow air on[1]. At 825 rPis @ 1 amp each, you'd get about 3000 BTU/h
(double for the higher power estimate). You'd need 3-6 tons of
available cooling capacity without redundancy.

I don't know how to do the math for the 'vat of oil scenario'. It's
not something I've ever wanted to work with.

In the end, I think you end up putting way too much money
(power/cooling) into the redundant green board around the CPU.

> This feels like it should be a Friday thread. :)

Maybe I'm having a read-only May 10-17.

1. Please don't list the things that can't be cooled by blowing air.

As it turns out, I've been playing around benchmarking things lately using the tried and true
UnixBench suite, and here are a few numbers that might put this in some perspective:

1) My new Raspberry Pi (4 cores, ARM): 406
2) My home i5-like thing (Asus, 4 cores, 16GB, from last year): 3857
3) AWS c4.xlarge (4 cores, ~8GB): 3666

So you'd need to, uh, wedge about 10 Pis to get one halfway modern x86.

Mike

Interesting! Given that a Pi costs approximately $35, you need
approximately $350 to get near an i5. The smallest and cheapest desktop
you can get with similar power is the Intel NUC with an i5, which goes
for approximately $350. Power consumption of a NUC is about 5x that of
the Raspberry Pi, but it needs 10x fewer Ethernet ports.
Usually in a datacenter you care much more about power than switch ports,
so in this case, if the overhead of controlling 10x the number of nodes is
worth it, I'd still consider the Raspberry Pi. Did I miss anything? Just a
quick comparison.

That is .8-1.6A at 5v DC. A far cry from 120V AC. We're talking ~5W versus ~120W each.

Granted there is some conversion overhead, but worst case you are probably talking about 1/20th the power you describe.

-Randy

> Did I miss anything? Just a quick comparison.

If those numbers are accurate, then it leans towards the NUC rather than the Pi, no?

Perf: 1x i5 NUC = 10x Pi
$$: 1x i5 NUC = 10x Pi
Power: 1x i5 NUC = 5x Pi

So...if a single NUC gives you the performance of 10x Pis at the capital cost of 10x Pis but uses half the power of 10x Pis and only a single Ethernet port, how does the Pi win?
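
Putting rough numbers on that in Python, using the figures quoted in this thread:

# NUC vs. Pi, using the rough figures quoted in this thread.
pis_per_nuc = round(3857 / 406)     # UnixBench: ~10 Pis per NUC
pi_price, nuc_price = 35, 350       # USD
nuc_power_vs_one_pi = 5             # "NUC draws about 5x one Pi"

print(f"perf:  {pis_per_nuc} Pis ~= 1 NUC")
print(f"cost:  ${pis_per_nuc * pi_price} of Pis vs ${nuc_price} for a NUC")
print(f"power: {pis_per_nuc} Pis draw ~{pis_per_nuc / nuc_power_vs_one_pi:.0f}x what 1 NUC does")
print(f"ports: {pis_per_nuc} Ethernet ports vs 1")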

Yeah, missed that. You'd probably still need fans for airflow, but
the power would be a non-issue.

It's pretty interesting what you can do with immersion cooling. I work
with it at $DAYJOB. Similar to air cooling, but your coolant flow rates
are much lower than with air, and you don't need any fans in the systems --
the pumps take the place of those.

We save a lot of money on the cooling side, since we don't need to
compress and expand gases/liquids. We can run with warmish (25-30C)
water from cooling towers, and still keep the systems at a target
temperature of 35C.

--Chris

His estimates seem to consider that it's only 5V, though. He has 825 Pis per rack at ~5-10W each, so call it ~8kW on the high end. 8kW is 2.25 tons of refrigeration at first cut, plus any power conversion losses, losses in ducting/chilled water distribution, etc. Calling for at least 3 tons of raw cooling capacity for this rack seems reasonable.

8kW/rack seems like something a typical computing-oriented datacenter would be used to dealing with, no? The form factor within the rack is just a little different, which may complicate how you can deliver the cooling - you might need unusually forceful forced air or a water/oil type heat exchanger for the oil immersion method being discussed elsewhere in the thread.

You still need giant wires and busses to move 800A worth of current. It almost seems like you'd have to rig up some sort of 5VDC bus bar system along the sides of the cabinet and tap into it for each shelf or (probably the approach I'd look at first, instead) give up some space on each shelf or so for point-of-load power conversion (120 or 240VAC to 5VDC using industrial "brick" style supplies or similar) and conventional AC or "high voltage" (in this context, 48 or 380V is "high") DC distribution to each shelf. Getting 800A at 5V to the rack with reasonable losses is going to need humongous wires, too. Looks like NEC calls for something on the order of 800kcmil under rosy circumstances just to move it "safely" (which, at only 5V, is not necessarily "effectively") - yikes that's a big wire.
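
A sketch of that power arithmetic in Python (the 5-10W per-Pi range and the 825-Pi count are the assumptions from above):

# Rack power, cooling and current for the 825-Pi scenario discussed above.
pis = 825
for watts_each in (5.0, 10.0):              # assumed ~5-10W per Pi
    total_w = pis * watts_each
    btu_per_h = total_w * 3.412             # 1W ~= 3.412 BTU/h
    tons = btu_per_h / 12_000               # 1 ton of refrigeration = 12,000 BTU/h
    amps_5v = total_w / 5                   # current if distributed at 5V DC
    amps_208v = total_w / 208               # vs. a conventional 208V AC feed
    print(f"{watts_each:.0f}W/Pi: {total_w/1000:.1f}kW, {tons:.1f} tons, "
          f"{amps_5v:,.0f}A at 5V, {amps_208v:.0f}A at 208V")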

Maybe I messed up the math in my head. My line of thought was that one Pi is
estimated to use 1.2 watts, whereas the NUC is at around 65 watts. 10 Pis
= 12 watts. My comparison was 65 watts / 12 watts = 5.4 times more power than
10 Pis put together. This is really a rough estimate because I got the
NUC's power consumption from the AC/DC converter that comes with it, which
has a maximum output of 65 watts. I could be off by up to 5 times and
still the Pi would use less power.

Now that I think about it, the best way to simplify this is to calculate
benchmark points per watt: the Raspberry Pi is at around 406/1.2, which
equals 338. The NUC is roughly 3857/65, which equals 60. Let's be
very skeptical and say that at maximum consumption the Pi is using 5 watts;
then 406/5 is around 81. At this point the Raspberry Pi still scores better.

The only problem is that we are comparing ARM to x86, which isn't necessarily
fair (I am not an expert in computer architectures).
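
The same points-per-watt comparison in a few lines of Python, using the scores and wattages quoted above:

# Benchmark points per watt, numbers as quoted above.
systems = {
    "Pi @ 1.2W": (406, 1.2),
    "Pi @ 5W":   (406, 5.0),    # skeptical, max-draw case
    "NUC @ 65W": (3857, 65.0),  # adapter rating, so also pessimistic
}
for name, (score, watts) in systems.items():
    print(f"{name}: {score / watts:.0f} points/W")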

Rather than guessing on power consumption, I measured it.

I took a Pi (Model B - but I suspect the B+ and the new version are relatively
similar in power draw with the same peripherals), hooked it up to a lab
power supply, and took a current measurement. My Pi has a Sandisk SD card
and a Sandisk USB stick plugged into it, so, if anything, it will be a bit
high in power draw. I then fired off a tight code loop and a ping -f from
another host towards it, to busy up the processor and the network/USB on
the Pi. I don't have a way of making the video do anything, so if you were
using that, your draw would be higher. I also measured idle usage (sitting at
a command prompt).

Power draw was 2.3W under load, 2.0W at idle.
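
For what it's worth, plugging that measured figure back into the 825-Pi rack from earlier in the thread -- a rough sketch:

# Measured per-Pi draw applied to the earlier 825-Pi rack estimate.
measured_w = 2.3                      # per Pi, under load, as measured above
pis = 825
total_w = pis * measured_w
print(f"{total_w/1000:.1f}kW for the rack, "
      f"~{total_w * 3.412 / 12_000:.2f} tons of cooling before overhead")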

If it was my project, I'd build a backplane board with USB-to-ethernet and
ethernet switch chips, along with sockets for Pi compute modules (or
something similar). I'd want one power cable and one network cable per
backplane board if my requirements allowed it. Stick it all in a nice card
cage and you're done.

As for performance per watt, I'd be surprised if this beat a modern video
processor for the right workload.

Here's someone's comparison between the B and B+ in terms of power:

http://raspi.tv/2014/how-much-less-power-does-the-raspberry-pi-b-use-than-the-old-model-b