Data Center Wiring Standards

Heya folks,

I hope this is on-topic. I read the charter, and it falls somewhere along
the fuzzy border I think...

Can anyone tell me the standard way to deal with patch panels, racks, and
switches in a data center used for colocation? I've a sneaking suspicion
that we're doing it in a fairly non-scalable way. (I am not responsible
for the current method, and I think I'm glad to say that.) Strangely
enough, I can find like NO resources on this. I've spent the better part
of two hours looking.

Right now, we have a rack filled with nothing but patch panels. We have
some switches in another rack, and colocation customers scattered around
other racks. When a new customer comes in, we run a long wire from their
computer(s) and/or other device(s) to the patch panel. Then, from the
appropriate block connectors on the back of the panel, we run another wire
that terminates in a RJ-45 to plug into the switch.

Sounds bonkers, doesn't it?

My thoughts go like this: We put a patch panel in each rack. Each of
these patch panels is permanently (more or less) wired to a patch panel in
our main patch cabinet. So, essentially what you've got is a main patch
cabinet with a patch panel that corresponds to a patch panel in each other
cabinet. Making connection is cinchy and only requires 3-6 foot
off-the-shelf cables.

Does that sound more correct?

I talked to someone else in the office here, and they believe that they've
seen it done with a switch in each cabinet, although they couldn't
remember if there was a patch panel as well. If you're running 802.1q
trunks between a bunch of switches (no patch-panels needed), I can see
that working too, I suppose.

Any standards? Best practices? Suggestions? Resources, in the form of
books, web pages, RFCs, or white papers?

Thanks!

Rick Kunkel

Hello Rick,

Does that sound more correct?

I talked to someone else in the office here, and they believe that they've
seen it done with a switch in each cabinet, although they couldn't
remember if there was a patch panel as well. If you're running 802.1q
trunks between a bunch of switches (no patch-panels needed), I can see
that working too, I suppose.

That's the best solution, I think.
Fewer cables, less precabling work.
Build 1 GigE or 2x1 GigE fiber uplinks.
Perhaps put wheels on the racks (then you can play Google).

You have to check what switches you use in the racks and in the core.
The number of VLANs supported is the main thing to look at.
Perhaps some HP ProCurve switches in the racks and some real core switches in the backbone.

When you reach 4096 VLANs, you could use VLAN-in-VLAN or MPLS to grow further.
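
For a rough sense of those numbers: the 802.1Q VLAN ID is a 12-bit field, which
is where the 4096 ceiling comes from, and stacking a second tag (802.1ad
"Q-in-Q", i.e. VLAN-in-VLAN) multiplies the space rather than adding to it. A
quick back-of-the-envelope sketch in Python, purely to illustrate the arithmetic:

    # 802.1Q carries a 12-bit VLAN ID, so the ceiling is 2**12 IDs
    # (IDs 0 and 4095 are reserved, so the usable count is slightly lower).
    VLAN_ID_BITS = 12
    single_tag_ids = 2 ** VLAN_ID_BITS                 # 4096

    # Q-in-Q stacks an outer tag on top of the inner tag, so the space multiplies.
    double_tag_ids = single_tag_ids * single_tag_ids   # 16,777,216 outer/inner pairs

    print(f"Single 802.1Q tag: {single_tag_ids} VLAN IDs")
    print(f"Q-in-Q (two tags): {double_tag_ids} outer/inner combinations")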

Kind regards,
   Ingo Flaschberger

[ Disclaimer - my experience is as someone who has setup lots of racks,
dealt with a number of colocation facilities and cabling contractors.
However, I haven't ever run a colo. ]

Can anyone tell me the standard way to deal with patch panels, racks,
and switches in a data center used for colocation?

Right now, we have a rack filled with nothing but patch panels. We
have some switches in another rack, and colocation customers scattered
around other racks. When a new customer comes in, we run a long wire
from their computer(s) and/or other device(s) to the patch panel.
Then, from the appropriate block connectors on the back of the panel,
we run another wire that terminates in a RJ-45 to plug into the
switch.

This way of doing things *can* be done neatly in some cases - it really
depends on how you have things set up, your size, and what your
customers' needs are.

For large carrier neutral places like Equinix, Switch and Data, etc.,
where each customer usually has a small number of links coming into
their cage, and things are pretty non-standard (i.e., customers have
stuff other than a few ethernet cables going to their equipment), that's
pretty much what they do - run a long cable through overhead cable
trough or fiber tray, and terminate it in a patch panel in the
customer's rack.

My thoughts go like this: We put a patch panel in each rack. Each of
these patch panels is permanently (more or less) wired to a patch
panel in our main patch cabinet. So, essentially what you've got is a
main patch cabinet with a patch panel that corresponds to a patch
panel in each other cabinet. Making connection is cinchy and only
requires 3-6 foot off-the-shelf cables.

This is a better way to do it IF your customers have pretty standard
needs. One facility I've worked at has 6 cables bundled together (not a
25-pair cable, but similar - 6 Cat5 or Cat6 cables bundled within some sort
of jacket), going into a patch panel. 25-pair or bundled cabling will
make things neater, but usually costs more.

Obviously, be SUPER anal retentive about labelling, testing, running
cables, etc., or it's not worth doing at all. Come up with a scheme for
labelling (in our office, it's "a.b.c", where a is the rack number, b is
the rack position, and c is the port number) and stick to it. Get a
labeller designed for cables if you don't already have one (a Brady,
industrial P-Touch, Panduit, or something similar). Make sure there is a
standard way for everything, and document / enforce the standard.
Someone has to be the cable n**i (does that count as a Godwin?) or
things will get messy fast.
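
To make the "pick one scheme and stick to it" point concrete, here is a minimal
sketch in Python (hypothetical helper names, just illustrating the
rack.position.port convention described above) for generating and parsing such
labels, which also makes it easy to keep a port database or spreadsheet in sync
with what's printed on the cables:

    from typing import NamedTuple

    class CableLabel(NamedTuple):
        """One cable end, labelled rack.position.port as described above."""
        rack: int      # rack number
        position: int  # position (rack unit) within the rack
        port: int      # port number on the panel or switch

        def __str__(self) -> str:
            return f"{self.rack}.{self.position}.{self.port}"

    def parse_label(text: str) -> CableLabel:
        """Parse an 'a.b.c' label back into its parts."""
        rack, position, port = (int(part) for part in text.split("."))
        return CableLabel(rack, position, port)

    # e.g. the cable end landing on rack 12, position 40, port 3:
    label = CableLabel(rack=12, position=40, port=3)
    assert str(label) == "12.40.3"
    assert parse_label("12.40.3") == label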

If you're doing a standard setup to each rack, hire someone to do it for
you if you can afford it. It will be expensive, but probably worth it
unless you're really good (and fast) at terminating cable.

Either way, use (in the customer's rack) one of the patch panels that's
modular, so you can put a different kind of connector in each slot. That
gives you more flexibility later.

In terms of whether patch panels / switches should be mixed in the same
rack, opinions differ. It's of course difficult to deal with terminating
patch panels when there are also big fat switches in the same rack.

I've usually done a mix anyway, but for your application, it might be
better to alternate, running the connections sideways.

Invest in lots of cable management, the bigger, the better. I assume you
already have cable management on these racks?

I like the Panduit horizontal ones, and either the Panduit vertical
ones, or the CPI "MCS" ones. If you're doing a new buildout, or can
start a new set of racks, put extra space between them and do 10" wide
cable management sections (or bigger).

I can give you some suggestions in terms of vendors and cabling outfits,
though most of the people I know of are in the Southern California area.

I talked to someone else in the office here, and they believe that
they've seen it done with a switch in each cabinet, although they
couldn't remember if there was a patch panel as well.

Ok, so if most of your customers have a full rack or half rack, I would
suggest not putting a switch in each rack. In that case, you should
charge them a port fee for each uplink, which should encourage them to
use their own networking equipment.

Now if most of your customers are using < 1/2 rack, and aren't setting
up their own network equipment, and you're managing everything for them,
then you might want to put one 48-port or two 24-port switches in each
individual rack, with two uplinks from some central aggregation switches
to each.

I really don't think you want more than 4-6 cables going to any one
rack.
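
For a rough sense of why (made-up figures, not anything from your actual
setup), compare home-running every customer port to the central patch rack
against putting a switch in the rack and trunking it back:

    # Back-of-the-envelope comparison with assumed numbers: how many cables
    # leave one customer rack under each approach?
    ports_in_rack = 20                    # assumed customer ports in the rack

    # Approach 1: home-run every port to the central patch/switch rack.
    home_run_cables = ports_in_rack       # 20 long runs leaving the rack

    # Approach 2: a 24/48-port switch in the rack, trunked to aggregation.
    uplinks = 2                           # redundant uplinks to aggregation
    switch_in_rack_cables = uplinks       # 2 runs leaving the rack

    print(f"Home runs:      {home_run_cables} cables leaving the rack")
    print(f"Switch in rack: {switch_in_rack_cables} uplink cables leaving the rack")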

Maybe you can clarify your typical customer setup?

Any standards? Best practices? Suggestions? Resources, in the
form of books, web pages, RFCs, or white papers?

I think the best thing is just to look around as much as possible, and
then see what works (and doesn't work) for you. I think some of the
manufacturers of cable, cable management equipment and stuff may publish
some standards / guidelines as well.

w

My thoughts go like this: We put a patch panel in each rack. Each of
these patch panels is permanently (more or less) wired to a patch panel in
our main patch cabinet. So, essentially what you've got is a main patch
cabinet with a patch panel that corresponds to a patch panel in each other
cabinet. Making connection is cinchy and only requires 3-6 foot
off-the-shelf cables.

Does that sound more correct?

I talked to someone else in the office here, and they believe that they've
seen it done with a switch in each cabinet, although they couldn't
remember if there was a patch panel as well. If you're running 802.1q
trunks between a bunch of switches (no patch-panels needed), I can see
that working too, I suppose.

Any standards? Best practices? Suggestions? Resources, in the form of
books, web pages, RFCs, or white papers?

There's a series of ISO standards for data cabling, but nothing is yet set in stone around datacentres. I think the issue of standards in datacentres was touched on here some time back?

Ok, a quick google later,

TIA-942 (Telecommunications Infrastructure Standard for Data Centres) covers a lot of the details. It's pretty new and I don't know if it's fully ratified yet.

I quote...

--8<--
Based on existing cabling standards, TIA-942 covers cabling distances, pathways and labeling requirements, but also touches upon site selection, demarcation points, building security and electrical considerations. As the first standard to specifically address data centres, TIA-942 is a valuable tool for the proper design, installation and management of data centre cabling.

The standard provides specifications for pathways, spaces and cabling media, recognizing copper cabling, multi-mode and single-mode fiber, and 75-ohm coaxial cable. However, much of TIA-942 deals with facility specifications. For each space within a data centre, the standard defines equipment planning and placement based on a hierarchical star topology for backbone and horizontal cabling. The standard also includes specifications for arranging equipment and racks in an alternating pattern to create "hot" and "cold" aisles, which helps airflow and cooling efficiency.

To assist in the design of a new data centre and to evaluate the reliability of an existing data centre, TIA-942 incorporates a tier classification, with each tier outlining guidelines for equipment, power, cooling and redundant components. These guidelines are then tied to expectations for the data centre to maintain service without interruption.

--8<--

The source url for the above was http://www.networkcablingmag.ca/index.php?option=com_content&task=view&id=432&Itemid=2. You may like to see if you can track down a copy of the referenced standard.

You have a couple of options depending on your switching infrastructure and required cabling density - and bandwidth requirements. One way would be to have a decent switch at the top of each cabinet along with a Fibre tie to your core patch / switching cabinet. All devices in that rack feed into the local switch, which could be VLAN'd as required to cater for iLO or any other IP management requirements. The uplink would be a trunk of 1000SX, 1000LX, MultiLink Trunk combinations of the same, or perhaps even 10Gig Fibre.

The other option would be to preconfigure each rack with a couple of rack units of fixed copper or fibre ties to a core cabinet and just patch things around as you need to. Useful if you are in a situation where bringing as much as possible directly into your core switch is appropriate, and cheaper from a network hardware point of view - if not from a structured cabling point of view.

Good luck. I know what a prick it is to inherit someone else's shoddy cable work - I find myself accumulating lots of after-hours overtime, essentially ripping everything out and putting it all back _tidily_ - and hoping that I don't overlook some undocumented 'feature'...

Mark.

Rick,

The organization and standards you are looking for are:

BICSI, and TIA/EIA-568 et al., for structured cabling design and low-voltage distribution. BICSI offers training and certification for the RCDD (Registered Communications Distribution Designer), and there is an article about data center design on their web site. The TIA/EIA-568 series (however many revisions they are up to now) covers structured cabling design for UTP/STP/fiber/coax, including patch cables, single- and multi-pair UTP/STP/fiber patch panels, HVAC control, fire system control, and security systems.

John (ISDN) Lee

Ideally, each core router would connect to two distribution-A switches (Cat 4900 or something similar); each dist-A switch would then connect to two bigger distribution (dist-B) switches (Cat 6500, etc.). Each 6500 then goes to its own patch panels, and from those two patch panels you run cables to access-level switches (2900s, etc.) in each rack / shelf. This way you have full redundancy in each shelf for your co-located / dedicated customers.

My $.02

-Bill Sehmel

Rick Kunkel <kunkel@w-link.net> writes:

Can anyone tell me the standard way to deal with patch panels, racks, and
switches in a data center used for colocation?

Network Cabling Handbook by Chris Clark is a bit dated (5 years old)
but probably should be on your bookshelf anyway, particularly since it
is ridiculously cheap used/new on Amazon (I got my copy a couple of
years ago after a friend tipped me off that they were on sale for
$5.99 on clearance at Micro Center). It's mostly geared to the
enterprise but it does have a chapter on doing communication rooms
which is probably a good starting point. ISBN 0-07-213233-7

Also, no substitute for visiting your competition and taking a survey
of how others, particularly larger datacenters, are doing it. :-)

                                        ---Rob

As many have mentioned here, TIA/EIA-942 is a good starting point. There are a
couple of good data center books out there, too (a visit to your local
Borders or B&N could allow for an interesting afternoon of browsing). I have
personally had positive experiences with some docs and advice from folks
with expertise in cable management and data center infrastructure:

http://www.panduit.com/enabling_technologies/091903.asp

HTH,
Stefan