Would you please share your thoughts on the following matter?
About five years ago we pulled the trigger and started phasing Cisco and Juniper switching products out of our data centers (the reasons for that aren't really relevant to the topic). We selected Dell switches in part because Dell uses "quick rails" (sometimes known as speed rails or toolless rails), where both the switch-side rail and the rack-side rail simply snap in. No screwdriver required, and no need for hands no bigger than a hamster's paw to hold those stupid proprietary screws (looking at you, Cisco) while attaching the rails.
We went from taking 16 hours to build a row of compute (from just the network equipment racking point of view) down to maybe 1 hour. We estimated that, on average, it took us 30 minutes to rack a switch, starting from cutting open the box, with the Juniper switches, versus 5 minutes with the Dell switches.
An interesting tidbit is that we actually used to manufacture custom rails for our Juniper EX4500 switches so the switch could be inserted from the back of the rack (you know, where most of your server ports are...) and not be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails didn't work for us at all unless we used wider racks, which, in turn, reduced floor capacity.
As far as I know, Dell is the only switch vendor doing toolless rails so it’s a bit of a hardware lock-in from that point of view.
So ultimately my question to you all is how much do you care about the speed of racking and unracking equipment and do you tell your suppliers that you care? How much does the time it takes to install or replace a switch impact you?
I was having a conversation with a vendor and was pushing hard on the fact that their switches will actually end up costing me more in the long term, simply because my switch replacement time at least quadruples, which in turn requires me to staff more remote hands. Am I overthinking this and artificially limiting myself by excluding vendors who don't ship with toolless rails (which is all of them now except Dell)?
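For anyone who wants to sanity-check the kind of math I'm doing here, below is a rough back-of-the-envelope sketch in Python. The per-switch times are the ones I quoted above; the switches-per-row count and the remote-hands rate are hypothetical placeholders, not our real numbers.

# Rough racking-labor sketch; row size and rate are hypothetical placeholders.
TOOLLESS_MIN_PER_SWITCH = 5      # our estimate with Dell quick rails
SCREW_RAIL_MIN_PER_SWITCH = 30   # our estimate with the old screw-in rails

SWITCHES_PER_ROW = 32            # hypothetical row of compute
REMOTE_HANDS_RATE = 120.0        # USD per hour, hypothetical

def row_racking(minutes_per_switch):
    hours = minutes_per_switch * SWITCHES_PER_ROW / 60.0
    return hours, hours * REMOTE_HANDS_RATE

for label, mins in (("toolless", TOOLLESS_MIN_PER_SWITCH),
                    ("screw-in", SCREW_RAIL_MIN_PER_SWITCH)):
    hours, cost = row_racking(mins)
    print(f"{label:9s}: {hours:4.1f} h per row, about ${cost:,.0f} in remote hands")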
We don’t care. We rack up switches maybe once or twice a year. It’s just not worth the effort to streamline. If we were installing dozens of switches a month, maybe. But personally I think it’s crazy to make rackability your primary reason for choosing a switch vendor. Do you base your automobile purchase decision on how easy it is to replace windshield wipers?
So ultimately my question to you all is how much do you care about the speed of racking and unracking equipment and do you tell your suppliers that you care? How much does the time it takes to install or replace a switch impact you?
I was having a conversation with a vendor and was pushing hard on the fact that their switches will actually end up costing me more in the long term, simply because my switch replacement time at least quadruples, which in turn requires me to staff more remote hands. Am I overthinking this and artificially limiting myself by excluding vendors who don't ship with toolless rails (which is all of them now except Dell)?
My 2¢ opinion / drive by comment while in the break room to get coffee and a doughnut is:
Why are you letting -- what I think is -- a relatively small portion of the time spent interacting with a device influence the choice of the device?
In the grand scheme of things, where will you spend more time interacting with the device: racking & unracking it, or administering it throughout its life cycle? I would focus on the larger portion of those times.
Sure, automation is getting a lot better. But I bet that your network administrators will spend more than an hour interacting with the device over the multiple years that it's in service. As such, I'd give the network administrators more input than the installers doing the racking & unracking. If nothing else, break it down proportionally based on the time and/or the business expense (wages) for each group.
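If it helps, here is a toy Python sketch of what I mean by proportional weighting; all of the hours and wages are hypothetical placeholders.

# Toy proportional weighting of stakeholder input; all numbers are hypothetical.
hours = {"installers (rack & unrack)": 1, "network admins (lifecycle)": 40}
wages = {"installers (rack & unrack)": 120, "network admins (lifecycle)": 80}  # USD/hr

expense = {role: hours[role] * wages[role] for role in hours}
total = sum(expense.values())
for role, cost in expense.items():
    print(f"{role}: ${cost} -> {100 * cost / total:.0f}% of the say in vendor choice")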
Thanks for your time in advance!
The coffee is done brewing and I have a doughnut, so I'll take my leave now.
As far as I know, Dell is the only switch vendor doing toolless rails
Having fought for hours trying to get servers with those
rails into some DCs' racks, I'd go with slightly slower but fits
everywhere.
*So ultimately my question to you all is how much do you care
about the speed of racking and unracking equipment
I don't care, as long as it fits in the rack properly; the time
taken to do that is small compared to the time it'll be there (many
years for us). I use an electric screwdriver if I need to do many. I
care more about what is inside the box than the box itself, since I'll
have to deal with their software for years.
Very little. I don't even consider it when comparing hardware. It's a nice-to-have but not a factor in purchasing.
You mention a 25-minute difference between racking a no-tools rail kit and one that requires a screwdriver. At any reasonable hourly rate for someone to rack and stack that is a very small percentage of the cost of the hardware. If a device that takes half an hour to rack is $50 cheaper than one that has the same specs and takes five minutes, you're past break-even to go with the cheaper one.
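To put rough numbers on that break-even point (the hourly rate below is just a hypothetical assumption), a quick Python sketch:

# Break-even sketch; the labor rate is a hypothetical assumption.
rate_per_hour = 100.0        # hypothetical rack-and-stack labor rate, USD
minutes_saved = 25           # 30 min screw-in rails vs 5 min no-tools rails
labor_saved = rate_per_hour * minutes_saved / 60.0
print(f"Labor saved per install by no-tools rails: ~${labor_saved:.2f}")
# Roughly $42 of labor saved, so a device that is $50 cheaper but slower
# to rack still comes out ahead on a one-time installation.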
Features, warranty, and performance over the lifetime of the hardware are far more important to me.
If there were a network application similar to a rock band going on tour, where equipment needed to be racked up, knocked down, and re-racked multiple times a week, it would definitely be a factor. Not so much in a data center where you change a switch out maybe once every five years.
And there's always the case where all of that fancy click-together hardware requires square holes and the rack has threaded holes so you've got to modify it anyway.
You mention a 25-minute difference between racking a no-tools rail kit and
one that requires a screwdriver. At any reasonable hourly rate for someone
to rack and stack that is a very small percentage of the cost of the
hardware. If a device that takes half an hour to rack is $50 cheaper than
one that has the same specs and takes five minutes, you're past break-even
to go with the cheaper one.
I can understand the OP if his job is to provide/resell the switch and rack it,
and then someone else (the customer) operates it.
As my fellow netops said, the switches stay installed in the racks for a
long time (5+ years). I'm willing to trade installation ease for
performance/features/stability. When I need to replace one, it is never in a
hurry (and cabling properly takes more time than racking).
So easily installed rails may be a plus, but far behind everything else.
An interesting tidbit is that we actually used to manufacture custom rails for our Juniper EX4500 switches so the switch could be inserted from the back of the rack (you know, where most of your server ports are...) and not be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails didn't work for us at all unless we used wider racks, which, in turn, reduced floor capacity.
Hi Andrey,
If your power cable management horizontally blocks the rack ears,
you're doing it wrong. The vendor could and should be making life
easier but you're still doing it wrong. If you don't want to leave
room for zero-U PDUs, don't use them. And point the outlets towards
the rear of the cabinet not the center so that installation of the
cables doesn't block repair.
So ultimately my question to you all is how much do you care about the speed of racking and unracking equipment and do you tell your suppliers that you care? How much does the time it takes to install or replace a switch impact you?
I care, but it bothers me less than the inconsiderate airflow
implemented in quite a bit of network gear. Side cooling? Pulling air
from the side you know will be facing the hot aisle? Seriously, the
physical build of network equipment is not entirely competent.
The speed rails are nice, and they are effective at reducing the time it takes to rack equipment. They're pretty much par for the course on servers today (thank goodness!), and not so much on network equipment. I suppose the reasons are the ones others have mentioned: longevity of service life, the frequency at which network gear is installed, etc. As well, a typical server-to-switch ratio, depending on the number of switch ports and fault-tolerance configuration, could be something like 38:1 in a dense 1U server install. So taking a few more minutes on the switch installation isn't so impactful; taking a few more minutes on each server installation can really become a problem.
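To put rough numbers on that ratio, here's a quick Python sketch; the extra minutes per unit are hypothetical placeholders.

# Hypothetical dense rack: 38 x 1U servers behind one ToR switch, as above.
servers_per_switch = 38
extra_min_per_server = 5     # hypothetical extra racking time per server
extra_min_per_switch = 25    # e.g. screw-in rails vs speed rails on the switch

print("extra time from servers:", servers_per_switch * extra_min_per_server, "min")
print("extra time from switch: ", extra_min_per_switch, "min")
# 190 min vs 25 min -- the per-server overhead dominates at this ratio.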
A 30-minute time to install a regular 1U ToR switch seems a bit excessive. Maybe the very first time a tech installs any specific model switch with a unique rail configuration. After that one, it should be around 10 minutes for most situations. I am assuming some level of teamwork where there is an installer at the front of the cabinet and another at the rear, and they work in tandem to install cage nuts, install front/rear rails (depending on switch), position the equipment, and affix to the cabinet. I can see the 30 minutes if you have one person, it’s a larger/heavier device (like the EX4500) and the installer is forced to do some kind of crazy balancing act with the switch (not recommended), or has to use a server lift to install it.
Those speed rails are also a bit of a challenge to install if it's not a team effort. So I'm wondering if, in addition to using speed rails, you may have changed from a one-tech installation process to a two-tech team installation process?
30 minutes to pull a switch from the box, stick ears on it, and mount it in the rack seems like a realllllly long time. I think at tops that portion is a 5-10 minute job if I unbox it at my desk. I use a drill with the correct torque setting and a magnetic bit to put the ears on while it boots on my desk, so I can drop a base config on it.
If you are replacing defective switches often enough for this to be an issue, I think you have bigger issues than this to address.
Like others said, most switches are in the rack for the very long haul, often in excess of 5 years. The amount of time required to do the initial install is insignificant in the grand scheme of things.
The time you take to rack devices with classic rails can be viewed as a bonding moment and, while appreciated by the device, will reduce the downtime issues you may have in the long run if you just rack & slap 'em.
I work in higher education; we have hundreds upon hundreds of switches in at least a hundred network closets, as well as multiple datacenters, etc. We do a full lease refresh of the entire environment every 3-5 years. The amount of time it takes me to get a switch out of a box and racked is minimal compared to the amount of time it takes for the thing to power on. (Racking usually takes about 3 minutes, potentially less, depending on my rhythm.) Patching a full 48 ports (correctly) takes longer than racking. Maybe that's because I have far too much practice doing this at this point.
If there’s one time waste in switch install, from my perspective, it’s how long it takes the things to boot up. When I’m installing the switch it’s a minor inconvenience. When something reboots (or when something needs to be reloaded to fix a bug – glares at the Catalyst switches in my life) in the middle of the day, it’s 7-10 minutes of outage for connected operational hosts, which is… a much bigger pain.
So long story short, install time is a near-zero care in my world.
That being said, especially when I deal with 2-post rack gear, the amount of sag over time I'm expected to be OK with in any given racking solution DOES somewhat matter to me (glares again at the Catalyst switches in my life). Would I like good, solid, well-manufactured ears and/or rails that don't change for no reason between equipment revisions? Heck yes.
Hmm, I haven't had any of those on any of my Dell switches, but then
again, I haven't bought any in a while.
You mention hardware lock-in, but I wouldn't trust Dell not to switch
out the design on their "next-gen" product when they buy from a
different OEM, as they are wont to do, changing from OEM to OEM for
each new product line. At least that has been their behavior over the
many years I've been buying Dell switches for simple things.
Perhaps they've changed their tune.
For me, it really doesn't take all that much time to mount cage nuts
and screw a switch into a rack. It's all pretty second nature to me: look
at the holes to see the pattern, snap in all my cage nuts at once, and
go. If you're talking rows of racks to build, it should be second nature, right?
Also, I hate 0U power for that very reason: there's never room to
move devices in and out of the rack if you do rear-mount networking.
You mention hardware lock-in, but I wouldn't trust Dell not to switch
out the design on their "next-gen" product when they buy from a
different OEM, as they are wont to do, changing from OEM to OEM for
each new product line. At least that has been their behavior over the
many years I've been buying Dell switches for simple things.
Perhaps they've changed their tune.
That sounds very much like their 2000s-era behaviour when they were
sourcing 5324s from Accton, etc. Dell has more recently acquired
switch companies such as Force10, and it seems like they have been
doing more in-house stuff this last decade. There has been somewhat
better stability in the product line IMHO.
For me, it really doesn't take all that much time to mount cage nuts
and screw a switch into a rack. It's all pretty second nature to me: look
at the holes to see the pattern, snap in all my cage nuts at once, and
go. If you're talking rows of racks to build, it should be second nature, right?
The quick rails on some of their new gear are quite nice, but the best
part of having rails is having the support on the back end.
Also, I hate 0U power for that very reason: there's never room to
move devices in and out of the rack if you do rear-mount networking.
An interesting tidbit is that we actually used to manufacture custom rails for our Juniper EX4500 switches so the switch could be inserted from the back of the rack (you know, where most of your server ports are...) and not be blocked by the zero-U PDUs and all the cabling in the rack. Stock rails didn't work for us at all unless we used wider racks, which, in turn, reduced floor capacity.
Inserting switches into the back of the rack, where it's nice and hot, usually suggests having reverse-airflow hardware. Usually not stock.
Also, since it's then sucking in hot air (from the midpoint of the cab or so), it is still hotter than having it up front, or leaving the U open in front.
On the other hand, most switches are quite fine running much hotter than servers with their hard drives and overclocked CPUs. Or perhaps that's why you keep changing them...
Personally I prefer pre-wiring front-to-back with patch panels in the back. Works for fiber and copper RJ, not so much all-in-one cables.
I care, but it bothers me less than the inconsiderate airflow
implemented in quite a bit of network gear. Side cooling? Pulling air
from the side you know will be facing the hot aisle? Seriously, the
physical build of network equipment is not entirely competent.
Which - why do I have to order different part numbers for back to front
airflow? It's just a fan, can't it be made reversible? Seems like that
would be cheaper than stocking alternate part numbers.
Which - why do I have to order different part numbers for back to front airflow? It's just a fan, can't it be made reversible? Seems like that would be cheaper than stocking alternate part numbers.
The fan is inside the power supply right next to the high-voltage capacitors. You shouldn't be near that without proper training.
The last rack switch I bought had no fan integrated into the power
supply. Instead, a blower module elsewhere forced air past the various
components, including the power supply. Efficient power supplies (which
you really should be using in 24/7 data centers) don't even generate
all that much heat.
* cma@cmadams.net (Chris Adams) [Sat 25 Sep 2021, 00:17 CEST]:
>Which - why do I have to order different part numbers for back to
>front airflow? It's just a fan, can't it be made reversible?
>Seems like that would be cheaper than stocking alternate part
>numbers.
The fan is inside the power supply right next to the high-voltage
capacitors. You shouldn't be near that without proper training.
I wasn't talking about opening up the case, although lots of fans are
themselves hot-swappable, so it should be possible to do without opening
anything. They are just DC motors though, so it seems like a fan could
be built to reverse (although maybe the blade characteristics don't work
as well in the opposite direction).