Asking for a friend; please contact me off-list.
The ask:
Multi-core server + 32G memory (or 64G)
more than 1T storage space.
At least 4 10GE optical ports.
Linux OS
1 year term
Thanks
Walt
I can't help you, but I'm just awfully curious and must ask: why
specifically optical ports? It seems like a strange and limiting
requirement, with an upside my imagination struggles to find.
Many of the reasons I've heard for folk going optical for servers at 10G vs. copper come down to power, and the ensuing thermal management that follows 10G copper installations vs. SFP+.
Of course, there is always the distance issue, but that will vary from user to user.
Power seems to be the biggest concern. Some folk even prefer DAC over RJ-45 for the same reason.
Others consider space and bulkiness, which is where fibre beats DAC and UTP.
Mark.
Many of the big DCs don't do copper xconns anymore, so if you have a server with optical NICs, you don't need a switch or media-converter.
Among the other reasons folks have given: the 10GBASE-T PHY has added latency beyond the basic packetization/serialization delay inherent to Ethernet, due to its use of a relatively long line code plus LDPC. It's not much (2-4 µs, which is still less than 1000BASE-T serialization + packetization latency with larger packets), but it's more than 10GBASE-R PHYs. The HFT guys may care, but most other folks probably don't give a hoot.
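For anyone who wants to sanity-check the arithmetic, here's a quick back-of-the-envelope sketch in Python; the PHY latency adders are assumed ballpark figures, not measured values:

def wire_time_us(frame_bytes, line_rate_bps):
    # Time to clock a frame onto the wire, in microseconds.
    return frame_bytes * 8 / line_rate_bps * 1e6

FRAME_BYTES = 1500          # a full-size Ethernet payload
PHY_10GBASE_T_US = 2.5      # assumed mid-range 10GBASE-T PHY adder (long line code + LDPC)
PHY_10GBASE_R_US = 0.1      # assumed near-negligible 10GBASE-R PHY adder

print("1000BASE-T: %.1f us" % wire_time_us(FRAME_BYTES, 1e9))                        # ~12.0 us
print("10GBASE-T : %.1f us" % (wire_time_us(FRAME_BYTES, 10e9) + PHY_10GBASE_T_US))  # ~3.7 us
print("10GBASE-R : %.1f us" % (wire_time_us(FRAME_BYTES, 10e9) + PHY_10GBASE_R_US))  # ~1.3 us

Even with the LDPC penalty, a full-size frame still gets out faster on 10GBASE-T than on 1000BASE-T; only against 10GBASE-R does the extra couple of microseconds show up.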
If it's in-rack or in-cage (or even in-contiguous-row racks), most data centres may permit your own copper x-connects.
Mark.
bryan@shout.net (Bryan Holloway) wrote:
Many of the big DCs don't do copper xconns anymore, so if you have a server
with optical NICs, you don't need a switch or media-converter.
Which is really detrimental if you need to OOB connect a server. IPMI ports are
generally copper; I suppose that will change, but it hasn't yet.
El "pissed off by some of those folks, really" mar
Unless others have done it differently, what I used to do was run fibre to whatever the local terminal server’s gateway router was, and use copper within or between (nearby) racks between the terminal server and the end device.
Mark.
mark@tinka.africa (Mark Tinka) wrote:
Unless others have done it differently, what I used to do was run fibre to
whatever the local terminal server's gateway router was, and use copper
within or between (nearby) racks between the terminal server and the end
device.
Oh sure, if you have an entire rack (or more) to cable, most modern switches
will give you some copper ports, or you throw an extra one in.
We mostly drop single boxes...
Elmar.
I think this is the least bad explanation. Some explanations are that
copper may not be available, but that doesn't explain preference. Nor
do I think wattage/heat explains preference: it's hosted, so
customers probably shouldn't care. Latency could very well explain
preference, but it seems doubtful when the hardware is so underspecified;
if you are working to a budget of single microseconds or nanoseconds,
surely the actual hardware becomes very important, so I think the lack of
specificity there implies it's not about latency.
I'd bet the real answer is that someone wants to connect a commodity
server to an IX, pretend to be some network/ASN, and then do some
not-terrific things with that setup.
I've seen this at AMSIX and DECIX ... and I can't say I haven't also
seen it at 1-Wilshire ;(
I don’t think you are going to get a single answer that has an overwhelming majority of support for fibre. Use-cases are very different, and what you will see in the field is some statistically insignificant representation of each of the reasons given.
By and large, I can say the bulk of servers are cabled with copper, which would make fibre niche if you took a global view. On that basis, squabbling over the reason is inconsequential.
Mark.
From a strictly physical cabling point of view, while 10GBASE-T is likely to work on ordinary Cat5e or Cat6 cabling at very short distances, such as from a server to a top-of-rack aggregation switch, more successful results will be seen with Cat6a.
Your typical Cat6a cable is significantly fatter in diameter, less flexible, and takes up much more space inside the vertical cable management running up and down a dense cabinet, compared to an ordinary figure-8-shaped duplex singlemode fiber patch cable. Even more space savings are possible with single-tube/uniboot, 1.6 mm diameter patch cables.
Hi Eric,
All of these are excellent reasons why the DC -operator- should want
to use fiber in 10GE links.
The question was: why does a DC -customer- want 40 gigs of
specifically fiber optic connections in what is otherwise a minimum
server configuration, the sort that easily fits in 1U? The Linux
network stack would struggle to even drive 40 gigs (rough packet-rate
numbers below); you'd be into very custom network software built with
something like DPDK, but the guy hasn't placed any conditions on the
available network infrastructure and connectivity except that it offer
4x 10gig fiber optic Ethernet.
That's weird.
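A rough sketch of that packet-rate math, with the per-core kernel figure as an assumption rather than a benchmark:

PREAMBLE_IFG = 8 + 12   # bytes of preamble + inter-frame gap per frame on the wire

def pps(line_rate_bps, frame_bytes):
    # Theoretical max frames per second at a given frame size, including overhead.
    return line_rate_bps / ((frame_bytes + PREAMBLE_IFG) * 8)

small = pps(40e9, 64)      # ~59.5 Mpps with minimum-size frames
large = pps(40e9, 1518)    # ~3.3 Mpps with full-size frames

KERNEL_PPS_PER_CORE = 1.5e6   # assumed ballpark for a general-purpose kernel path

print("64B frames  : %.1f Mpps (~%.0f cores at the assumed kernel rate)"
      % (small / 1e6, small / KERNEL_PPS_PER_CORE))
print("1518B frames: %.2f Mpps (~%.1f cores at the assumed kernel rate)"
      % (large / 1e6, large / KERNEL_PPS_PER_CORE))

Hitting line rate with small packets is where kernel-bypass tooling like DPDK earns its keep; at full-size frames the stock stack is much closer to coping.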
Regards,
Bill Herrin
"Weird" is not what I would use to describe it. Unusual, perhaps. I mean, vendors are producing optical NIC's for servers.
I'm aware of some deployments that struggled with availability of copper-based switches and NIC's 2020/2021, but SFP28 was available, so they moved to that. Again, a special case.
Copper will continue to dominate the server market for a while yet.
Mark.
This seems very plausible, considering the chosen demo. Thanks.
CoreSite now charges a disconnect fee for all cross-connects in addition to the MRC and connection fee.
If you don’t plan to cross-connect at CoreSite LA1 (One Wilshire), you may consider other nearby facilities. Most facilities are backhauled there anyway.
While it may be a plausible scenario, it is, IMO, highly unlikely (< 0.000000000001%) that this is the case in this situation, given the person that is asking…
Regards,
Christopher Hawker
I completely agree; the original “rfq” is super suspicious. There’s no need to specifically require One Wilshire for a single 1U server (particularly with only 10GbE interfaces, not 100GbE), since the most effective use of being at a major interconnect point like that comes only if you’re prepared to incur the recurring monthly expense of many intra-building cross-connects.
Realistic use by a small ISP that needs a presence there would be more like a minimum of a quarter cabinet in its own compartment.
As have so many others. It could be justified *if* they actually removed the
physical crossconnect.
My last visit to the site was >10 years ago, and by then they had *not* removed
any of the cabling, some of which looked like it was put in in the Seventies.
Has that changed in 1Wil?
Elmar.