I would like to know if anyone has seen one of these. If so, where? Also, if
they don't exist, why? It would seem to me that it would make it a lot
easier to play mix and match with fiber in the DC if they did. Would it be so
hard to make the 1G SFPs faster? (Trying to be funny here, not arrogant.)
The current chipsets don't fit in the power/cooling budget of an SFP+
transceiver envelope.
What I want to see is reasonably priced 40G single mode transceivers.
I have no idea why 40G, and now 100G, weren't rolled out with single mode as the preference. The argument that "there's a large multimode install base" doesn't hold water.
For one thing, you're using enormous amounts of MM fiber to get at best 1/4 of the ports you previously had.
The best case is that you could get 12 ports where you used to have 48, but that's messy.
The second issue is cost: if you're running any distance, you've got to go to OM4, because MM fiber has very limited range at 10G (you're multiplexing 10G links), and OM4 is insanely expensive.
Single Mode on the other hand is 'cheap' in comparison. One pair of SM fiber will handle every speed from 10M to 100G, and over much longer distances than MM, no matter what grade.
Unfortunately, since the manufacturers haven't seen fit to push SM, the optics are extremely expensive, so we're stuck with 4-12 times the amount of installed fiber we really need.
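To put rough numbers on that fiber-count argument, here's a quick back-of-the-envelope sketch in Python. The strands-per-port figures are my assumptions for the common parallel optics (8 strands for 40GBase-SR4 over MPO, 20 for 100GBase-SR10), versus 2 strands for any duplex SM optic:

    # Illustrative strand counts for 48 ports (assumptions as above:
    # 40GBase-SR4 = 8 MM strands/port, 100GBase-SR10 = 20 MM
    # strands/port, any duplex SM optic = 2 strands/port).
    PORTS = 48
    for name, strands_per_port in [("40G parallel MM (SR4)", 8),
                                   ("100G parallel MM (SR10)", 20),
                                   ("duplex SM (any speed)", 2)]:
        print(f"{name:24} {PORTS * strands_per_port:4d} strands")
    # -> 384, 960, and 96 strands respectively, i.e. roughly the
    #    4-12x gap mentioned above.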
That was the reason for the push to the 10x10 MSA by people like Google
and other providers who did not want to use MM bundles and didn't want to
deal with the expense and power consumption of 100GBase-LR4. LR10 hasn't
really seen much adoption by the vendors, though; only compatible optics
from third-party vendors are available now.
40GBase-LR4 QSFP+ optics aren't really all that expensive these days. On
the gray market they are less than $2500.
Cisco and Arista also just came out with 40G running over a single duplex
MM fiber, 100 m over OM3, and I expect the other datacenter vendors to
follow suit shortly.
As for 10GBase-T in a transceiver, I haven't seen that on anyone's
roadmap. It will probably come eventually but not for awhile.
It must exist, as there is this:
Nah, that's a 10GBase-T PCI Express NIC in a box, which is fine and
dandy for what it does, but the PHY doesn't fit in the power envelope or
footprint of an SFP+ transceiver.
IIRC, it takes about 13W to maintain a 10GBase-T connection. That's a lot of
power to drain from a tiny board that wasn't designed to supply such loads.
~tom
Pluggable SFP+ transceiver. There are plenty of fixed-config 10GBase-T
devices out there. Power/space in an SFP+ package just isn't there yet.
Phil
+1. Cisco calls them Twinax, HP calls them DACs. I don't know what anyone else calls them as it hasn't come up in conversation for me.
Cisco appears to offer them in 1, 1.5, 2, 2.5, 3, and 5 meter passive, as well as 7 and 10 meter active. HP has them in 1, 3, 7, 10, and 15 meter; no idea what the passive/active breakdown might be (they don't appear to offer that information as freely). I've mostly used the 3-meter HP DACs so far, and I've been rather happy with them, particularly the cost savings versus a pair of 10G SFP+ fiber transceivers.
Tom,
I believe the newer 10GBase-T standard is between 1.5 and 4W per port depending on the cable length,
much better (colder!) than it was. You will also get slightly increased latency with 10GBase-T vs. SFP+.
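To put numbers on that, here's rough arithmetic for a 48-port switch (the ~1W per SFP+ optic figure is my assumption; the 10GBase-T figures are the ones in this thread):

    # Rough per-switch power comparison (my arithmetic; 1 W per SFP+
    # optic is an assumption, the 10GBase-T figures are from the thread).
    ports = 48
    print("old 10GBase-T PHYs:", ports * 13, "W")                   # 624 W
    print("new 10GBase-T PHYs:", ports * 1.5, "to", ports * 4, "W") # 72.0 to 192 W
    print("SFP+ optics:", ports * 1, "W")                           # 48 W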
Rather specifically, Twinax refers to cable with 2 center conductors in
its foam or plastic insulator *both within the same shield* -- generally,
I think always, a balanced pair.
The big issue appears to be that these are not always "consistently
functional" crossing vendor lines (sometimes product lines within the
same vendor). There does not appear to be any standardization in
place. Not sure how much of this is picky vendor software looking for
"branded" marks in their transceivers (e.g., Cisco "service
unsupported-transceiver") versus true incompatibilities.
We have had issues in test cases crossing vendor lines (Cisco / Brocade
/ Dell / HP) with a "twinax" link that just simply won't work. If
anyone has a clear explanation or better understanding, I'm all ears.
Personal experience comes from only a few testbed cases.
Most of the switch vendors have an "official" compatibility list, but I've found that generally the most common compatibility issue is active vs passive twinax.
Brocade edge switches and NICs are normally active-only, which seems to come up a lot, because most short cables are passive unless they are Brocade-branded. >5m is normally the cutoff for passive twinax. Pretty much everything else I've encountered supports passive.
For a while, the Intel X520 NICs, which are very common, didn't support active connections, but they have since released firmware that fixes this problem.
NetApp's lower-end gear doesn't support active twinax.
We've worked through the same issues with Brocade/Intel, although we found
that even though Brocade specs active-only, our ICX switches don't reject
passive cables. Oddly, the Intel-branded passive cables show up as
UNSUPPORTED (but FCI and Molex ones from Digikey show up as the correct
length and correct type of cable).
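For what it's worth, the passive/active flag and the cable length live in standard fields of the SFP+ EEPROM (SFF-8472), so a cable showing up as UNSUPPORTED is more likely the switch whitelisting the vendor name/part number bytes than anything malformed in the cable itself. Here's a minimal sketch of how a switch might classify a DAC from those bytes (offsets per SFF-8472; the function name and the 'eeprom' buffer are hypothetical):

    # Classify a DAC from its SFF-8472 A0h EEPROM page ('eeprom' is
    # assumed to be at least the first 56 bytes read over I2C).
    def classify_dac(eeprom: bytes) -> str:
        tech = eeprom[8]                  # "SFP+ Cable Technology" byte
        if tech & 0x04:                   # bit 2: passive cable
            kind = "passive"
        elif tech & 0x08:                 # bit 3: active cable
            kind = "active"
        else:
            kind = "unknown"
        length_m = eeprom[18]             # copper link length in meters
        vendor = eeprom[20:36].decode("ascii", "replace").strip()
        return f"{vendor}: {kind} twinax, {length_m} m"

A switch that only trusts its own branded vendor/PN strings would reject a cable even when these standard fields are perfectly sensible, which would explain branded cables reading as UNSUPPORTED while generic ones display correctly.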
If you do decide to go generic, make sure you check the wire gauge. Maybe
the Brocade SFP+ drive is weak, but with some 28 AWG 5m cables we've seen
a lot of errors. Switching to 26 AWG or 24 AWG solved the issue. I
suspect Brocade requires active cables just from their storage background.