Does anyone here have experience running copper 10Gbase-T networks? It
seems like the standard just died out. For us it would make a lot of sense
for our applications -- even if throughput and latency aren't as great. If
anyone out there knows of any *copper* 10 gig-t switches (48 port?), I'd be
interested to hear your experiences. I can't seem to find any high-density
ones from major vendors.
Is there something unique about your environment that wouldn't allow you
to use 10gbit SFP+-based switches with DAC (Direct Attach Copper) cables?
Those seem fairly well supported.
Mostly backwards compatibility and simplicity. We're planning for some
super-high-density virtualization/storage projects mixed in with
lower-bandwidth gear, and sticking to one type of cable for everything
would be convenient. I thought DAC had some distance limitations as well.
This is all speculation, though; I don't have any personal experience with
the 10Gbase-T stuff either. I have no idea what to expect performance-wise.
Gotcha. With SFP+ I think the only nod to backward compatibility would
be 1gbit RJ-45 SFPs, which can get a little spendy in large numbers
(although so can DACs).
As for distance, I admit I haven't encountered any DACs longer than 15
meters (~49 feet) -- not that I'm positive they don't exist.
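For reference, the rough reach numbers I've seen for the common 10GbE media
options look like this -- these are ballpark figures from vendor datasheets,
not spec guarantees, so treat the sketch below accordingly:

    # Rough reach comparison for common 10GbE media options.
    # Figures are typical vendor numbers, not guarantees -- check datasheets.
    MEDIA_REACH_M = {
        "SFP+ passive DAC (twinax)": 7,    # usually sold in 5/7 m, some 8.5 m
        "SFP+ active DAC (twinax)": 15,    # longest I've seen quoted
        "10GBASE-T over Cat 6": 37,        # 37-55 m, limited by alien crosstalk
        "10GBASE-T over Cat 6A": 100,
        "SFP+ SR optics over OM3": 300,
    }

    for media, reach in sorted(MEDIA_REACH_M.items(), key=lambda kv: kv[1]):
        print(f"{media:28s} ~{reach:4d} m")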
10Gbase-T doesn't make much sense for a new virtual environment. Once you factor in the cost of the cabling and power, you probably would have been better off with DAC or FET interconnects. Also, 10Gbase-T does not necessarily work with legacy wiring, depending on how it was run: large bundles of wire cause alien-crosstalk issues on legacy cabling, which is the reason for the large jackets on Cat 6A. http://www.siemon.com/us/learning/alien-crosstalk-guide.asp
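To put very rough numbers on the power point: early 10GBASE-T PHYs were
commonly quoted around 2-4 W per port end, versus well under 1 W for a
passive DAC. A back-of-envelope sketch (every input here is an assumption --
substitute your own wattages and electricity price):

    # Back-of-envelope yearly power cost: 10GBASE-T PHYs vs. SFP+ passive DAC.
    # All inputs are assumptions -- substitute your own figures.
    LINKS = 48                  # one fully populated 48-port switch
    WATTS_10GBASE_T = 3.0       # per PHY end; early silicon ran higher
    WATTS_SFP_DAC = 0.5         # passive twinax draws almost nothing
    USD_PER_KWH = 0.10
    HOURS_PER_YEAR = 24 * 365

    def yearly_cost(watts_per_end):
        # Every link burns power in the PHY at both ends.
        kwh = watts_per_end * 2 * LINKS * HOURS_PER_YEAR / 1000.0
        return kwh * USD_PER_KWH

    delta = yearly_cost(WATTS_10GBASE_T) - yearly_cost(WATTS_SFP_DAC)
    print(f"10GBASE-T power premium: ~${delta:,.0f}/year across {LINKS} links")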
I'm not saying it won't work for your scenario, as I'm not familiar with your environment; just keep in mind that in most environments DAC is cheaper and provides better latency for your storage traffic.
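On the latency side, the figure usually quoted is that a 10GBASE-T PHY adds
on the order of 2-2.5 microseconds per link (mostly the LDPC block coding),
versus a few hundred nanoseconds for SFP+ DAC. A quick sketch of what that
does to a storage path -- the per-hop numbers are commonly-cited ballparks,
not measurements:

    # Hypothetical latency budget for a host -> switch -> switch -> storage path.
    # Per-link PHY latencies are commonly-quoted ballparks, not measurements.
    HOPS = 3                        # three links in the example path
    PHY_LATENCY_US = {
        "10GBASE-T": 2.5,           # LDPC block coding dominates
        "SFP+ DAC": 0.3,
    }

    for media, per_hop in PHY_LATENCY_US.items():
        print(f"{media:10s} adds ~{per_hop * HOPS:.1f} us over {HOPS} links")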
> Does anyone here have experience running copper 10Gbase-T networks? It
> seems like the standard just died out.
Well, our new Supermicro servers come with 10Gbase-T standard on
the motherboard.
> For us it would make a lot of sense
> for our applications -- even if throughput and latency aren't as great. If
> anyone out there knows of any *copper* 10 gig-t switches (48 port?)
Also, IBM G8364 (uses Broadcom Trident merchant silicon).
I believe the Force10 S4810 (also Broadcom Trident) is only SFP+?
Intel will force 10GBASE-T on all of us since they can make it backwards
compatible with 1000BASE-T. I think this will make the technology take off
over the next year or so.
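If you do end up mixing 10GBASE-T ports with 1000BASE-T gear, the negotiated
speed is easy to check from the OS. A small sketch that shells out to
ethtool on Linux -- assumes ethtool is installed, and "eth0" is just a
placeholder interface name:

    import re
    import subprocess

    def link_speed(iface="eth0"):
        # Parse the "Speed: 10000Mb/s" line from ethtool output.
        # Assumes Linux with ethtool installed; iface is a placeholder.
        out = subprocess.run(["ethtool", iface],
                             capture_output=True, text=True, check=True).stdout
        match = re.search(r"Speed:\s*(\d+)Mb/s", out)
        return int(match.group(1)) if match else None

    if __name__ == "__main__":
        speed = link_speed("eth0")
        print(f"negotiated: {speed} Mb/s" if speed else "link speed unknown")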
Been very happy running SFP+ twinax but sometimes I do wish I could go
further than 5/7/8.5 meters.
> Does anyone here have experience running copper 10Gbase-T networks?
Yes.
> It seems like the standard just died out. For us it would make a lot of sense
> for our applications -- even if throughput and latency aren't as great. If
> anyone out there knows of any *copper* 10 gig-t switches (48 port?), I'd be
> interested to hear your experiences. I can't seem to find any high-density
> ones from major vendors.
Well, I'm not sure about 48 port. I have several of these:
It was really unfortunate of Intel to release Romley with only 10G copper support at launch; I hear, though, that motherboards with integrated SFP+ ports are coming soon.