FCoE Deployment

Does anyone know of any company or organization deploying FCoE[1] in a
production environment? I'm curious how widely adopted this technology is.

[1] http://en.wikipedia.org/wiki/Fibre_Channel_over_Ethernet
    http://fcoe.com/

Thanks,

* Jack Morgan

FCoE was until very recently the only way to do centralized block
storage to the Cisco UCS server blades, so I'd imagine it's quite widely
adopted. That said, we don't run FCoE outside of the UCS "black box" -
its uplinks to the SAN are just regular FC.

I do. What would you like to know?

Agreed; the only FCoE installations I have come across are for Cisco UCS chassis. Personally, it's not something I have seen regularly adopted yet outside proprietary hardware configurations such as UCS deployments.

I'm certainly also keen to hear about any other use cases and deployments that others have implemented using full-blown FCoE.

Kind regards,

Pierce Lynch

Yep, we're doing FCoE in an EMC Symmetrix, ESX, and Nexus environment. All of our FCoE is over 10 Gb CNAs. We are having good results, though we initially hit an odd bug on the QLogic cards in our HP DL580s (it affected twinax only): if you dropped an uplink, i.e. when testing failover, disk I/O throughput dropped badly, and the issue only cleared after a power cycle of the server. QLogic never fixed the issue, so we RMA'd nearly 30k in 10 Gb CNA cards and swapped everything to fiber. So beware of that. I can get more detail on the affected cards, etc. tomorrow if anyone is interested. Our entire production VMware environment runs off that setup.

David
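
(As an aside, below is a minimal Python sketch of the kind of sequential-read check one might use to confirm the symptom David describes: take a baseline, drop one uplink as in a failover test, and re-measure. The device path and sample sizes are hypothetical placeholders for illustration, not the actual procedure or hardware used in his environment; point it at a test LUN only.)

#!/usr/bin/env python3
"""Rough sequential-read throughput sample for a test LUN.

The device path below is a hypothetical placeholder; this is an
illustrative sketch, not the test procedure used in the environment
described above.
"""
import time

DEVICE = "/dev/sdX"                  # hypothetical test LUN reached over FCoE
BLOCK_SIZE = 1024 * 1024             # read in 1 MiB chunks
TOTAL_BYTES = 512 * 1024 * 1024      # sample 512 MiB per run

def read_throughput(path):
    """Return sequential read throughput in MB/s for one sample."""
    read = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as dev:
        while read < TOTAL_BYTES:
            chunk = dev.read(BLOCK_SIZE)
            if not chunk:
                break
            read += len(chunk)
    elapsed = time.monotonic() - start
    return read / elapsed / 1e6

if __name__ == "__main__":
    # Run once for a baseline, again after dropping an uplink, and compare.
    print("sequential read: %.1f MB/s" % read_throughput(DEVICE))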

Our reason, by the way, was to cut down on cabling and switch costs; it starts to add up when you consider how many blades get eaten by 1 Gb copper. We're going to DL580s and a few HP chassis. A chassis used to eat nearly 64 copper 1 Gb and 32 Fibre Channel connections; with FCoE/CNAs, we're literally talking 4 x 10 Gb cables for 16 blades.

David
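
(For anyone curious how the savings add up, here is a quick back-of-the-envelope calculation in Python using the per-chassis figures David quotes above. The per-blade breakdown in the comments is an assumption for illustration only, not a statement about any particular UCS or HP configuration.)

# Rough cable count per 16-blade chassis, using the figures quoted above.
blades = 16

# Legacy design: 1 Gb copper for LAN plus dedicated Fibre Channel for SAN.
copper_1g_links = 64      # roughly 4 x 1 Gb copper per blade (assumed split)
fc_links = 32             # roughly 2 x FC per blade (assumed split)
legacy_cables = copper_1g_links + fc_links

# Converged design: a handful of 10 Gb CNA links carry both LAN and FCoE.
cna_10g_links = 4

print("legacy cabling : %d cables (%.0f per blade)"
      % (legacy_cables, legacy_cables / blades))
print("FCoE/CNA design: %d cables (%.2f per blade)"
      % (cna_10g_links, cna_10g_links / blades))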

Hey everyone,

I've not forgotten about this. I plan on writing up the details of our FCoE
experiences on Monday and will post them here. I have a few things going on
today and this weekend that will keep me from focusing on this.

David.