Can anyone share their network card experience?

Hi

Can anyone share their network card experience?

Is an onboard PCI Express card better, or is a plug-in PCI Express card in a slot the better option?

How is their performance at gigabit transfer rates?

Thank you so much

It depends on the speed of the PCI slot. That said, you are only
trying to transfer 1 Gb/s.
http://en.wikipedia.org/wiki/PCI_Express
Note the discussion there about full duplex.

"PCI Express 1.0a
In 2003, PCI-SIG introduced PCIe 1.0a, with a data rate of 250 MB/s
and a transfer rate of 2.5 GT/s."

"PCI Express 2.0
PCI-SIG announced the availability of the PCI Express Base 2.0
specification on 15 January 2007.[9] The PCIe 2.0 standard doubles the
per-lane throughput from the PCIe 1.0 standard's 250 MB/s to 500 MB/s.
This means a 32-lane PCI connector (x32) can support throughput up to
16 GB/s aggregate. The PCIe 2.0 standard uses a base clock speed of
5.0 GHz, while the first version operates at 2.5 GHz."
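Putting those quoted per-lane figures next to gigabit Ethernet, here's a quick back-of-envelope sketch (my arithmetic, not from the article; line rates only, ignoring protocol overhead):

```python
# Sanity check: can a single PCIe lane feed a gigabit NIC?
# Per-lane figures are from the quoted Wikipedia text; PCIe is full
# duplex, so each direction gets the full per-lane throughput.

GIGABIT_ETHERNET_MBPS = 1000 / 8   # 1 Gb/s = 125 MB/s per direction
PCIE_1_LANE_MBPS = 250             # PCIe 1.0a: 250 MB/s per lane, per direction
PCIE_2_LANE_MBPS = 500             # PCIe 2.0:  500 MB/s per lane, per direction

headroom_v1 = PCIE_1_LANE_MBPS / GIGABIT_ETHERNET_MBPS
headroom_v2 = PCIE_2_LANE_MBPS / GIGABIT_ETHERNET_MBPS

print(f"PCIe 1.0a x1 headroom over 1GbE: {headroom_v1:.0f}x")  # 2x
print(f"PCIe 2.0  x1 headroom over 1GbE: {headroom_v2:.0f}x")  # 4x
```

So even a single PCIe 1.0a lane has twice the bandwidth a gigabit NIC can use in each direction, which is why the slot generation rarely matters at 1 Gb/s.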

I can't give you practical advice, but it's a good place to start your reading...

Cheers
Heath

The question of which is better, onboard vs. plug-in, is in part determined by the type (make/model) of motherboard you are speaking of: how IRQs are allocated (which is something you may be able to adjust), where it is attached to the bus, etc. Also, what comes with the main board is what you get, whereas you can purchase optional NICs with extra processors (a TOE, for example) which offload work from your main CPU.

For 10Gbit we use Intel cards for production service machines, and ConnectX/Intel in the HPC cluster.

-g

For 10Gbit we use Intel cards for production service machines, and ConnectX/Intel in the HPC cluster.

Greg - I've not been exposed to 10G on the server side.
Does the server handle the traffic load well (even with offloading)?
That's a LOT of web requests / app queries per second!

Or are you using 10G mainly for iSCSI / file serving / static content?

Cheers

Hi,

Most of our traffic is heading directly into memory, not hitting the local disks, on the HPC end of things. Our file servers feed the network with around 24 x 10Gbit (active/active clusters), and regularly run at over 80 percent on all ports during runs; this is all HPC / file-movement traffic. We have instruments which generate over 6 TB of data per run, every 3 days, around the clock, 365 days a year, and we have about 20 of these instruments. So most of the data on 10Gbit is indeed static, or moving to/from a file server and the HPC clusters.
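Those instrument numbers imply a substantial sustained load. A rough estimate (my arithmetic, based only on the figures above, decimal TB, ignoring burstiness):

```python
# Average data rate implied by: 20 instruments, each producing
# 6 TB every 3 days, running continuously.

instruments = 20
tb_per_run = 6
run_days = 3

tb_per_day = instruments * tb_per_run / run_days   # 40 TB/day
bits_per_day = tb_per_day * 1e12 * 8
avg_gbps = bits_per_day / 86400 / 1e9              # ~3.7 Gb/s sustained

print(f"{tb_per_day:.0f} TB/day -> average {avg_gbps:.1f} Gb/s around the clock")
```

That average alone exceeds a bonded pair of 1Gbit links, and real traffic is bursty (runs finish and get moved at once), which is presumably why the file servers sit behind many 10Gbit ports.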

iSCSI we run on its own network hardware, autonomous from the 'data' network. It's not in wide deployment here; only the file server is connected via 10Gbit, and the hosts using iSCSI (predominantly KVM and VMware clusters) are being fed over multiple 1Gbit links for their iSCSI requirements.

Our external internet servers are connected to the internet via 1Gbit links, not 10Gbit, but apparently that is coming next year. The type of traffic they'll see will not be very chatty/interactive; it'll be researchers downloading data sets ranging in size from a few hundred megabytes to a few TB.

take care,
-g

Hi

Can anyone share their network card experience?

Is an onboard PCI Express card better, or is a plug-in PCI Express card in a slot the better option?

Both are likely to be PCIe x1 interfaces if it's a single- or dual-port
chipset.

How is their performance at gigabit transfer rates?

It should be 100% of gigabit line rate in an appropriately fast machine.

You'll find that most 4-port gigabit or 10-gigabit cards have x4 or x8 connectors.
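The lane counts follow from the aggregate port bandwidth. A small sketch (my numbers, using the PCIe 2.0 per-lane figure quoted earlier in the thread; Ethernet line rates only, no protocol overhead):

```python
# Minimum PCIe lane count needed to carry a card's aggregate port
# bandwidth, assuming PCIe 2.0 (500 MB/s per lane, per direction).

PER_LANE_MBPS = 500  # PCIe 2.0, per direction

def min_lanes(ports, port_gbps):
    """Smallest standard lane width (1/2/4/8/16) covering the aggregate rate."""
    need_mbps = ports * port_gbps * 1000 / 8
    for lanes in (1, 2, 4, 8, 16):
        if lanes * PER_LANE_MBPS >= need_mbps:
            return lanes
    return None  # wouldn't fit even in x16 at this PCIe generation

print(min_lanes(4, 1))   # 4-port gigabit: x1 exactly matches 500 MB/s
print(min_lanes(2, 10))  # 2-port 10GbE: x8
print(min_lanes(4, 10))  # 4-port 10GbE: x16 at PCIe 2.0 rates
```

Note that a 4-port gigabit card fits an x1 link only with zero headroom, which is one reason real cards ship with x4 connectors; likewise, multi-port 10GbE cards manage with x8 in practice because PCIe 3.0 raises the per-lane figure to roughly 985 MB/s.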