
Feasibility of using ConnectX-3 in HP SL230c


I am trying to get some insights on how to maximize I/O performance beyond 2x10 GbE per server for HP SL230c servers.

 

Since one 10 GbE leg will be tied to a storage network, and since I expect to run I/O-heavy workloads, I am aiming for additional I/O capacity beyond just a single extra 10 GbE port.

 

Thus I am considering either a Mellanox ConnectX-3 card (mellanox.com/related-docs/p) or Interface Masters' low-profile version of the Niagara 32714L NIC (Niagara 32714L - Quad Port Fiber 10 Gigabit Ethernet NIC, Low Profile PCI-e Server Adapter Card).

 

However, several questions arise:

a) Are any of the ConnectX-3 cards low-profile enough to fit and work in HP SL230c servers?

b) If yes, is it only the single-port 1G/10G/40G card, or also the dual-port 1G/10G/40G version?

c) If I make the servers deal mostly with I/O, how many 10G ports can an HP SL230c with dual 8-core CPUs saturate - or do I already hit PCIe bus limits with the second 10 GbE port? (See the quick PCIe arithmetic after this list.)

d) The original HP 2x10GbE cards use approx. 11 W. What power draw should I expect for the single- and dual-port Mellanox cards? I certainly want to avoid running into any thermal trouble.

e) What kind of performance have people achieved for VM-to-VM traffic in a data center under OpenStack/KVM using the Mellanox SR-IOV implementation? Is any such SR-IOV performance data available anywhere, even if it is the usual VMware material?

f) How can SR-IOV-enabled ports and their bandwidth be discovered in OpenStack when creating networks between VMs? (A host-side enumeration sketch follows below.)
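
To put question c) in perspective, here is a rough back-of-the-envelope estimate, assuming the SL230c slot is PCIe 3.0 x8 (the ConnectX-3 is a PCIe 3.0 x8 card) and a rough 85% allowance for protocol overhead; these are theoretical limits, not measurements.

```python
# Rough PCIe headroom estimate for question c).
# Assumption: the SL230c slot is PCIe 3.0 x8; numbers are theoretical, not measured.

PCIE3_PER_LANE_GTPS = 8.0             # PCIe 3.0 raw rate per lane (GT/s)
ENCODING_EFFICIENCY = 128.0 / 130.0   # PCIe 3.0 uses 128b/130b encoding
LANES = 8

raw_gbps = PCIE3_PER_LANE_GTPS * ENCODING_EFFICIENCY * LANES  # ~63 Gb/s per direction
protocol_overhead = 0.85              # rough allowance for TLP/flow-control overhead
usable_gbps = raw_gbps * protocol_overhead

ports_10gbe = int(usable_gbps // 10)
print(f"~{raw_gbps:.1f} Gb/s raw, ~{usable_gbps:.1f} Gb/s usable "
      f"-> roughly {ports_10gbe} x 10 GbE per direction before the slot is the bottleneck")
```

So on paper the slot itself should comfortably carry several 10 GbE ports per direction; whether the CPUs can actually saturate them is the part I am unsure about.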
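
Regarding question f), on the compute host itself the SR-IOV capability can at least be read from sysfs; below is a minimal sketch (Linux host assumed, interfaces discovered via /sys/class/net). What I am really after is how Neutron discovers and exposes this when networks between VMs are created.

```python
# Minimal host-side sketch: list NICs that expose SR-IOV via sysfs (Linux, kernel >= 3.8).
# This only shows what the compute host sees; how OpenStack/Neutron exposes it is the open question.
import glob
import os

def read_sysfs(path):
    """Return the contents of a sysfs file, or 'n/a' if it cannot be read."""
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for dev_path in glob.glob("/sys/class/net/*/device"):
    iface = dev_path.split("/")[-2]
    totalvfs_path = os.path.join(dev_path, "sriov_totalvfs")  # max VFs the NIC supports
    if not os.path.exists(totalvfs_path):
        continue  # NIC (or its driver) does not expose SR-IOV
    numvfs = read_sysfs(os.path.join(dev_path, "sriov_numvfs"))  # VFs currently enabled
    speed = read_sysfs(f"/sys/class/net/{iface}/speed")          # link speed in Mb/s
    print(f"{iface}: total VFs={read_sysfs(totalvfs_path)}, enabled VFs={numvfs}, speed={speed} Mb/s")
```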

