
Poor InfiniBand Performance with MT25418 on ESXi 5.1.0 U2


I am using an HP c7000 enclosure with four BL685c G1 blades and an HP 4X DDR IB Switch Module. The blades are running VMware ESXi 5.1.0 U2 with the Mellanox OFED stack (MLNX-OFED-ESX-1.8.1.0) and the Mellanox mlx4_en-mlnx-1.6.1.2 driver. The adapters are recognized by ESXi as follows:

# esxcli network nic list | grep 10G

vmnic_ib0  0000:047:00.0  ib_ipoib  Up    20000  Full    00:23:7d:94:d8:7d  2044 Mellanox Technologies MT25418 [ConnectX VPI - 10GigE / IB DDR, PCIe 2.0 2.5GT/s]

vmnic_ib1  0000:047:00.0  ib_ipoib  Up    20000  Full    00:23:7d:94:d8:7e  2044 Mellanox Technologies MT25418 [ConnectX VPI - 10GigE / IB DDR, PCIe 2.0 2.5GT/s]
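
In case it is relevant, the per-adapter details (driver, link status, advertised modes) can be dumped with the command below; vmnic_ib0 is simply the name from the listing above.

# esxcli network nic get -n vmnic_ib0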

I am also using a Topspin 120 switch with the Subnet Manager running on it. The MTU on the vSwitch and port group is set to 2044 and the switch can handle up to 2048, so MTU-wise everything should be fine.
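
For reference, this is roughly how the MTU can be verified and set on the host side; vSwitch1 and vmk1 are placeholders for the actual vSwitch and VMkernel interface names, not necessarily the ones on my hosts.

# esxcli network vswitch standard list                                      (shows the current vSwitch MTU)
# esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=2044
# esxcli network ip interface list                                          (shows the VMkernel interface MTU)
# esxcli network ip interface set --interface-name=vmk1 --mtu=2044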

What bothers me is that when I measure the throughput between two nodes with iperf, I get around 4-5 Gbit/s. To be honest, with a 4X DDR link I was expecting something in the ballpark of 7-9 Gbit/s.
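
To put the expectation in numbers: a 4X DDR link signals at 20 Gbit/s, and 8b/10b encoding leaves 16 Gbit/s of usable data rate, so even with IPoIB and TCP overhead I was hoping for noticeably more than 5 Gbit/s. The test was a plain iperf run along these lines (the address, window size and stream count below are placeholders, not my exact values):

# iperf -s -w 256K                                    (on the first node)
# iperf -c 10.0.0.1 -P 4 -w 256K -t 30 -i 5           (on the second node, against the first node's IPoIB address)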

Is there something that I am doing wrong?

Any input will be appreciated.

Thanks in advance.

