
Performance of 40GbE NICs


Hi,


I have two machines, each with 4 40GbE NICs:

# lspci | grep -i mellanox
02:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
07:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
84:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]
85:00.0 Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3]

[   15.160714] mlx4_core: Mellanox ConnectX core driver v2.2-1 (Feb, 2014)
[   40.463976] mlx4_en: Mellanox ConnectX HCA Ethernet driver v2.2-1 (Feb 2014)
[   40.588902] <mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v2.2-1 (Feb 2014)
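
In case it matters, the negotiated link speed and the PCIe slot of each port can be checked along these lines (eth2 and 02:00.0 are just examples taken from the lspci output above; the interface names on your machines will differ):

# ethtool eth2 | grep -i speed
# lspci -s 02:00.0 -vv | grep -i lnksta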

 

I bond the four interfaces on both machines to form a single interface, bond0 (a rough sketch of the bond setup and the iperf invocation follows the sysctl values below). The issue is that running iperf between the two bonded interfaces gives poor performance of only ~2 Gbits/sec. My sysctl.conf reads:

 

net.core.rmem_max = 56623104
net.core.wmem_max = 56623104
net.core.rmem_default = 56623104
net.core.wmem_default = 56623104
net.core.optmem_max = 56623104
net.ipv4.tcp_rmem = 4096 87380 56623104
net.ipv4.tcp_wmem = 4096 65536 56623104
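
For reference, the bond and the test are nothing exotic; the sketch below shows roughly what I mean. The bonding mode, interface names, and addresses here are illustrative placeholders rather than a verbatim copy of my configuration:

# modprobe bonding mode=802.3ad miimon=100
# ip link set eth2 down; ip link set eth2 master bond0
# ip link set eth3 down; ip link set eth3 master bond0
# ip link set eth4 down; ip link set eth4 master bond0
# ip link set eth5 down; ip link set eth5 master bond0
# ip addr add 192.168.10.1/24 dev bond0
# ip link set bond0 up

On machine A: # iperf -s
On machine B: # iperf -c 192.168.10.1 -t 30

(As I understand it, a single TCP stream rides only one slave in most bonding modes, so even a single-stream iperf should be able to approach one port's 40 Gbits/sec rather than ~2.)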

 

I also upped the ring parameters to RX 8192 and TX 8192, but to no effect. Am I missing some obvious setting here? Both machines have 1 TB of RAM and plenty of processing power. I should mention that even without bonding, over a single interface, I get the same ~2 Gbits/sec.
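
For completeness, the ring sizes were raised with ethtool, i.e. something like this on each port (eth2 is again just an example name):

# ethtool -G eth2 rx 8192 tx 8192
# ethtool -g eth2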

