Need help getting full rate from dual 40GbE network cards on Linux systems

Good afternoon,

I am having a problem with our network configuration. We are trying to achieve an overall bandwidth throughput in the neighborhood of 80 Gbps. We are using dual-port 40 Gbps Mellanox NICs (MCX314A-BCBT) and two separate 40 Gig Mellanox network switches, one for each network, with no cross connections. Every computer's first 40 Gb port on the MCX314A-BCBT card is connected to Network Switch #1 and its second port is connected to Network Switch #2. We have tried various configurations without any success. We are using iperf as the test tool. The server/receiving machine has two of the 40 Gb MCX314A-BCBT cards, and every card is plugged into a PCIe 3.0 x8 slot.
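
To rule out a slot that trained below Gen3 x8 (which would cap a card well under 40 Gbps per port), this is the sort of check we can run on the server; it is only a sketch, and the PCI address 03:00.0 and the interface name ens2 are placeholders for our actual devices:

    # List the Mellanox adapters and their PCI addresses
    lspci | grep -i mellanox

    # Negotiated link speed/width for one card (address is a placeholder);
    # LnkSta should show "Speed 8GT/s, Width x8" for a usable Gen3 x8 slot
    sudo lspci -s 03:00.0 -vvv | grep -E 'LnkCap|LnkSta'

    # Driver and firmware version of the corresponding interface (name is a placeholder)
    ethtool -i ens2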

Configuration #1:

  • Host A is iperf client on Network #1, Host C is iperf server on Network #1 on Card #1.  37.5 Gbps
  • Host B is iperf client on Network #2, Host C is iperf server on Network #2 on Card #1.  37.5 Gbps
  • The previous two tests were run one at a time (serially).
  • When we run the identical tests with the two clients in parallel, we drop down to 13-15 Gbps on each link (see the parallel-stream sketch after this list).
  • In this configuration we expected to see an overall combined rate approaching 60 Gbps, since the practical maximum for a single PCIe 3.0 x8 slot is around 63 Gbps, but the combined rate we actually see is less than the rate of a single link.
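
Because a single iperf TCP stream is easily limited by one CPU core, the sketch below is how we plan to rerun the parallel case with multiple streams per client and the two server instances pinned to separate cores. This assumes the classic iperf2 binary; the port numbers, core IDs, and addresses are placeholders:

    # On Host C: one iperf server per network, pinned to different cores
    taskset -c 2 iperf -s -p 5001 -w 512K &
    taskset -c 4 iperf -s -p 5002 -w 512K &

    # On Host A (Network #1) and Host B (Network #2), 8 parallel streams each
    iperf -c <HostC-net1-address> -p 5001 -P 8 -w 512K -t 30 -i 5
    iperf -c <HostC-net2-address> -p 5002 -P 8 -w 512K -t 30 -i 5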

Configuration #2:

  • Host A is iperf client on Network #1, Host C is iperf server on Network #1 on Card #1.  37.5 Gbps
  • Host B is iperf client on Network #1, Host C is iperf server on Network #1 on Card #2.  37.5 Gbps
  • The previous two tests were run one at a time (serially).
  • When we run the identical tests with the two clients in parallel, we drop down to 17-18 Gbps on each link (see the NUMA-pinning sketch after this list).
  • In this configuration we expected to see an overall combined rate approaching 75 Gbps (2 x 37.5), since the theoretical maximum across the two PCIe 3.0 x8 slots is around 126 Gbps, but the combined rate we actually see is less than the rate of a single link.
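
Because the two cards on Host C sit in different slots, they may hang off different NUMA nodes on the dual-socket box, and traffic that crosses the inter-socket link costs bandwidth. A sketch of how we intend to check that and pin accordingly (the PCI addresses, node number, and port number are placeholders):

    # NUMA node that owns each card
    cat /sys/bus/pci/devices/0000:03:00.0/numa_node
    cat /sys/bus/pci/devices/0000:81:00.0/numa_node

    # CPU-to-node layout
    numactl --hardware

    # Run the iperf server for Card #2 on the node that owns that card (node 1 is assumed here)
    numactl --cpunodebind=1 --membind=1 iperf -s -p 5002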

To rule out the switch as a potential bottleneck, we ran:

Configuration #3:

  • Host A is iperf client on Network #1, Host E is iperf server on Network #1 on Card #1.  37.5 Gbps
  • Host B is iperf client on Network #1, Host F is iperf server on Network #1 on Card #1.  37.5 Gbps
  • The previous two tests were run one at a time (serially).
  • When we run the identical tests with the two clients in parallel, we got 37.5 Gbps on each link.
  • This configuration validates that the switches can truly forward the traffic at full rate to different destinations on the same network, which points the bottleneck at the server host (see the per-core monitoring sketch below).
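
Since the switches check out, the next thing we plan to watch during the parallel runs is whether a single core on the server is saturated handling interrupts/softirqs and how the mlx4 interrupt vectors are spread across cores. A sketch (the interface name is a placeholder, and the affinity script is assumed to be the one shipped with MLNX_OFED/mlnx-tools, if installed):

    # Per-core utilisation; a single core pegged in %soft or %irq is a red flag
    mpstat -P ALL 2

    # How the mlx4 MSI-X vectors are distributed across cores
    grep mlx4 /proc/interrupts

    # Spread the NIC's IRQs over the cores local to its NUMA node
    set_irq_affinity.sh ens2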

We have tried two different machines as the "server": one with a single 4th-generation (Haswell) processor and 32 GB of RAM, and another with two 10-core Xeon processors and 64 GB of RAM. We are running Fedora 20 on all of the machines except the dual-Xeon box, which runs RHEL 7.0.
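
One more thing we plan to try on both server machines is the usual 40GbE host tuning; a sketch is below (the buffer sizes are only a starting point, the interface name is a placeholder, and jumbo frames require matching switch configuration):

    # Larger socket buffer limits for 40GbE TCP
    sysctl -w net.core.rmem_max=67108864
    sysctl -w net.core.wmem_max=67108864
    sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

    # Jumbo frames end to end, if the switches are configured for them
    ip link set dev ens2 mtu 9000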

Can anyone help us shed some light on why we are unable to achieve the roughly 80 Gbps throughput we expected?
