
Is this the best our FDR adapters can do?


We have a small test setup, illustrated below. I have run some ib_write_bw tests and got "decent" numbers, but not as fast as I anticipated. First, some background on the setup:

 

[Network layout diagram: ipoib_for_the_network_layout_after.png]

 

Two 1U storage servers each have an EDR HCA (MCX455A-ECAT). The other four each have a ConnectX-3 VPI FDR 40/56Gb/s mezzanine HCA, OEMed by Mellanox for Dell. The firmware version is 2.33.5040. This is not the latest (2.36.5000, according to hca_self_test.ofed), but I am new to IB and still getting up to speed on updating firmware with Mellanox's tools. The EDR HCA firmware was updated when MLNX_OFED was installed.
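
For completeness, this is roughly how I have been checking the installed firmware and the negotiated link rate on each node (just a sketch, assuming infiniband-diags and the Mellanox Firmware Tools (MFT) are installed; adjust the device name for your node):

# Negotiated link rate ("Rate: 56" is expected for an active FDR link) and firmware version:
[root@sc2u0n0 ~]# ibstat mlx4_0

# OFED's own self-test, which is where the 2.36.5000 "latest firmware" figure came from:
[root@sc2u0n0 ~]# hca_self_test.ofed

# With MFT installed, this lists the PSID and installed firmware for all local HCAs:
[root@sc2u0n0 ~]# mlxfwmanager --query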

 

All servers:

CPU: 2 x Intel E5-2620v3, 2.4GHz, 6 cores/12 HT

RAM: 8 x 16GiB DDR4 1866MHz DIMMs

OS: CentOS 7.2 Linux ... 3.10.0-327.28.2.el7.x86_64 #1 SMP Wed Aug 3 11:11:39 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

OFED: MLNX_OFED_LINUX-3.3-1.0.4.0 (OFED-3.3-1.0.4)

 

A typical ib_write_bw test:

 

Server:

[root@fs00 ~]# ib_write_bw -R

 

 

************************************

* Waiting for client to connect... *

************************************

---------------------------------------------------------------------------------------

                    RDMA_Write BW Test

Dual-port       : OFF Device         : mlx5_0

Number of qps   : 1 Transport type : IB

Connection type : RC Using SRQ      : OFF

CQ Moderation   : 100

Mtu             : 2048[B]

Link type       : IB

Max inline data : 0[B]

rdma_cm QPs : ON

Data ex. method : rdma_cm

---------------------------------------------------------------------------------------

Waiting for client rdma_cm QP to connect

Please run the same command with the IB/RoCE interface IP

---------------------------------------------------------------------------------------

local address: LID 0x03 QPN 0x01aa PSN 0x23156

remote address: LID 0x05 QPN 0x4024a PSN 0x28cd2e

---------------------------------------------------------------------------------------

#bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]

65536      5000             6082.15            6081.07   0.097297

---------------------------------------------------------------------------------------

 

Client:

[root@sc2u0n0 ~]# ib_write_bw -d mlx4_0 -R 192.168.111.150

---------------------------------------------------------------------------------------

                    RDMA_Write BW Test

Dual-port       : OFF Device         : mlx4_0

Number of qps   : 1 Transport type : IB

Connection type : RC Using SRQ      : OFF

TX depth        : 128

CQ Moderation   : 100

Mtu             : 2048[B]

Link type       : IB

Max inline data : 0[B]

rdma_cm QPs : ON

Data ex. method : rdma_cm

---------------------------------------------------------------------------------------

local address: LID 0x05 QPN 0x4024a PSN 0x28cd2e

remote address: LID 0x03 QPN 0x01aa PSN 0x23156

---------------------------------------------------------------------------------------

#bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]

65536      5000             6082.15            6081.07   0.097297

---------------------------------------------------------------------------------------

 

Now 6082 MB/s ≈ 48.65 Gb/s. Even taking the 64/66 encoding overhead into account, I expected something over 50 Gb/s. Is this the best this setup can do, or is there anything I can do to push the speed up further?
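
In case it helps the discussion, the next thing I plan to try is sweeping message sizes instead of the default 64KiB, roughly like this (just a sketch using standard perftest options; I have not verified that it changes the picture on this hardware):

# Server side (EDR host):
[root@fs00 ~]# ib_write_bw -R -a --report_gbits

# Client side (FDR host): -a sweeps message sizes from 2B up to 8MiB,
# --report_gbits reports Gb/s instead of MB/s:
[root@sc2u0n0 ~]# ib_write_bw -d mlx4_0 -R -a --report_gbits 192.168.111.150

If a larger message size alone does not help, my next guesses would be trying more than one queue pair (the -q option) or pinning the test to a core on the HCA's NUMA node (e.g. with taskset), but I have not tested either yet.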

 

I look forward to hearing the experiences and observations of the more experienced camp! Thanks!

