
Re: 40Gb/s IPoIB only gives 5Gb/s real throughput?!


Yes, the signaling rate is 40Gb/s, but QDR sends 8 bits of data in every 10 bits on the wire (8b/10b encoding), so the maximum data throughput is 32Gb/s; on top of that, the PCIe bus will limit you to about 25Gb/s.

Keep in mind that hardware-to-hardware performance is better than software-to-software performance.  I've only used Mellanox cards with Linux, and hardware-to-hardware tests hit 25Gb/s with ConnectX-2 cards.

The IB equipment you are using has 4 lanes (pairs of wires) running at 10Gb/s each - hence 40Gb/s total.
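
To put the arithmetic above in one place, here is a quick sketch in Python; the ~25Gb/s figure is just the rough PCIe ceiling quoted above, not a measurement:

lanes = 4                      # QDR IB is a 4X link
signal_rate_gbps = 10.0        # per-lane signaling rate
encoding_efficiency = 8 / 10   # 8b/10b line coding: 8 data bits per 10 line bits

line_rate = lanes * signal_rate_gbps           # 40 Gb/s headline rate
data_rate = line_rate * encoding_efficiency    # 32 Gb/s maximum data rate
pcie_ceiling_gbps = 25.0                       # rough practical PCIe limit (as stated above)

print(f"line rate:       {line_rate:.0f} Gb/s")
print(f"after 8b/10b:    {data_rate:.0f} Gb/s")
print(f"practical limit: ~{min(data_rate, pcie_ceiling_gbps):.0f} Gb/s")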

 

Real-world file sharing, even with older 10Gb/s InfiniHost cards, is better than 10Gb/s ethernet.  My MAXIMUM performance tests (using the Linux fio program) are below.  That being said, we've avoided Windows file servers since at least Windows 2000 - the performance has been terrible compared to Linux, especially when you factor in the cost of the hardware required.
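
Something along these lines will run a similar test (a small Python wrapper around fio; the mount point and job parameters here are illustrative assumptions, not my exact job file):

import subprocess

cmd = [
    "fio",
    "--name=seqread",         # arbitrary job name
    "--directory=/mnt/nfs",   # placeholder mount point for the share under test
    "--rw=read",              # sequential reads
    "--bs=256k",              # large block size for a bandwidth-oriented test
    "--ioengine=libaio",
    "--direct=1",             # bypass the client page cache
    "--iodepth=8",
    "--numjobs=4",
    "--runtime=30",           # 30-second window, matching the table below
    "--time_based",
    "--group_reporting",
]
subprocess.run(cmd, check=True)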

 

I would suggest comparing the exact same servers over an ethernet link to see how it compares.  In the end theoretical performance is nice - but what really matters is the actual software you are using.  In my case, after going to 10Gb ethernet or QDR IB, things like data replication (ZFS snapshots, rsync) went from 90 minutes to under 3 minutes.  It was often not the increased bandwidth but the lower latency (IOPS) that mattered.  For user applications accessing the file server, compile times were only reduced by about 30% going to InfiniBand or 10Gb ethernet - but the 10Gb ethernet is around 10x as expensive.  I've not performance-tested our Oracle database - but it went to 10Gb ethernet because my IB setup is for the students and I don't trust it yet on a "corporate" server.
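
To see why latency can dominate, here's a toy model (the file counts, sizes, and round-trip times are made-up illustrative numbers, not measurements from my systems):

def replication_time_s(n_files, avg_file_mb, rtt_ms, bandwidth_gbps, round_trips_per_file=4):
    """Rough model: each file costs a few protocol round trips plus its transfer time."""
    latency_s = n_files * round_trips_per_file * rtt_ms / 1000.0
    transfer_s = (n_files * avg_file_mb * 8) / (bandwidth_gbps * 1000.0)
    return latency_s + transfer_s

# 200,000 small files, 50 kB average (about 10 GB total):
for name, rtt_ms, gbps in [("1GbE", 0.2, 1.0), ("10GbE", 0.05, 10.0), ("QDR IB", 0.005, 25.0)]:
    minutes = replication_time_s(200_000, 0.05, rtt_ms, gbps) / 60
    print(f"{name:7s} ~{minutes:.1f} min")  # per-file round trips dominate on the slower links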

 

In the case of file sharing, you'll want to check whether SMB is using the old NetBIOS ports 137-139 instead of port 445, as that can impact performance.
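
A quick first check is to see which of those ports the server answers on (a small sketch; "fileserver" is a placeholder hostname):

import socket

def tcp_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "fileserver"  # placeholder - use your server's name
for port, label in [(445, "direct SMB"), (139, "legacy NetBIOS session")]:
    state = "open" if tcp_open(host, port) else "closed/filtered"
    print(f"TCP {port} ({label}): {state}")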

 

Also - there is no way to exploit the exceptionally low latency of InfiniBand unless you've got SSDs or your data in RAM.

 

 

Network                        Data in     Aggregate bandwidth   Bandwidth        Latency   IOPS
                               30 sec      (MB/s / Gb/s)         (MB/s / Gb/s)    (ms)
QDR IB 40Gb/s, NFS over RDMA   94 GB       3,100 / 25            802 / 6.4        0.615     12,535
DDR IB 20Gb/s, NFS over RDMA   24.4 GB     834 / 6.7             208 / 1.7        2.4       3,256
SDR IB 10Gb/s, NFS over RDMA   22.3 GB     762 / 6.1             190 / 1.5        2.57      2,978
QDR IB 40Gb/s                  16.7 GB     568 / 4.5             142 / 1.1        3.4       2,218
DDR IB 20Gb/s                  13.9 GB     473 / 3.8             118 / 0.94       4.1       1,845
SDR IB 10Gb/s                  13.8 GB     470 / 3.8             117 / 0.94       4.2       1,840
10Gb/s ethernet                5.9 GB      202 / 1.6             51 / 0.41        9.7       793
1Gb/s ethernet                 3.2 GB      112 / 0.90            28               17.8      438
100Mb/s ethernet               346 MB      11.5                  2.9              174       45
10Mb/s ethernet via switch     36 MB       1.2                   279 kB/s         1,797     4
10Mb/s ethernet via hub        33 MB       1.0                   260 kB/s         1,920     4
