Channel: Mellanox Interconnect Community: Message List

Re: Poor infiniband performance on Vmware esxi 5.1

A few ideas on where to look: 1. Most likely you do not have 4K MTU set on the IB fabric itself. You need to make sure opensm is configured for 4K MTU; it is likely still at the default of 2044. If you just have one...
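
A minimal sketch of that setting, assuming opensm reads /etc/opensm/partitions.conf (the path varies by distribution); in the partition definition, mtu=5 selects 4K (4 would be 2K):

    # /etc/opensm/partitions.conf -- example only; mtu=5 means a 4K IB MTU
    Default=0x7fff, ipoib, mtu=5 : ALL=full;
    # restart opensm afterwards so the fabric is re-swept with the new MTU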


Re: Voltaire 400EX Firmware

It is a very old DDR (or SDR?) card; the last firmware, ver. 1.2.000, was released back in 2006. And here are the release notes for fw 1.2.000 for this card:...



56 GbE over a ConnectX-3 VPI adapter

Hello, I have a pair of ConnectX-3 VPI adapters (MCX354A-FBCT) in Ethernet mode and am attempting to test out 56 GbE connectivity by directly connecting the adapters to one another via an FDR cable...


Re: Poor infiniband performance on Vmware esxi 5.1

1. I do have a partitions.conf with the following content: Default=0x7fff,ipoib,mtu=5:ALL=full; 2. According to the specifications, the HP DDR 4x IB Switch Module supports 4K MTU, but it's not really...
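
If it helps, one way to check what MTU actually became active on the hosts once the SM is reconfigured (assuming the standard OFED/libibverbs tools are installed):

    ibv_devinfo | grep -i mtu    # compare max_mtu and active_mtu on each port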


Re: 56 GbE over a ConnectX-3 VPI adapter

Hi Thomas, In order to link at 56 GbE in a back-to-back setup, you need to change the HCA ini file and re-burn the firmware. 1. Download the .mlx file (fw code file) and the ini file (configuration file...
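
Roughly, the re-burn step looks like the following with the MFT tools; this is only a sketch, and the .mlx/.ini file names and MST device below are placeholders for whatever you downloaded and whatever mst status reports:

    mst start                                   # expose the MST device nodes
    mlxburn -dev /dev/mst/mt4099_pci_cr0 -fw fw-ConnectX3-rel.mlx -conf MCX354A-FBCT_modified.ini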



Server 2012r2 ConnectX2 OpenSM

I have an MNPH29D in three machines, two of which are running Server 2012 R2. I tried to create the OpenSM service using the PowerShell command: New-Service –Name "OpenSM" –BinaryPathName "`"C:\Program...
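
For what it's worth, the part that usually breaks with New-Service is the embedded quoting around a path containing spaces; a sketch of the escaped form (the opensm.exe path below is the typical WinOF install location and may differ on your machine):

    # escaped inner quotes (`") keep the spaced path intact inside -BinaryPathName
    New-Service -Name "OpenSM" `
        -BinaryPathName "`"C:\Program Files\Mellanox\MLNX_VPI\IB\Tools\opensm.exe`"" `
        -DisplayName "OpenSM" -StartupType Automatic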


Re: 56 GbE over a ConnectX-3 VPI adapter

eddie.notz, Great, thanks for the quick and detailed response! I have created the new firmware, but will wait until the end of the day, when the servers are not in use, to attempt burning it. Regards, Thomas


RHEL7 and iser using targetcli

All of the guides that I'm finding for Red Hat's iSER are based on tgtd, but RHEL 7 now defaults to targetcli. For example, the document "HowTo Configure iSER Transport (iSCSI over RDMA)" (which is:...


ConnectX-3 Pro VXLAN Performance Overhead

Hi, I'm testing out ConnectX-3 Pro with VXLAN offload in our lab. Using a single-stream iperf performance test, we get ~34 Gbit/s for non-VXLAN transport, but only ~28 Gbit/s with VXLAN...
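
As a quick sanity check on whether the NIC-side VXLAN offload is actually in use (a sketch; eth2 is just an example interface name):

    ethtool -k eth2 | grep udp_tnl    # tx-udp_tnl-segmentation should report "on"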



Re: Hyper-V vSwitch speed problems

OK, so just an update with continuing problems. I tried using the CX-2 cards in Ethernet mode and am still seeing appalling speeds. I also tried the on-board 1 GbE NICs, which were equally slow....


Re: 56 GbE over a ConnectX-3 VPI adapter

eddie.notz, I flashed the cards today and they did establish link at 56 Gbps. They defaulted to IB, but changing the port type to eth worked. I am now seeing ~50 Gbps using MPI over RoCE. Thanks again...
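
For anyone following along, switching a ConnectX-3 port type without rebooting can be done through the mlx4 sysfs hook; a sketch, where the PCI address is just an example taken from lspci:

    echo eth > /sys/bus/pci/devices/0000:05:00.0/mlx4_port1    # set port 1 to Ethernet
    echo eth > /sys/bus/pci/devices/0000:05:00.0/mlx4_port2    # and port 2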


Re: Server 2012r2 ConnectX2 OpenSM

Hi, the MNPH29D is an Ethernet-only ConnectX-2 NIC (SFP+ interface); it doesn't support InfiniBand, and that's why OpenSM won't start.


Re: Lossless Ethernet for RDMA over Converged Ethernet (RoCE)

Hi Thomas, When the adapter needs more data, it fetches it from the TX buffer, but if the port is in a paused state, the fetch won't happen when no memory is available in the TX buffer. Another issue, note...
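
That is essentially why RoCE deployments enable PFC on the lossless priority instead of global pause; a sketch with the mlnx_qos utility from MLNX_OFED (the interface name and priority 3 are only examples):

    mlnx_qos -i eth2 --pfc 0,0,0,1,0,0,0,0    # pause frames only for priority 3 traffic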



When can we expect the OpenStack Cinder iSER patch for Grizzly

mellanox-openstack/cinder-iser · GitHub


Re: ConnectX-3 Pro VXLAN Performance Overhead

Hi Thorvald, Did you run this test VM to VM or within the hypervisor? I assume VM to VM. Is this only one flow (one VM) or more (several VMs on the same host)? What is the CPU that you are using? Number...



Re: ConnectX-3 Pro VXLAN Performance Overhead

About PlumGrid: PlumGrid and Mellanox published a new white paper about creating a better network infrastructure for a large-scale OpenStack cloud by using Mellanox’s ConnectX-3 Pro VXLAN HW...


Re: RHEL7 and iser using targetcli

The tgtd daemon, aka scsi-target-utils, is still available via EPEL for RHEL 7. For a LIO/targetcli how-to, please take a look at this wiki: http://linux-iscsi.org/wiki/Main_Page For iSER: Linux SCSI...
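
For the targetcli/LIO path specifically, iSER is toggled per network portal; a minimal sketch, where the IQN and portal address are placeholders:

    targetcli /iscsi/iqn.2014-06.com.example:target1/tpg1/portals/0.0.0.0:3260 enable_iser boolean=true
    targetcli saveconfig    # persist the configuration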



Mellanox QSFP to SFP+ adaptor

I tried testing the Mellanox QSFP to SFP+ adaptor with the SFP side of a QSFP to 4xSFP+ copper cable. The result looks like a corrupted/dead EEPROM: the cable is no longer recognized anywhere. Has...
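
One low-level check that may help here is dumping what the port reads from the module EEPROM (a sketch; eth2 is just an example interface, and the driver must support module EEPROM reads):

    ethtool -m eth2    # dump the SFP/QSFP module EEPROM; an I/O error suggests the EEPROM is unreadable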


Re: Mellanox QSFP to SFP+ adaptor

Do you use the QSA (QSFP -> SFP) adapter? Do you use a breakout cable (4x SFP -> QSFP)? I'm not sure I understand the problem. Did you try replacing the cable/adapter to see? Ophir.


Re: Mellanox QSFP to SFP+ adaptor

Could you send the part-numbers of the cables/adapters?
