Channel: Mellanox Interconnect Community: Message List

Up to $200 off! SFPcables, the lower-cost shop for data centers


SFPcables.com is the official store of the 10Gtek company. It offers one-stop data center solutions and products, including compatible transceivers, DAC cables, AOC cables, fiber-optic patch cords, CWDM/DWDM passive/active systems, network adapters, and more. As part of our commitment to high-quality, reliable, and customized products, SFPcables.com develops and tests new products to meet the needs of the growing data center and high-performance computing markets. We guarantee that only premium materials and quality engineering go into the design and manufacture of SFPcables.com's fiber and copper products, and that the whole production process is under the strictest quality control.
In addition to our extensive product line, our customer service has earned us a reputation of trust that is unparalleled in the industry. SFPcables.com holds thousands of cables and transceivers in stock to ensure 24-hour shipping for all orders.

 

 

SFPcables.com invests heavily in technology. With a Compatibility Test Lab stocked with the latest equipment from major brands, SFPcables.com ensures precise programming for various brands of switches, servers, and routers.
We help clients navigate the complexities of their hardware architecture to guarantee compliance throughout the network.
Our service record is proven by the loyalty and support of some of the largest data centers in the world, including telecommunications companies, corporations, government agencies, and reputable distributors.

We have feedback from our customer support cases, as you can see in the picture.

Up to $200 off!

Take more, save more. SFPcables.com is the official shop of 10Gtek. We exhibit at trade shows all over the world, and all of our products are certified. 100% quality guaranteed.

 

 

Our special offers:

100G QSFP28 (EDR) DAC Cable

FDR 56GBASE-SR4 QSFP+ Transceiver

56Gb/s QSFP+ FDR DAC Cable


Re: How to configure host chaining for ConnectX-5 VPI


Just some due diligence here.

We put our ConnectX-5 cards into our three-host VMware 6.5 stack and could not get host_chaining to work. We ended up contacting support about it, and the reply we got wasn't optimistic.

"Host-chaining is currently not supported as it is not GA for ESXi."

 

So my previous post should be taken with a grain of salt, and I have marked it accordingly.

 

I have yet to see *any* documentation on host_chaining specifically, which is really sad since, as far as I know, my post above is the best available.
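
For anyone trying this on Linux rather than ESXi: the only knob I have found is the HOST_CHAINING_MODE firmware setting, flipped with mlxconfig. A minimal sketch, assuming the setting is exposed by your firmware (the MST device path is an example; find yours with mst status):

mst start
mlxconfig -d /dev/mst/mt4121_pciconf0 query | grep -i HOST_CHAINING
# 1 = basic host chaining; takes effect after a reboot/power cycle
mlxconfig -d /dev/mst/mt4121_pciconf0 set HOST_CHAINING_MODE=1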

Re: How to configure host chaining for ConnectX-5 VPI


You're welcome!

I'm glad I helped someone after all the headache I went through for it.

 

I have no hard experience with VMWare, and so take all of this with a grain of salt.

 

My first thought is VLAN tags; I was told that VMware tags by default.

 

From my (limited) understanding and thoughts, host chaining inside VMware is not a good idea.

If you set up a virtual switch (on the VMware side), put both ports of the card on that switch, and give the switch an IP, that would allow vMotion and similar traffic over the link at close to line speed, letting the switch (analogous to Open vSwitch) do all of the routing and fast-pathing.
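
A minimal sketch of that setup from the ESXi shell (the vmnic names and addresses here are assumptions):

# Create a standard vSwitch and attach both ConnectX-5 ports as uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch1
# Give the switch an IP via a port group and a VMkernel interface for vMotion
esxcli network vswitch standard portgroup add --portgroup-name=vmotion-pg --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vmotion-pg
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static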

 

Thoughts, if there were host chaining:

VMware still sees both ports (we can't assign IPs to raw port interfaces to start with).

It doesn't really know which port to send traffic out of, so it could take an extra hop before it reaches the destination.

With three nodes, traffic meant to go A -> B might take the path A -> C -> B.

 

Where I can speak from experience is non-chaining speed.

We did try Open vSwitch with the cards and chaining off. As long as the STP stuff is turned on, we got nearly line speed.
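
For reference, the Open vSwitch side of that test looked roughly like this (a sketch; the interface names are assumptions):

ovs-vsctl add-br br0
ovs-vsctl add-port br0 enp5s0f0
ovs-vsctl add-port br0 enp5s0f1
# The "STP stuff": enable spanning tree on the bridge
ovs-vsctl set bridge br0 stp_enable=true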

 

We opened a support ticket for our problems with MTU. It took a while, but we found the problem.

They have a nice little utility (sysinfo-snapshot) for dumping the card internals and OS configuration options, which helped us once we looked through its output.
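
If it helps anyone else, running it is simple (a sketch; the script ships with MLNX_OFED and writes its archive under /tmp):

python sysinfo-snapshot.py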

 

See my post below. Host_chaining is not supported on ESXi at this time.

How can I enable "packet pacing" on ConnectX-5?


How can I enable "packet pacing" on the ConnectX-5? With the command ibv_devinfo -v I get:

packet_pacing_caps:
        qp_rate_limit_max: 0kbps

I expected the wire rate of the card here.

Thanks Patrick

Re: Can't ibping Lid or GUID but can ping by ip


Thank you for responding quickly.

 

I am able to ibping the GID on the first device, but not on the second one:

 

SERVER:

-----------------------

# show_gids

 

DEV     PORT    INDEX   GID                                     IPv4            VER     DEV

---     ----    -----   ---                                     ------------    ---     ---

mlx5_0  1       0       fe80:0000:0000:0000:248a:0703:0014:f9ac                 v1

mlx5_1  1       0       fe80:0000:0000:0000:248a:0703:0014:f850                 v1

n_gids_found=2

 

 

CLIENT:

name@server:/etc/infiniband$ ibping --dgid fe80:0000:0000:0000:248a:0703:0014:f9ac 8

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.109 ms

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.095 ms

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.139 ms

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.174 ms

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.159 ms

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.190 ms

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.169 ms

Pong from centos-dgx1.brane.systems.(none) (Lid 8 Gid fe80::248a:703:14:f9ac): time 0.163 ms

^Z[6]   Killed                  ibping 8

[7]   Killed                  ibping -S

 

 

[8]+  Stopped                 ibping --dgid fe80:0000:0000:0000:248a:0703:0014:f9ac 8

name@server:/etc/infiniband$ ibping --dgid fe80:0000:0000:0000:248a:0703:0014:f850 8

ibwarn: [47999] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 8 Gid fe80::248a:703:14:f850)

ibwarn: [47999] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 8 Gid fe80::248a:703:14:f850)

ibwarn: [47999] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 8 Gid fe80::248a:703:14:f850)

ibwarn: [47999] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 8 Gid fe80::248a:703:14:f850)

ibwarn: [47999] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 8 Gid fe80::248a:703:14:f850)

ibwarn: [47999] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 8 Gid fe80::248a:703:14:f850)

ibwarn: [47999] mad_rpc_rmpp: _do_madrpc failed; dport (Lid 8 Gid fe80::248a:703:14:f850)

^Z

[9]+  Stopped                 ibping --dgid fe80:0000:0000:0000:248a:0703:0014:f850 8

 

 

How can I ibping the other GIDs?
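
I wonder whether I need an ibping responder bound to the second HCA; this is what I plan to try next (a sketch using the CA names from show_gids above):

# On the server: bind the ibping responder to the second HCA/port
ibping -S -C mlx5_1 -P 1
# On the client: target that HCA's GID
ibping -C mlx5_0 -P 1 --dgid fe80:0000:0000:0000:248a:0703:0014:f850 8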

 

 

 

Thanks

Brian

LLDP via SNMP Windows 2012 Server Ethernet NIC


I have a Windows Server 2012 R2 machine with one of your Mellanox NICs, and I want to be able to access LLDP information via SNMP. Do you supply a MIB for this purpose, or is there another way to get this information?

mlx5 IPoIB not working in connected mode


Hello,

I am trying to connect my hosts on my InfiniBand network using mlx5 cards in connected mode, but IPoIB is not working.

CONNECTED MODE is mandatory in my environment.

 

I disabled IPoIB enhanced mode:

 

options ib_ipoib ipoib_enhanced=0

 

In this way I configured ib0 as a connected-mode IPoIB interface:

 

ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65520 qdisc pfifo_fast state UP group default qlen 256

    link/infiniband 80:00:00:86:fe:80:00:00:00:00:00:00:50:6b:4b:03:00:42:e7:b4 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff

    inet 172.21.52.144/22 brd 172.21.55.255 scope global ib0
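
The mode itself was switched per interface through sysfs in the usual way (a sketch of what I ran, assuming ib0):

echo connected > /sys/class/net/ib0/mode
ip link set ib0 mtu 65520
cat /sys/class/net/ib0/mode    # reports "connected"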

 

 

While ibping works, ping itself does not, and I am not able to use the interfaces.

 

server 1: 172.21.52.144

server 2: 172.21.52.145

 

They cannot ping each other, although they can ibping each other.

 

My systems are RHEL 7.5 (3.10.0-862.11.6.el7.x86_64).

 

Here is the info for my mlx5 card.

How can I get connected mode working on these interfaces?

 

CA 'mlx5_0'

    CA type: MT4115

    Number of ports: 1

    Firmware version: 12.23.1020

    Hardware version: 0

    Node GUID: 0x506b4b030042e7b4

    System image GUID: 0x506b4b030042e7b4

    Port 1:

        State: Active

        Physical state: LinkUp

        Rate: 100

        Base lid: 3

        LMC: 0

        SM lid: 1

        Capability mask: 0x2659e848

        Port GUID: 0x506b4b030042e7b4

        Link layer: InfiniBand

 

thank you

Free offer: take 10 OM3-LC-LC-D cables.


If you need to purchase optical items, take a look. Factory outlet prices! 100% quality certification!

10Gtek.com

Plus, save up to $200!

 


Re: mlx5 IPoIB not working in connected mode


Hello Riccardo,

Thank you for posting your question on the Mellanox Community.

As you have also opened a support case with us regarding this issue, we will continue to update you through the support case.

Thanks and regards,
~Mellanox Technical Support

ConnectX-4 in IB mode in ESXi


Hi.
Can I run an InfiniBand adapter in IB mode (not Ethernet) under ESXi 6.x, so that a VM can work with the adapter?
I can't find any information.
Alternatively, which virtualization platform is better suited for this task?

Hardware support for PTP in ConnectX-5

Let me know which OS is stable for SB7800


Hello

 

We recently delivered an SB7800 to one of our customers.

 

The customer requested that the SB7800 be installed with the most stable OS version, and wants a recommendation from the vendor.

 

Please advise on the most stable OS released for the SB7800 to date.

 

Thank you.

Re: Let me know which OS is stable for SB7800


Hi Shin,

 

The latest Mellanox OS available for the SB7800 is 3.6.8010.

You can find the image on myMellanox.

 

Thanks,

Pratik Pande

When is an ACK generated in RDMA write?


Hi,

Recently I have been running some RDMA write latency tests with a ConnectX-4 Lx 25G NIC.

And I have two questions about the testing:

  1. What RDMA version is in use? Is the default RoCEv2? (See the sketch after this list.)
  2. By default, the RC (reliable) mode is chosen. As required, an ACK must travel from the remote side back to the local side; only then does the local HCA add an entry to the CQ, and the software knows the data has arrived at the remote end. I searched Google but couldn't find when a Mellanox NIC generates the ACK. Is it generated when the data is received by the HCA, or only after it has been DMA'd to host memory? And I assume the ACK is generated automatically by the NIC adapter without any SW involvement. Is my understanding right?
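
(The sketch mentioned in question 1: as far as I know, the per-GID RoCE version can be listed with the show_gids script from MLNX_OFED; the device and port arguments are assumptions.)

# The VER column reports the RoCE version (v1 or v2) for each GID index
show_gids mlx5_0 1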

 

Many thanks

Re: ASAP2 Live Migration & H/W LAG


Hi Zhang,

 

I'm looking for the best way to use ASAP2 with OpenStack when the NIC has two bonded ports.

 

Best regards,


How to enable the debuginfo for the libraries of OFED?


Hi,

I am debugging an issue and need function-call tracing, but with debugging tools such as perf no symbols can be found; the "nm" tool couldn't extract any symbols either.

 

I checked the mlnxofedinstall script, and it contains a list of debuginfo files for CentOS, but in the OFED package downloaded from the official website no such packages (e.g. for the mlx5 and ibverbs libraries) can be found or installed, even with "--add-kernel-support".

Several years ago someone asked the same question, but it went unanswered. Could an expert share the steps to rebuild the packages with debuginfo, to make debugging more convenient?
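
Roughly, what I have in mind is something like this (a sketch; I am assuming the MLNX_OFED source tarball layout, and that rpmbuild on CentOS emits -debuginfo subpackages by default):

# Rebuild a userspace library from the OFED source RPMs
rpmbuild --rebuild MLNX_OFED_SRC-*/SRPMS/libibverbs-*.src.rpm
ls ~/rpmbuild/RPMS/x86_64/ | grep debuginfo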

 

Many thanks

RDMA_CM_EVENT_ROUTE_ERROR


I am hosting an InfiniBand server on a Linux machine, and I have also created a client that connects to that service on the same machine. This works fine most of the time. But in one instance, when I was trying to connect to that server from the same client (with no prior connectivity to that server), it threw RDMA_CM_EVENT_ROUTE_ERROR and the connection couldn't be established.

I don't know the root cause of this error, and it is not 100% reproducible, which makes my application unreliable. I want to know the root cause.

Re: MSX1012B MSX6012F


Hi

 

To answer the below:

1. Will this optical QSFP+ module (40GbE) work on each switch ? If so, on what port ? Or any port will be OK ?

Ans.  It will work on all ports.

 

2. What does mean the '' WT '' letters on the MSX6012 model ?  (Wide Transceiver ??) If so, does that mean that i need to buy an another MSX6012F_WT ? Or the MSX1012B will also work fine ?

Ans.  The MSX6012 is by default an InfiniBand switch, and the MSX1012 is by default an Ethernet switch. If you have an Ethernet license (which can be purchased from Mellanox) installed on the MSX6012 switch, and its profile is set to Ethernet, the modules will work with both switches.

 

3. Actually, my MSX6012F is in VPI profil mode. Do i need to put it in single_ethernet mode ? (i don't do infiniband on the network for now, but it is envisaged).

Ans.  I would suggest changing it to single_ethernet mode; see the sketch below.
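
A sketch of that profile change from the MLNX-OS CLI (the exact profile keyword is an assumption and may differ by release, so list the options first; note that changing the profile resets the switch configuration and reboots the switch):

enable
configure terminal
# List the available profiles before committing to one
system profile ?
system profile eth-single-switch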

 

4. As our needs are evolving, and my two switches already have almost all their ports in use, I plan to buy two MSX6036F to replace the MSX1012B and MSX6012F. Will both MC2210511-LR4 modules work and, if so, on which ports?

Ans.  It will work on ports 1,3,33,&35.

Re: ConnectX-2 on Ubuntu 18.04


Hello-

 

In case someday anyone else would like to try, we went ahead and upgraded to Ubuntu 18 and OFED 4, and our old ConnectX-2 IB cards are still working fine with this setup, despite the release notes indicating support was dropped.

 

-Lewis

Re: Hardware support for PTP in ConnectX-5


ConnectX-5 supports PTP.

The Release Notes wording should be fixed to explain the limitation better: it refers to the "service type" in the QP, an advanced feature that does not affect working with PTP today.

 

Sorry for the confusion.

 

Erez.


