Hi,
We have been searching for a high-performance solution for forwarding traffic from an existing Gigabit Ethernet network directly onto a PCI accelerator card (Xeon Phi) installed in a host system, bypassing the system CPU and memory (and, obviously, the host network stack). Since the Xeon Phi and its system software currently provide RDMA capabilities only for Mellanox HCA adaptors, we were considering solutions such as the ConnectX-3 Pro adaptors, which appear to be interoperable with Ethernet switches. As complete newcomers to the InfiniBand world, we have some basic questions to help us understand whether these adaptors can fit our needs:
1) To connect the HCA to the existing Ethernet switch, do we need some kind of Ethernet-to-InfiniBand bridging or gateway in between, or is this particular adaptor supposed to connect directly to the switch?
2) What exactly does “Ethernet connectivity” mean? Does it mean that the HCA performs Ethernet-to-IB transformations as Ethernet packets come in, e.g. by encapsulating them so that they can then be pulled by some client (the Xeon Phi, in our case) using IB semantics? Or does it mean that the card simply acts as a regular Ethernet interface all the way up to the network stack? In the latter case, how could the RDMA capabilities be leveraged? (A rough sketch of what we had in mind is below.)
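To make question 2 a bit more concrete, here is a small sketch of how we imagined probing the adaptor from the host side, assuming the standard libibverbs API is even the right interface for this card (we are not sure it is); the idea of checking each port's link layer to tell Ethernet/RoCE apart from native IB is just our guess for illustration:

/* probe_rdma_eth.c -- minimal sketch; compile with: gcc probe_rdma_eth.c -libverbs
 * Lists the RDMA devices visible on the host and reports whether each
 * port runs over an Ethernet link layer (i.e. RoCE) or native InfiniBand.
 * Assumption: libibverbs is the correct API for the ConnectX-3 Pro here. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devs = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devs);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devs; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* Port numbers are 1-based in the verbs API. */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr))
                    continue;
                printf("%s port %u: link layer = %s\n",
                       ibv_get_device_name(devs[i]), port,
                       port_attr.link_layer == IBV_LINK_LAYER_ETHERNET
                           ? "Ethernet (RoCE?)" : "InfiniBand");
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}

If that assumption is wrong and the card really only shows up as a plain Ethernet NIC, we would like to understand where the RDMA path comes in.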
Thanks in advance,
V.