As a result of a co-development effort between NVIDIA and Mellanox Technologies, Mellanox provides support for GPUDirect technology, which eliminates CPU bandwidth and latency bottlenecks by using direct memory access (DMA) between GPUs and Mellanox HCAs, significantly improving the performance of RDMA-based applications such as MPI.
GPUDirect History:
The GPUDirect project - announced Nov 2009
- “NVIDIA Tesla GPUs To Communicate Faster Over Mellanox InfiniBand Networks”, http://www.nvidia.com/object/io_1258539409179.html
- GPUDirect - developed by Mellanox and NVIDIA
- New interface (API) within the Tesla GPU driver
- New interface within the Mellanox InfiniBand drivers
- Linux kernel modification to allow direct communication between drivers
GPUDirect 1.0 - announced Q2’10
- Accelerated Communication With Network And Storage Devices
- Avoid unnecessary system memory copies and CPU overhead by copying data directly to/from pinned CUDA host memory
- “Mellanox Scalable HPC Solutions with NVIDIA GPUDirect Technology Enhance GPU-Based HPC Performance and Efficiency”
- In its first stages it was available as a separate Mellanox OFED GPUDirect package; with recent Linux kernels, the regular MLNX_OFED package is sufficient (a minimal sketch of the pinned-memory pattern follows).
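For illustration only (not part of the original announcement), the sketch below shows the pattern GPUDirect 1.0 enables: one page-locked host buffer is allocated through CUDA with cudaHostAlloc() and then registered with the InfiniBand stack via ibv_reg_mr(), so both the GPU driver and the HCA can DMA to the same memory without an extra system-memory copy. Error handling is minimal and the verbs connection setup is omitted.

/* Sketch: share one pinned host buffer between CUDA and the IB stack
 * (the GPUDirect 1.0 pattern).
 * Compile with: gcc -o pinned pinned.c -lcudart -libverbs
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

#define BUF_SIZE (4 * 1024 * 1024)

int main(void)
{
    void *host_buf = NULL;

    /* Pinned (page-locked) host memory allocated through CUDA; the GPU
     * driver can DMA to/from it directly. */
    if (cudaHostAlloc(&host_buf, BUF_SIZE, cudaHostAllocDefault) != cudaSuccess) {
        fprintf(stderr, "cudaHostAlloc failed\n");
        return 1;
    }

    /* Open the first available HCA and allocate a protection domain. */
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "no IB devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(dev_list[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register the same pinned buffer with the HCA, so it can also DMA
     * to/from it -- no staging copy through pageable system memory. */
    struct ibv_mr *mr = ibv_reg_mr(pd, host_buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "ibv_reg_mr failed\n");
        return 1;
    }
    printf("pinned buffer %p registered: lkey=0x%x rkey=0x%x\n",
           host_buf, mr->lkey, mr->rkey);

    /* ... post send/recv work requests using mr->lkey, and move data
     * between this buffer and the GPU with cudaMemcpyAsync() ... */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(dev_list);
    cudaFreeHost(host_buf);
    return 0;
}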
GPUDirect RDMA - Today
- Supported with ConnectX-3 HCAs and Tesla K10/K20 GPUs
- Alpha - available today for selected customers; please contact support@mellanox.com for details.
- GA - please contact support@mellanox.com for the schedule.
GPUDirect RDMA:
Allows the HCA to read and write GPU memory directly (zero-copy), completely bypassing host memory. This feature requires CUDA 5.0 or later, as well as an MLNX_OFED package with the suitable hooks. A sketch of what this looks like at the verbs level follows.
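As an illustration only, and assuming an MLNX_OFED build that includes the GPUDirect RDMA hooks, the sketch below registers a buffer that lives in GPU memory directly with the HCA; on a stack without the hooks, the ibv_reg_mr() call simply fails. Connection setup and data transfer are omitted.

/* Sketch: GPUDirect RDMA registration -- the HCA DMAs straight to/from
 * GPU memory, with no bounce buffer in host memory.
 * Assumes CUDA >= 5.0 and an MLNX_OFED stack with GPUDirect RDMA support.
 * Compile with: gcc -o gdr gdr.c -lcudart -libverbs
 */
#include <stdio.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

#define BUF_SIZE (1 * 1024 * 1024)

int main(void)
{
    /* Plain device memory -- no cudaHostAlloc, no host staging buffer. */
    void *gpu_buf = NULL;
    if (cudaMalloc(&gpu_buf, BUF_SIZE) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    struct ibv_context *ctx = (devs && n > 0) ? ibv_open_device(devs[0]) : NULL;
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!pd) {
        fprintf(stderr, "could not open HCA / allocate PD\n");
        return 1;
    }

    /* With GPUDirect RDMA, the device pointer can be registered like any
     * other buffer; the driver hooks pin the GPU pages for the HCA.
     * Without the hooks, this call fails. */
    struct ibv_mr *mr = ibv_reg_mr(pd, gpu_buf, BUF_SIZE,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "ibv_reg_mr on GPU memory failed -- "
                        "is GPUDirect RDMA support installed?\n");
        return 1;
    }
    printf("GPU buffer %p registered: lkey=0x%x rkey=0x%x\n",
           gpu_buf, mr->lkey, mr->rkey);

    /* ... scatter/gather entries in work requests can now point at
     * gpu_buf directly; the HCA reads/writes GPU memory over PCIe ... */

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cudaFree(gpu_buf);
    return 0;
}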
Frequently Asked Questions:
- Where can I get access to the CUDA GPUDirect Peer-to-Peer (P2P) API?
You must use CUDA 5.0 or later; check the NVIDIA developer guide for more details. A minimal P2P sketch follows.
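For illustration only, and assuming two Kepler-class GPUs that can reach each other over PCIe, this sketch uses the CUDA P2P API to check peer accessibility, enable peer access, and copy directly between the two GPUs' memories; the buffer size is arbitrary.

/* Sketch: direct GPU-to-GPU copy with the CUDA P2P API.
 * Assumes two GPUs on a P2P-capable PCIe path (see the topology question
 * later in this FAQ).  Compile with: nvcc -o p2p p2p.cu
 */
#include <stdio.h>
#include <cuda_runtime.h>

#define N (1 << 20)

int main(void)
{
    int dev_count = 0;
    cudaGetDeviceCount(&dev_count);
    if (dev_count < 2) {
        fprintf(stderr, "need at least two GPUs for P2P\n");
        return 1;
    }

    /* Can GPU 0 access GPU 1's memory directly? */
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    if (!can_access) {
        fprintf(stderr, "P2P not supported between devices 0 and 1\n");
        return 1;
    }

    /* Enable peer access from device 0 to device 1. */
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);

    /* Allocate a buffer on each GPU. */
    float *buf0 = NULL, *buf1 = NULL;
    cudaSetDevice(0);
    cudaMalloc(&buf0, N * sizeof(float));
    cudaSetDevice(1);
    cudaMalloc(&buf1, N * sizeof(float));

    /* Copy device 1 -> device 0 directly over PCIe, no host staging. */
    cudaMemcpyPeer(buf0, 0, buf1, 1, N * sizeof(float));
    cudaDeviceSynchronize();

    printf("peer copy of %zu bytes completed\n", (size_t)(N * sizeof(float)));

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}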
- What cards are supported for GPUDirect RDMA?
HCA: ConnectX family; GPU: Kepler class.
- What software components are GPUDirect-RDMA aware?
- OS: supported on Linux only; no changes required in the kernel.
- HCA Driver: you must use a compatible MLNX_OFED driver
- GPU Driver: use CUDA 5.0 or later
- RDMA Application:
- If you are using the RDMA verbs directly, then yes; the application must be aware of CUDA GPU allocations (for example, the MPI layer must be GPUDirect-RDMA-aware)
- However, if you are using RDMA indirectly (for example, an MPI application running on top of a GPUDirect-RDMA-aware MPI layer such as MVAPICH2), then there is no need for any changes in the application; see the sketch after this list
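As an illustration of the indirect case (assuming a CUDA-aware MVAPICH2 or similar MPI build; the buffer size and tag are arbitrary), the sketch below passes device pointers straight to standard MPI calls; whether GPUDirect RDMA is used is decided inside the MPI layer, not in the application.

/* Sketch: an ordinary MPI application running over a CUDA-aware MPI
 * (e.g. MVAPICH2 built with CUDA support).  The application simply hands
 * device pointers to MPI; no GPUDirect-specific code is needed here.
 * Compile with: mpicc -o gpu_pingpong gpu_pingpong.c -lcudart
 * Run with at least two ranks.
 */
#include <stdio.h>
#include <mpi.h>
#include <cuda_runtime.h>

#define COUNT (1 << 20)

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least two ranks\n");
        MPI_Finalize();
        return 1;
    }

    /* Each rank works on a buffer that lives in GPU memory. */
    float *d_buf = NULL;
    cudaMalloc(&d_buf, COUNT * sizeof(float));

    if (rank == 0) {
        /* Send the GPU buffer directly; no cudaMemcpy to host first. */
        MPI_Send(d_buf, COUNT, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, COUNT, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d floats straight into GPU memory\n", COUNT);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}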
- How should the GPU and HCA be placed relative to the CPU(s) in my system to enable GPUDirect RDMA?
- From the CUDA Toolkit documentation:
- We can distinguish between three situations:
- PCIe switches only
- single CPU/IOH
- CPU/IOH <-> QPI/HT <-> CPU/IOH
- The first situation, where there are only PCIe switches on the path between the HCA and the GPU, is optimal and yields the best performance. The second one, where a single CPU/IOH is involved, works, but yields worse performance. Finally, the third situation, where the path traverses a QPI/HT link, doesn't work reliably; a small way to check a GPU's PCIe placement is sketched below.
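As a hedged illustration (not from the original text), one way to see where each GPU sits on the PCIe fabric is to print its PCIe bus ID from CUDA and compare it with the HCA's PCIe address as reported by lspci; this sketch prints only the GPU side.

/* Sketch: print each GPU's PCIe bus ID so it can be compared with the
 * HCA's PCIe address (e.g. from `lspci | grep Mellanox`) to see whether
 * they hang off the same PCIe root / IOH.
 * Compile with: gcc -o gpu_busid gpu_busid.c -lcudart
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "no CUDA devices found\n");
        return 1;
    }

    for (int dev = 0; dev < count; ++dev) {
        char bus_id[32];
        struct cudaDeviceProp prop;

        cudaGetDeviceProperties(&prop, dev);
        /* Bus ID in "domain:bus:device.function" form, the same notation
         * that lspci uses. */
        cudaDeviceGetPCIBusId(bus_id, (int)sizeof(bus_id), dev);

        printf("GPU %d (%s): PCIe %s\n", dev, prop.name, bus_id);
    }
    return 0;
}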
- Is there any documentation for GPUDirect RDMA?
Check the NVIDIA CUDA Toolkit documentation.