Product Information
Mellanox ConnectX-4 Single Port, EDR, VPI QSFP28 Low Profile Adapter, Customer Install.
- 100 Gbps Virtual Protocol Interconnect (VPI) adapter
ConnectX-4 offers the highest throughput of any VPI adapter, supporting EDR 100 Gbps InfiniBand and 100 Gbps Ethernet and enabling any standard networking, clustering, or storage protocol to operate seamlessly over any converged network leveraging a consolidated software stack.
- Coherent Accelerator Processor Interface (CAPI)
ConnectX-4 with CAPI enabled provides the ultimate performance for POWER and OpenPOWER-based platforms. Such platforms benefit from better interaction between the POWER CPU and the ConnectX-4 adapter, lower latency, higher efficiency of storage access, and better return on investment (ROI), as more applications and more virtual machines run on the platform.
- I/O virtualization
ConnectX-4 SR-IOV technology provides dedicated adapter resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 gives data center administrators better server utilization while reducing cost, power, and cable complexity, allowing more VMs and more tenants on the same hardware.
- Overlay networks
ConnectX-4 provides advanced NVGRE and VXLAN hardware offloading engines that encapsulate and decapsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic. With ConnectX-4, data center operators can achieve native performance in the new network architecture.
- HPC environments
ConnectX-4 delivers high bandwidth, low latency, and high computation efficiency for High Performance Computing (HPC) clusters. This benefits collective communication, a common HPC pattern in which all members of a group of processes participate and share data.
- RDMA and RoCE
ConnectX-4, utilizing IBTA RDMA (Remote Direct Memory Access) and RoCE (RDMA over Converged Ethernet) technology, delivers low-latency and high-performance over InfiniBand and Ethernet networks. Leveraging data center bridging (DCB) capabilities as well as ConnectX-4 advanced congestion control hardware mechanisms, RoCE provides efficient low-latency RDMA services over Layer 2 and Layer 3 networks.
- Mellanox PeerDirect
PeerDirect communication provides high-efficiency RDMA access by eliminating unnecessary internal data copies between components on the PCIe bus (for example, from GPU to CPU), and therefore significantly reduces application run time. ConnectX-4 advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.
- Storage acceleration
Storage applications will see improved performance with the higher bandwidth EDR delivers. Moreover, standard block and file access protocols can leverage RoCE and InfiniBand RDMA for high-performance storage access. A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks.
- Distributed RAID
ConnectX-4 delivers advanced Erasure Coding offloading capability, enabling distributed RAID (Redundant Array of Inexpensive Disks), a data storage technology that combines multiple disk drive components into a logical unit for the purposes of data redundancy and performance improvement.
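The idea behind erasure-coded distributed RAID can be sketched in a few lines of plain Python using the simplest code, single XOR parity (the RAID-5 case): the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors. This is an illustration of the concept only, with invented function names, not the adapter's hardware offload path:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(data_blocks):
    """Append one XOR parity block to the stripe (single-parity erasure code)."""
    return list(data_blocks) + [xor_blocks(data_blocks)]

def reconstruct(stripe, lost_index):
    """Rebuild any one lost block by XOR-ing all surviving blocks."""
    return xor_blocks([b for i, b in enumerate(stripe) if i != lost_index])

stripe = encode([b"AAAA", b"BBBB", b"CCCC"])  # three data blocks plus parity
assert reconstruct(stripe, 1) == b"BBBB"      # a lost data block is recovered
assert reconstruct(stripe, 3) == stripe[3]    # the parity block itself is recoverable
```

Production erasure codes (for example, Reed-Solomon) tolerate multiple simultaneous failures; the XOR case above is the single-failure special case, which is what makes offloading the heavier general codes to the adapter attractive.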