
RDMA: A Deep Dive into Remote Direct Memory Access

  • November 7, 2024
  • 18 min read
StarWind Virtual SAN Product Manager. Dmytro is an expert in virtualization and storage systems, specializing in their design, deployment, and optimization.

High latency and CPU bottlenecks are key challenges for businesses dealing with large data sets and complex processes. So, how can we solve this problem? Remote Direct Memory Access (RDMA) is a technology that enables data transfer without involving the CPU, significantly reducing latency and CPU usage. But what exactly is RDMA, and how does it work?

What Is RDMA (Remote Direct Memory Access)?

RDMA is a technology that enables direct data transfers between the memory of two devices, bypassing the CPU, cache, and operating system. This direct memory-to-memory communication accelerates data transfer rates, reduces latency, and decreases CPU workload, enhancing overall system performance — particularly in high-speed networking environments.

Initially developed in the 1990s, RDMA was designed to address the inefficiencies of traditional data transfer by allowing direct access to memory across multiple nodes in a network. Key developments included the introduction of InfiniBand, a high-throughput, low-latency networking standard that popularized RDMA, and its subsequent extension to Ethernet fabrics via RoCE (RDMA over Converged Ethernet) and iWARP (Internet Wide Area RDMA Protocol). Over time, RDMA has become a critical component in various applications, including data centers, cloud computing, and storage solutions, due to its ability to enhance performance and reduce resource utilization.

How RDMA Works

For RDMA to operate, a transport protocol such as InfiniBand or RoCE must be implemented in the network interface controller (NIC) on each communicating device. Through a process called zero-copy networking, these NICs can read data from and write data to the memory of a remote machine without involving the remote machine's CPU. This not only reduces latency and CPU load but also significantly improves data throughput.
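To make this concrete, below is a minimal illustrative sketch (not taken from any StarWind code) of a one-sided RDMA write using the Linux libibverbs API. It assumes the queue pair `qp` is already connected and that the peer's buffer address and remote key (`rkey`) were exchanged out of band, for example over a TCP side channel:

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>

/* Post a one-sided RDMA write: the local NIC places `len` bytes from
 * `buf` directly into the remote host's registered buffer, with no
 * involvement from the remote CPU. */
int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp, struct ibv_cq *cq,
                       void *buf, size_t len,
                       uint64_t remote_addr, uint32_t rkey)
{
    /* Register the local buffer so the NIC can DMA from it. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr = {0}, *bad_wr = NULL;
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided operation */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;        /* peer's buffer address */
    wr.wr.rdma.rkey        = rkey;               /* peer's remote key */

    if (ibv_post_send(qp, &wr, &bad_wr)) {
        ibv_dereg_mr(mr);
        return -1;
    }

    /* Poll the completion queue from user space; no kernel transition. */
    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;  /* spin until the write completes */

    ibv_dereg_mr(mr);
    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}
```

Because the write is one-sided, the remote host's CPU never sees the transfer; the data simply appears in its registered memory region.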

Benefits of RDMA

RDMA offers several significant advantages that make it a preferred technology for high-performance data transfer in networking environments. Here are the key benefits:

  • Low Latency: RDMA enables direct memory-to-memory communication, bypassing the traditional network protocol stack and reducing the need for CPU intervention. This results in significantly lower latency, making RDMA ideal for applications requiring real-time data processing, such as financial trading and high-performance computing.
  • High Throughput: By allowing data to be transferred directly between memory regions, RDMA achieves much higher throughput compared to conventional data transfer methods. This is especially beneficial in environments where large volumes of data need to be moved quickly, such as in data centers and cloud computing.
  • Reduced CPU Usage: Since RDMA offloads data transfer tasks to RDMA-capable NICs, it minimizes CPU involvement in the process. This frees up CPU resources for other critical tasks, enhancing overall system performance and efficiency, particularly in workloads that are heavily dependent on data movement.
  • Scalability: RDMA supports efficient communication across many nodes without a significant increase in latency or CPU usage. This makes it highly scalable and suitable for growing data centers and large-scale distributed computing environments.
  • Improved Application Performance: Applications that rely on frequent data exchanges between servers, such as databases, storage solutions, and virtualization platforms, benefit from RDMA’s ability to reduce the time and resources required for data transfer, resulting in enhanced performance and responsiveness.
  • Energy Efficiency: By reducing CPU workload and optimizing data transfer processes, RDMA can contribute to lower power consumption in data centers, making it a more energy-efficient choice for high-performance networking.

RDMA Use Cases

RDMA’s ability to deliver low latency, high throughput, and reduced CPU usage makes it particularly advantageous in several high-demand scenarios. Here are some key use cases where RDMA excels:

  • High-Performance Computing (HPC): In HPC environments, RDMA is widely used to enhance the performance of compute clusters by enabling fast and efficient data transfers between nodes. This is critical in applications like scientific simulations, weather modeling, and computational fluid dynamics, where large datasets need to be processed and exchanged rapidly.
  • Financial Services: RDMA is highly valuable in financial services, especially in applications that require ultra-low latency, such as high-frequency trading (HFT). In these environments, even a few microseconds can make a significant difference in transaction outcomes.
  • Data Centers: In data centers, RDMA enhances the performance of storage networks, hyper-converged infrastructures, and cloud computing platforms. By integrating RDMA over Converged Ethernet (RoCE) or InfiniBand, data centers can achieve faster data transfers and reduce the load on CPUs, which is particularly beneficial in environments with high data throughput requirements.
  • Machine Learning and AI Workloads: RDMA is increasingly used in machine learning and AI environments, where training large-scale models requires fast access to distributed datasets. RDMA minimizes data transfer times during distributed training, accelerating results and enabling quicker iteration.
  • Virtualization and Cloud Services: RDMA plays a crucial role in virtualized environments and cloud services by improving the performance of virtual machine (VM) migrations and enhancing storage access speed. This helps reduce downtime and improve service delivery, making RDMA a key technology for cloud providers aiming to offer high-performance services to their customers.
  • Storage Solutions: RDMA is utilized in storage solutions to speed up access to remote storage devices, such as NVMe over Fabrics (NVMe-oF). By using RDMA, these storage networks can significantly reduce latency and improve data access speeds, which is essential for applications requiring quick data retrieval, such as real-time analytics and large-scale database management.

RDMA vs. Traditional Network Communication

RDMA stands out compared to traditional network communication methods due to its significant performance enhancements, reduced latency, and improved efficiency. Now, let’s look at a detailed comparison highlighting the key differences:

  • Performance: Traditional network communication relies heavily on the CPU and operating system to handle data transfers. This involves multiple steps such as protocol processing, packet assembly/disassembly, and error checking, which can introduce delays and consume substantial CPU resources. In contrast, RDMA offloads these tasks to RDMA-capable NICs, allowing data to move directly between the memory of different machines. This bypass of the CPU and OS reduces overhead, leading to much higher data throughput and overall system performance.
  • Latency Reduction: One of the most critical advantages of RDMA is its ability to significantly reduce latency. Traditional methods, such as TCP/IP, require multiple context switches between user space and kernel space, along with extensive protocol processing, which adds latency to each data transfer. RDMA eliminates these steps by providing direct memory access, bringing end-to-end latencies down to a few microseconds. This makes RDMA especially suitable for latency-sensitive applications like high-frequency trading, real-time analytics, and HPC.
  • CPU Efficiency: In traditional networking, the CPU must manage data transfers, including copying data between application memory and network buffers (the sketch after this list contrasts the two paths). This not only consumes CPU cycles but also limits the amount of data that can be handled concurrently. RDMA, however, minimizes CPU involvement by allowing the NIC to manage the data transfer directly, freeing the CPU to perform other tasks. This increased efficiency is particularly beneficial in data centers and cloud environments where maximizing CPU availability is crucial for handling numerous concurrent workloads.
  • Protocol Overhead: Traditional network communications involve significant protocol overhead, which includes error correction, congestion control, and packet sequencing. RDMA, on the other hand, reduces protocol overhead by directly managing data placement with hardware-level control, ensuring reliable transmission with minimal additional processing.
  • Scalability and Flexibility: RDMA’s streamlined communication model allows it to scale more effectively than traditional methods, which can become bogged down by protocol overhead and CPU limitations as the number of nodes increases. RDMA supports large-scale, distributed environments without a corresponding increase in latency or CPU load, making it ideal for growing data centers and scalable cloud architectures.
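To make the contrast concrete, here is a brief illustrative sketch (not from the original article) of the two send paths in C. The TCP call crosses into the kernel and copies the payload into socket buffers, while the RDMA post is a user-space queue operation that the NIC services directly:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <infiniband/verbs.h>

/* Traditional TCP path: a system call per send, plus a copy from the
 * application buffer into kernel socket buffers. The receiver's CPU
 * must likewise call recv() to copy the data back out. */
ssize_t send_tcp(int fd, const void *buf, size_t len)
{
    return send(fd, buf, len, 0);            /* kernel transition + data copy */
}

/* RDMA path: the work request goes onto a send queue mapped into user
 * space, and the NIC DMAs the registered buffer directly to the peer.
 * No system call, no intermediate copy, no remote CPU involvement. */
int send_rdma(struct ibv_qp *qp, struct ibv_send_wr *wr)
{
    struct ibv_send_wr *bad_wr;
    return ibv_post_send(qp, wr, &bad_wr);   /* user-space doorbell */
}
```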

RDMA Protocols and Standards

RDMA is implemented through various protocols and standards. Here’s an overview of the key RDMA protocols and their significance:

InfiniBand

InfiniBand is one of the most widely used RDMA protocols, particularly in high-performance computing (HPC) environments. It is a high-speed, low-latency networking standard designed to provide some of the fastest and most efficient data transfer capabilities available. InfiniBand natively supports server virtualization, overlay networks, and software-defined networking, making it ideal for applications requiring rapid data movement between compute nodes, such as scientific simulations, big data analytics, and AI workloads.

RDMA over Converged Ethernet (RoCE)

RoCE is an RDMA protocol that allows RDMA's capabilities to be used over Ethernet networks, providing a cost-effective and flexible solution for data centers. RoCE comes in two versions: RoCE v1, which operates at Layer 2 (link layer) and is therefore confined to a single Ethernet broadcast domain, and RoCE v2, which functions at Layer 3 (network layer) by encapsulating the RDMA transport in UDP/IP, making it routable and more suitable for larger and more complex network topologies. RoCE brings the benefits of RDMA to Ethernet-based infrastructures, which are common in modern data centers.

iWARP

iWARP is an RDMA protocol designed to work over standard TCP/IP Ethernet networks. It encapsulates RDMA traffic within TCP/IP, enabling direct memory access without requiring a specialized network fabric or extensive modifications to existing infrastructure. This approach makes iWARP a practical choice for environments that need RDMA's benefits but also rely on conventional Ethernet networking. It delivers RDMA services over TCP/IP through a small stack of protocols, including the Direct Data Placement Protocol (DDP). As a result, iWARP can be seamlessly integrated into existing Ethernet infrastructures, making it easier to deploy in traditional data centers and enterprise environments.
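Conveniently, all three transports expose the same verbs interface, so applications are typically written once against librdmacm/libibverbs and run over whichever fabric is present. Below is a minimal client-side connection sketch using librdmacm (illustrative only; the queue depths and port-space choice are placeholder assumptions):

```c
#include <rdma/rdma_cma.h>
#include <string.h>

/* Resolve a peer and connect a reliable RDMA queue pair via librdmacm.
 * The same code runs unchanged over InfiniBand, RoCE, or iWARP; the
 * library selects the transport from the address and local device. */
struct rdma_cm_id *connect_rdma(const char *ip, const char *port)
{
    struct rdma_addrinfo hints, *res = NULL;
    memset(&hints, 0, sizeof(hints));
    hints.ai_port_space = RDMA_PS_TCP;        /* reliable, connected service */

    if (rdma_getaddrinfo(ip, port, &hints, &res))
        return NULL;

    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 16;    /* queue depths */
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    attr.qp_type = IBV_QPT_RC;

    struct rdma_cm_id *id = NULL;
    /* Creates the connection identifier and its QP in one step. */
    if (rdma_create_ep(&id, res, NULL, &attr)) {
        rdma_freeaddrinfo(res);
        return NULL;
    }
    rdma_freeaddrinfo(res);

    if (rdma_connect(id, NULL)) {             /* CM connection handshake */
        rdma_destroy_ep(id);
        return NULL;
    }
    return id;                                /* ready for ibv_post_send() */
}
```

Here rdma_create_ep builds both the connection identifier and its queue pair from the resolved address, so the application never needs to know whether InfiniBand, RoCE, or iWARP is underneath.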

What Does StarWind Have to Offer?

StarWind is at the forefront of integrating RDMA, providing exceptional performance and efficiency for data storage and management. By utilizing RDMA, StarWind Virtual SAN (VSAN) offloads data transfer tasks from the host CPU, significantly reducing latency and freeing up CPU resources for other critical operations. So, how does it do that?

StarWind VSAN supports iSER (iSCSI Extensions for RDMA) for its replication network, which extends the iSCSI protocol with RDMA capabilities. This allows for faster and more efficient synchronous replication across the network by bypassing traditional TCP/IP bottlenecks, reducing CPU load, and minimizing latency.

StarWind VSAN also features NVMe-oF for its data and replication networks, enabling the use of RDMA to unlock the full potential of NVMe storage devices. NVMe-oF enhances data transfer speeds by allowing direct communication between the host and NVMe storage, which bypasses the traditional storage stack. This results in significantly improved performance, especially in high-throughput environments. In the next release, StarWind plans to extend NVMe-oF support to multi-node high-availability configurations, providing even greater flexibility and resilience for critical data operations.

Conclusion

Remote Direct Memory Access significantly enhances modern networking by enabling direct memory-to-memory communication, bypassing the CPU and OS to reduce latency, increase throughput, and lower CPU usage. Its protocols, including InfiniBand, RoCE, and iWARP, provide versatile solutions for various environments, from HPC to standard Ethernet networks.

StarWind leverages RDMA's benefits with solutions like StarWind Virtual SAN, which utilizes iSER (iSCSI over RDMA) and NVMe-oF to deliver superior performance and efficiency in storage and networking. As a result, RDMA is a crucial technology for organizations seeking faster, more efficient data transfers in today's high-performance environments.

Found Dmytro’s article helpful? Looking for a reliable, high-performance, and cost-effective shared storage solution for your production cluster?
We’ve got you covered! StarWind Virtual SAN (VSAN) is specifically designed to provide highly available shared storage for Hyper-V, vSphere, and KVM clusters. With StarWind VSAN, simplicity is key: utilize the local disks of your hypervisor hosts and create shared HA storage for your VMs. Interested in learning more? Book a short StarWind VSAN demo now and see it in action!