Remote direct memory access
In computing, remote direct memory access (RDMA) is a direct memory access from the memory of one computer into that of another without involving either one's operating system. This permits high-throughput, low-latency networking, which is especially useful in massively parallel computer clusters.
Overview
RDMA supports zero-copy networking by enabling the network adapter to transfer data from the wire directly to application memory, or from application memory directly to the wire, eliminating the need to copy data between application memory and the data buffers in the operating system. Such transfers require no work from the CPUs, no cache involvement, and no context switches, and they continue in parallel with other system operations. This reduces latency in message transfer.
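The data path can be made concrete with a minimal sketch against the Linux libibverbs API (the verbs interface used by InfiniBand, RoCE, and iWARP adapters). This is illustrative only: the function name is invented, the queue pair is assumed to be already created, connected, and supplied by the caller, and the peer's buffer address and rkey are assumed to have been exchanged out of band (for example, over a TCP socket).

```c
/* Sketch: posting a one-sided RDMA write with libibverbs.
 * Assumes `qp` is already connected and the remote address/rkey
 * were exchanged out of band. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

int post_rdma_write(struct ibv_pd *pd, struct ibv_qp *qp,
                    void *buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    /* Register (pin) the local buffer so the adapter can DMA it
     * directly, with no OS involvement on the data path. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided: no receive is posted on the target */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* request a local completion */
    wr.wr.rdma.remote_addr = remote_addr;        /* target virtual address */
    wr.wr.rdma.rkey        = rkey;               /* target's registration key */

    /* The adapter reads `buf` and writes it into the remote
     * application's memory; neither kernel copies the payload. */
    return ibv_post_send(qp, &wr, &bad_wr);
}
```

Note the zero-copy property in the sketch: after registration, the CPU's only role is posting the work request; the payload itself never passes through kernel buffers.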
However, this strategy presents a problem: because the communication is single-sided, the target node is not notified of the completion of the request, so applications must arrange their own completion signaling (one common approach is sketched below).
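One common workaround, sketched here for the target side under the same libibverbs assumptions as above, is for the initiator to use a "write with immediate" (IBV_WR_RDMA_WRITE_WITH_IMM). It behaves like a plain RDMA write but also consumes a posted receive on the target and generates a completion there carrying a 32-bit immediate value. The function name and the busy-poll loop are illustrative, not a prescribed pattern.

```c
/* Sketch: target-side notification for a write-with-immediate.
 * Post an empty receive, then poll for the completion that the
 * initiator's IBV_WR_RDMA_WRITE_WITH_IMM generates. */
#include <infiniband/verbs.h>
#include <arpa/inet.h>
#include <stdint.h>

int wait_for_write_notification(struct ibv_qp *qp, struct ibv_cq *cq,
                                uint32_t *imm_out)
{
    /* A zero-length receive: the payload lands at the RDMA target
     * address, so no scatter/gather entries are needed here. */
    struct ibv_recv_wr rwr = { .wr_id = 1 }, *bad = NULL;
    if (ibv_post_recv(qp, &rwr, &bad))
        return -1;

    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;                                   /* busy-poll; real code would block */
    if (n < 0 || wc.status != IBV_WC_SUCCESS)
        return -1;

    if (wc.wc_flags & IBV_WC_WITH_IMM)
        *imm_out = ntohl(wc.imm_data);      /* immediate value from the initiator */
    return 0;
}
```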
Acceptance
As of 2018, RDMA had achieved broader acceptance as a result of implementation enhancements that enable good performance over ordinary networking infrastructure.[1] For example, RDMA over Converged Ethernet (RoCE) can now run over either lossy or lossless infrastructure. In addition, iWARP implements RDMA over Ethernet using TCP/IP as the transport, combining the performance and latency advantages of RDMA with a low-cost, standards-based solution.[2] The RDMA Consortium and the DAT Collaborative[3] have played key roles in the development of RDMA protocols and APIs for consideration by standards groups such as the Internet Engineering Task Force and the Interconnect Software Consortium.[4]
Hardware vendors have started working on higher-capacity RDMA-based network adapters, with rates of 100 Gbit/s reported.[5][6] Software vendors, such as Red Hat and Oracle Corporation, support these APIs in their latest products,[7] and as of 2013 engineers have started developing network adapters that implement RDMA over Ethernet.[8] Both Red Hat Enterprise Linux and Red Hat Enterprise MRG[9] have support for RDMA. Microsoft supports RDMA in Windows Server 2012 via SMB Direct. VMware's ESXi product also supports RDMA as of 2015.
Common RDMA implementations include the Virtual Interface Architecture, RDMA over Converged Ethernet (RoCE), InfiniBand, Omni-Path and iWARP.
References
- ^ "RoCE Rocks over Lossy Network". ACM Digital Library. https://dl.acm.org/citation.cfm?id=3098588&dl=ACM&coll=DL
- ^ "Understanding iWARP" (PDF). Intel Corporation. Retrieved 16 May 2018.
- ^ "DAT Collaborative website". Archived from the original on 17 January 2015. Retrieved 14 October 2014.
- ^ "The Interconnect Software Consortium website". Archived from the original on 30 August 2005.
- ^ "Microsoft Based Solutions - Mellanox Technologies". Retrieved 14 October 2014.
- ^ "40Gbe SMB Direct RDMA Over Ethernet For Windows Server 2012 - Chelsio Communications". Retrieved 14 October 2014.
- ^ "What RDMA hardware is supported in Red Hat Enterprise Linux?".
- ^ "40Gbe SMB Direct RDMA Over Ethernet For Windows Server 2012 - Chelsio Communications". Chelsio Communications. 2013-04-02. Retrieved 2016-07-15. "The demonstration will show Microsoft's Windows Server 2012 SMB Direct running at line-rate 40Gb using RDMA over Ethernet (iWARP)."
- ^ "Red Hat Enterprise MRG 2.0 Now Available". Archived from the original on 25 August 2016. Retrieved 23 June 2011.
External links
- RDMA Consortium
- RFC 5040: A Remote Direct Memory Access Protocol Specification
- A Tutorial of the RDMA Model
- "Why Compromise?" // HPCwire, Gilad Shainer (Mellanox Technologies), 2006
- A Critique of RDMA for high-performance computing