- Developer(s): Universidad Politécnica de Valencia
- Stable release: 16.11 / November 12, 2016
- License: Proprietary (free for academic use)
rCUDA, which stands for Remote CUDA, is a middleware framework for remote GPU virtualization. Fully compatible with the CUDA application programming interface (API), it allows one or more CUDA-enabled GPUs to be allocated to a single application. Each GPU can belong to another node of a cluster or run inside a virtual machine. The approach aims to improve utilization in GPU clusters that are not fully used: GPU virtualization reduces the number of GPUs needed in a cluster, which in turn lowers acquisition, energy, and maintenance costs.
The recommended distributed acceleration architecture is a high-performance computing cluster with GPUs attached to only a few of the cluster nodes. When a node without a local GPU runs an application that needs GPU resources, the kernel is executed remotely, with data and code transferred between local system memory and remote GPU memory. rCUDA is designed around this client-server architecture: clients link against a library of wrappers for the high-level CUDA Runtime API, while each server runs a network service that listens for requests on a TCP port. Several nodes running different GPU-accelerated applications can concurrently use the whole set of accelerators installed in the cluster. The client forwards each request to one of the servers, which executes it on a GPU installed in that machine. Time-multiplexing the GPU, or in other words sharing it, is accomplished by spawning a separate server process for each remote GPU execution request.
The rCUDA framework enables the concurrent remote use of CUDA-compatible devices.
rCUDA employs the socket API for communication between clients and servers. Thus, it can be useful in three different environments:
- Clusters. To reduce the number of GPUs installed in high-performance clusters, saving energy as well as acquisition, maintenance, space, and cooling costs.
- Academia. To offer many students concurrent access to a few high-performance GPUs over commodity networks.
- Virtual machines. To enable access to the CUDA facilities of the physical machine from within a guest.
The current version of rCUDA (v16.11) supports CUDA version 8.0, excluding graphics interoperability. rCUDA 16.11 targets the Linux OS (for 64-bit architectures) on both client and server sides.
CUDA applications do not need any change in their source code in order to be executed with rCUDA.
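Because the source code stays unchanged, pointing an application at a remote GPU is a matter of run-time configuration. The sketch below assumes the environment-variable names described in the rCUDA user guide (`RCUDA_DEVICE_COUNT`, `RCUDA_DEVICE_0`); the hostname and binary name are hypothetical, and the exact variable names may differ between rCUDA versions.

```shell
# Illustrative launch configuration; variable names follow the rCUDA
# user guide but may differ between versions.
export RCUDA_DEVICE_COUNT=1                  # number of remote GPUs to expose
export RCUDA_DEVICE_0=gpuserver.example.com  # node holding GPU 0 (hypothetical host)

# The unmodified CUDA application, linked against the rCUDA wrapper
# library instead of the CUDA runtime, now sees one remote GPU:
./my_cuda_app                                # hypothetical binary
```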