Singularity (software)

From Wikipedia, the free encyclopedia

Singularity running a hello world container from the command line.
Original author(s): Gregory Kurtzer (gmk), et al.
Developer: Gregory Kurtzer
Stable release: 3.7.3 / 7 April 2021[1]
Written in: C, Go[2]
Operating system: Linux
Type: Operating-system-level virtualization
License: 3-clause BSD License[3]

Singularity is a free, cross-platform, open-source computer program that performs operating-system-level virtualization, also known as containerization.[4]

One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world.[5]

Reproducibility requires the ability to move applications unchanged from system to system, which containers make possible.[6]

Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.[7]

Usage workflow for Singularity containers
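A typical build-and-run cycle pairs a small definition file with the standard `singularity` command-line tool. The following is an illustrative sketch (the file names and base image are hypothetical); it requires a Linux system with Singularity installed:

```shell
# Write a minimal Singularity definition file (names are illustrative).
cat > hello.def <<'EOF'
Bootstrap: docker
From: ubuntu:20.04

%runscript
    echo "Hello from inside the container"
EOF

# Build an immutable SIF image from the definition
# (building typically requires root privileges or --fakeroot).
sudo singularity build hello.sif hello.def

# Run the container's %runscript. The resulting hello.sif is a single
# file that can be copied to another Linux system and run there unchanged,
# which is what makes the workflow reproducible.
singularity run hello.sif
```

Because the entire environment lives in one image file, "moving the application" reduces to copying that file to the target system.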


Singularity began as an open-source project in 2015, when a team of researchers at Lawrence Berkeley National Laboratory, led by Gregory Kurtzer, developed the initial version and released it[8] under the BSD license.[9]

By the end of 2016, developers from several other research facilities had joined the Lawrence Berkeley National Laboratory team to further the development of Singularity.[10]

Singularity quickly attracted the attention of computing-heavy scientific institutions worldwide.[11]

For two years in a row, in 2016 and 2017, Singularity was recognized by HPCwire editors as "One of five new technologies to watch".[19][20] In 2017 Singularity also won first place in the category "Best HPC Programming Tool or Technology".[21]

As of 2018, based on data entered voluntarily in a public registry, Singularity's user base was estimated at more than 25,000 installations[22] and included users at academic institutions such as Ohio State University and Michigan State University, as well as top HPC centers like the Texas Advanced Computing Center, the San Diego Supercomputer Center, and Oak Ridge National Laboratory.


Singularity natively supports high-performance interconnects such as InfiniBand[23] and the Intel Omni-Path Architecture (OPA).[24]

As with InfiniBand and Intel OPA devices, Singularity can support any PCIe-attached device within the compute node, such as graphics accelerators.[25]
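For NVIDIA GPUs, for example, the `--nv` flag binds the host's driver and device files into the container; since version 3.5 the analogous `--rocm` flag exists for AMD GPUs. The image names below are illustrative, and the commands assume a node with the corresponding GPUs and drivers:

```shell
# Expose the host's NVIDIA driver and GPU devices inside the container,
# so a containerized application sees the node's GPUs.
singularity exec --nv tensorflow.sif nvidia-smi

# The equivalent for AMD GPUs (Singularity 3.5 and later).
singularity exec --rocm rocm-app.sif rocminfo
```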

Singularity also has native support for the Open MPI library, using a hybrid MPI container approach in which Open MPI exists both inside and outside the container.[26]
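In the hybrid model, the host's MPI launcher starts one container per rank, and the Open MPI runtime inside each container handles inter-rank communication. A minimal sketch, assuming a container whose application was built against a host-compatible Open MPI version (paths and names are illustrative):

```shell
# The host's mpirun launches 4 processes; each process starts the container,
# and the Open MPI library inside the container communicates with its peers,
# using the host's high-speed interconnect where available.
mpirun -n 4 singularity exec hybrid.sif /opt/app/mpi_hello
```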

These features make Singularity increasingly useful in areas such as machine learning, deep learning, and other data-intensive workloads, where applications benefit from the high bandwidth and low latency of these technologies.[27]


HPC systems typically already have resource management and job scheduling systems in place, so a container runtime environment must integrate with the existing resource manager.

Using other enterprise container solutions, such as Docker, on HPC systems would require modifications to the software.[28]

Singularity integrates seamlessly with many resource managers,[29] including HTCondor.[30]
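Under Slurm, for instance, a container can simply be invoked from a batch script, so the scheduler allocates resources exactly as it would for a native job. The script below is a hypothetical sketch (image and program names are illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=containerized-job
#SBATCH --nodes=1
#SBATCH --ntasks=4
#SBATCH --time=00:10:00

# srun launches the tasks; each task runs the application inside the
# container, so no changes to the resource manager itself are required.
srun singularity exec analysis.sif ./run_analysis
```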

References


  1. ^ "Releases · hpcng/singularity". Retrieved 29 April 2021.
  2. ^ "Singularity+GoLang". 14 February 2018.
  3. ^ "Singularity License". Singularity Team. 3 July 2018. Retrieved 10 July 2018.
  4. ^ "Singularity presentation at FOSDEM 17".
  5. ^ Kurtzer, Gregory M; Sochat, Vanessa; Bauer, Michael W (2017). "Singularity: Scientific containers for mobility of compute". PLOS ONE. 12 (5): e0177459. Bibcode:2017PLoSO..1277459K. doi:10.1371/journal.pone.0177459. PMC 5426675. PMID 28494014.
  6. ^ "Singularity, a container for HPC". 24 April 2016.
  7. ^ "Singularity Manual: Mobility of Compute".
  8. ^ "Sylabs brings Singularity containers into commercial HPC".
  9. ^ "Singularity License". Singularity Team. 19 March 2018. Retrieved 19 March 2018.
  10. ^ "Changes to the file in Singularity source code made in April 2017".
  11. ^ "Berkeley Lab's Open-Source Spinoff Serves Science". 7 June 2017.
  12. ^ "XStream online user manual, section on Singularity".
  13. ^ "XStream cluster overview".
  14. ^ "Sherlock Supercomputer: What's New, Containers and Deep Learning Tools".
  15. ^ "NIH HPC online user manual, section on Singularity".
  16. ^ "NIH HPC Systems".
  17. ^ "Singularity on the OSG".
  18. ^ "Singularity in CMS: Over a million containers served" (PDF).
  19. ^ "HPCwire Reveals Winners of the 2016 Readers' and Editors' Choice Awards at SC16 Conference in Salt Lake City".
  20. ^ "HPCwire Reveals Winners of the 2017 Readers' and Editors' Choice Awards at SC17 Conference in Denver".
  21. ^ "HPCwire Reveals Winners of the 2017 Readers' and Editors' Choice Awards at SC17 Conference in Denver".
  22. ^ "Voluntary registry of Singularity installations".
  23. ^ "Intel Advanced Tutorial: HPC Containers & Singularity – Advanced Tutorial – Intel" (PDF).
  24. ^ "Intel Application Note: Building Containers for Intel® Omni-Path Fabrics using Docker* and Singularity" (PDF).
  25. ^ "Singularity Manual: A GPU example".
  26. ^ "Intel Advanced Tutorial: HPC Containers & Singularity – Advanced Tutorial – Intel" (PDF).
  27. ^ Tallent, Nathan R; Gawande, Nitin A; Siegel, Charles; Vishnu, Abhinav; Hoisie, Adolfy (2018). Evaluating On-Node GPU Interconnects for Deep Learning Workloads. Lecture Notes in Computer Science. 10724. pp. 3–21. doi:10.1007/978-3-319-72971-8_1. ISBN 978-3-319-72970-1. S2CID 1674152.
  28. ^ Jonathan Sparks, Cray Inc. (2017). "HPC Containers in use" (PDF).
  29. ^ "Support on existing traditional HPC".
  30. ^ "HTCondor Stable Release Manual : Singularity Support".
