
SHMEM (from Symmetric Hierarchical Memory access) is a family of parallel programming libraries that initially provided remote memory access via one-sided communications on large shared-memory supercomputers.[1] It was later extended to distributed-memory parallel computer clusters, and is used either as a parallel programming interface in its own right or as a low-level interface for building PGAS systems and languages.[2] The first SHMEM library, libsma, was created by Cray in 1993. SHMEM was later also implemented by SGI, Quadrics, HP, IBM, QLogic, and Mellanox, by the Universities of Houston and Florida (GSHMEM), and as the open-source OpenSHMEM.[3]

Historically, SHMEM was the earliest one-sided communication library,[4] and it popularized the one-sided parallel programming paradigm.[5]

A program written with SHMEM is started on several computers connected by a high-performance network supported by the SHMEM library in use. Every computer runs a copy of the program (the SPMD model); each copy is called a PE (processing element). A PE can ask the library to perform remote memory access operations, such as reading ("shmem_get") or writing ("shmem_put") data. These peer-to-peer operations are one-sided: no active cooperation from the remote PE is needed to complete the action (although the remote PE can poll its local memory for changes using "shmem_wait"). Operations can act on short types such as bytes and words, or on longer datatypes such as arrays, which in some cases may even be evenly strided or indexed (only some elements of the array are sent). For short datatypes, SHMEM can perform atomic operations (compare-and-swap, fetch-and-add, atomic increment, etc.), even on remote memory. There are two different synchronization methods:[3] task-control synchronization (barriers and locks) and functions that enforce memory fencing and ordering. SHMEM also offers several collective operations, which must be started by all PEs, such as reductions, broadcasts, and collects.

Every PE has part of its memory declared as a "symmetric" segment (or shared memory area), while the rest of its memory is private. Only "symmetric" memory can be accessed by remote PEs in one-sided operations. It is possible to create symmetric objects that have the same address on every PE.
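The symmetric-memory model can be illustrated with a short C sketch (written against the classic SGI-style interface described in this article; newer OpenSHMEM versions spell these functions shmem_init, shmem_malloc, and shmem_free):

```c
#include <shmem.h>   /* classic SGI/Cray SHMEM interface */
#include <stdio.h>

long counter = 0;    /* global/static data is symmetric: it exists at
                        the same address on every PE */

int main(void) {
    start_pes(0);    /* 0 = take the number of PEs from the environment */
    int me = _my_pe();

    /* shmalloc is a collective call: every PE allocates the same size
       and receives a symmetric address usable as a remote target. */
    long *buf = (long *)shmalloc(sizeof(long));
    *buf = me;

    /* Both &counter and buf may be targets of one-sided operations
       from any PE; a pointer from plain malloc may not, because it
       lives in private memory. */
    shmem_barrier_all();

    printf("PE %d: symmetric buffer holds %ld\n", me, *buf);
    shfree(buf);
    return 0;
}
```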

Typical SHMEM functions

  • start_pes(N) - start N processing elements (PEs)
  • _my_pe() - ask SHMEM to return the PE identifier of the calling process
  • shmem_barrier_all() - block until all PEs reach the barrier, then let them all proceed
  • shmem_put(target, source, length, pe) - write data of length "length" from the local address "source" to the remote address "target" on the PE with id "pe"
  • shmem_get(target, source, length, pe) - read data of length "length" from the remote address "source" on the PE with id "pe" and store it at the local address "target"[6]
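A minimal program combining the functions above might look as follows (a sketch using the classic SGI-style API; shmem_long_put is the long-typed variant of shmem_put, with length counted in elements):

```c
#include <shmem.h>
#include <stdio.h>

long src;
long dest;   /* symmetric: one instance per PE, same address everywhere */

int main(void) {
    start_pes(0);
    int me  = _my_pe();
    int num = _num_pes();        /* total number of PEs */

    src = me;
    shmem_barrier_all();         /* ensure src is initialized on all PEs */

    /* One-sided write of our PE id into 'dest' on the next PE in a
       ring; the target PE does not participate in the transfer. */
    shmem_long_put(&dest, &src, 1, (me + 1) % num);

    shmem_barrier_all();         /* wait until all puts have completed */
    printf("PE %d received %ld\n", me, dest);
    return 0;
}
```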

List of SHMEM implementations

  • SGI: SGI-SHMEM for systems with NUMAlink and for Altix systems built with InfiniBand network adapters
  • Cray's original SHMEM for T3D, T3E, PVP supercomputers[7]
  • Cray: MP-SHMEM for Unicos MP (X1E supercomputer)
  • Cray: LC-SHMEM for Unicos LC (Cray XT3, XT4, XT5)
  • Quadrics: Q-SHMEM[8] for Linux clusters with QsNet interconnect[7]
  • Cyclops-64 SHMEM
  • HP SHMEM[7]
  • IBM SHMEM[7]
  • GPSHMEM[7]

OpenSHMEM implementations (OpenSHMEM is a standardization effort led by SGI):

  • University of Houston: Reference OpenSHMEM[3][7]
  • Mellanox ScalableSHMEM[7]
  • Portals-SHMEM (on top of Portals interface)
  • University of Florida: Gator SHMEM[7]


In its first years, SHMEM was available only on certain Cray machines (later also on SGI systems)[9] equipped with proprietary networks, which limited the library's spread and created vendor lock-in (for example, Cray recommended partially rewriting MPI programs to combine MPI and shmem calls, making the program non-portable to pure-MPI environments).

SHMEM was never defined as a standard,[7][9] so several incompatible variants of SHMEM libraries were created by different vendors. The libraries use different include-file names and different management-function names for starting PEs or getting the current PE id,[7] and some functions were changed or left unsupported.

Some SHMEM routines were designed around limitations of the Cray T3D architecture; for example, reductions and broadcasts can be started only on subsets of PEs whose size is a power of two.[1][7]

There are now variants of SHMEM libraries that can run on top of any MPI library, even when a cluster has only slow Ethernet, though their performance is correspondingly worse.

Memory in the shared region must be allocated with special functions (shmalloc/shfree), not with the system malloc.[7]

It is easy to write a SHMEM program that deadlocks.[10]
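One hypothetical way such a deadlock can arise (a sketch, not taken from the cited source): one PE blocks in shmem_wait on a flag that its peer only sets after a barrier, while the peer is already blocked inside that barrier, so neither can make progress:

```c
#include <shmem.h>

long flag = 0;   /* symmetric flag variable */

int main(void) {
    start_pes(0);
    long one = 1;

    if (_my_pe() == 0)
        shmem_wait(&flag, 0);   /* PE 0 blocks until flag != 0 ... */

    shmem_barrier_all();        /* ... but PE 1 is stuck here waiting
                                   for PE 0, which never arrives */

    if (_my_pe() == 1)
        shmem_long_put(&flag, &one, 1, 0);   /* never reached */
    return 0;
}
```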

SHMEM is available only for C and Fortran (some versions also support C++).[7]

References


  1. ^ a b Introduction to Parallel Computing - 3.11 Related Work // cse590o course, University of Washington, Winter 2002; page 154
  2. ^ "New Accelerations for Parallel Programming". Mellanox. 2012. Retrieved 18 January 2014. "SHMEM is being used/proposed as a lower level interface for PGAS implementations" 
  3. ^ a b c Poole, Stephen (2011). "OpenSHMEM - Toward a Unified RMA Model". Encyclopedia of Parallel Computing: 1379–1391. Retrieved 2013-01-15. 
  4. ^ Tools for Benchmarking, Tracing, and Simulating SHMEM Applications // CUG 2012, paper by San Diego Supercomputer center and ORNL
  5. ^ Recent Advances in Parallel Virtual Machine and Message Passing ..., Volume 11 page 59: "One-sided communication as a programming paradigm was made popular initially by the SHMEM library on the Cray T3D and T3E..."
  6. ^ man shmem_get (SGI TPL)
  7. ^ a b c d e f g h i j k l m OpenSHMEM TUTORIAL // University of Houston, Texas, 2012
  8. ^ Shmem Programming Manual // Quadrics, 2000-2001
  9. ^ a b SHMEM // Cray, Document 004-2178-002, chapter 3
  10. ^
