Open MPI

Open MPI
Stable release: 3.1.2 / August 24, 2018
Operating system: Unix, Linux, macOS, FreeBSD[1]
Platform: Cross-platform
Type: Library
License: New BSD License (free software)
Website: www.open-mpi.org

Open MPI is a Message Passing Interface (MPI) library project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI). It is used by many TOP500 supercomputers including Roadrunner, which was the world's fastest supercomputer from June 2008 to November 2009,[2] and K computer, the fastest supercomputer from June 2011 to June 2012.[3][4]

Overview

Open MPI represents the merger of three well-known MPI implementations:

  • FT-MPI from the University of Tennessee
  • LA-MPI from Los Alamos National Laboratory
  • LAM/MPI from Indiana University

with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.

The Open MPI developers selected these MPI implementations because each excelled in one or more areas. Open MPI aims to combine the best ideas and technologies from the individual projects into a single world-class open-source MPI implementation that excels in all areas. The Open MPI project specifies several top-level goals:

  • to create a free, open-source, peer-reviewed, production-quality, complete MPI-3.0 implementation
  • to provide extremely high, competitive performance (low latency or high bandwidth)
  • to involve the high-performance computing community directly with external development and feedback (vendors, 3rd party researchers, users, etc.)
  • to provide a stable platform for 3rd-party research and commercial development
  • to help prevent the "forking problem" common to other MPI projects[5]
  • to support a wide variety of high-performance computing platforms and environments

Code modules

The Open MPI code base has three major code modules (a minimal usage sketch follows the list):

  • OMPI - MPI code
  • ORTE - the Open Run-Time Environment
  • OPAL - the Open Portable Access Layer
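
These modules are internal layers; applications program against the standard MPI API that the OMPI layer exposes. As an illustrative sketch only (not drawn from the article's sources), a minimal MPI "hello world" in C could look like the following; it uses only standard MPI calls, which Open MPI implements, and the file name hello_mpi.c is assumed for the example.

    /* hello_mpi.c - minimal MPI program; each process reports its rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut down the MPI runtime */
        return 0;
    }

With Open MPI installed, such a program would typically be compiled with the wrapper compiler and launched through the run-time environment (ORTE), for example with mpicc hello_mpi.c -o hello_mpi followed by mpirun -np 4 ./hello_mpi.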

Commercial implementations

  • Sun HPC ClusterTools - beginning with version 7, Sun switched to Open MPI
  • bullx MPI - in 2010, Bull announced the release of bullx MPI, based on Open MPI[6]

See also

References

  1. ^ https://www.freshports.org/net/openmpi2
  2. ^ Jeff Squyres. "Open MPI: 10^15 Flops Can't Be Wrong" (PDF). Open MPI Project. Retrieved 2011-09-27.
  3. ^ "Programming on K computer" (PDF). Fujitsu. Retrieved 2012-01-17.
  4. ^ "Open MPI powers 8 petaflops". Cisco Systems. Retrieved 2011-09-27.
  5. ^ Preventing forking is a goal; how will you enforce that?
  6. ^ Aurélie Negro. "Bull launches bullx supercomputer suite". Bull SAS. Retrieved 2013-09-27.