
Massively parallel

From Wikipedia, the free encyclopedia


In computing, massively parallel refers to the use of a large number of processors (or separate computers) to perform a set of coordinated computations in parallel (simultaneously).
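The idea can be illustrated with a minimal sketch (a generic example using only the Python standard library, not tied to any particular machine): a pool of worker processes each computes a partial result in parallel, and the partial results are then combined.

    # Coordinated parallel computation: split one large task into chunks,
    # compute the chunks simultaneously, and combine the partial results.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(range(lo, hi))

    if __name__ == "__main__":
        n, workers = 10_000_000, 8          # illustrative sizes
        step = n // workers
        chunks = [(i * step, (i + 1) * step) for i in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))   # combine partial sums
        print(total)   # equals sum(range(n)) since n is divisible by workers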

In one approach, e.g. grid computing, the processing power of a large number of computers distributed across diverse administrative domains is used opportunistically whenever a computer is available.[1] An example is BOINC, a volunteer-based, opportunistic grid system in which the grid provides power only on a best-effort basis.[2]
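A rough sketch of this opportunistic, best-effort model follows (it does not reflect BOINC's actual protocol; all names and sizes are illustrative): a coordinator posts independent work units to a queue, and whichever volunteer workers happen to be available pull units until they go offline.

    # Best-effort volunteer model: workers take units only while they are
    # online; any units left over simply wait for a later volunteer.
    import queue, threading

    work_units = queue.Queue()
    results = queue.Queue()

    def volunteer(name):
        while True:
            try:
                unit = work_units.get(timeout=0.5)   # take work only if some exists
            except queue.Empty:
                return                               # volunteer goes offline
            results.put((name, unit, unit ** 2))     # stand-in computation
            work_units.task_done()

    for unit in range(20):
        work_units.put(unit)

    # Only the volunteers that happen to be available contribute.
    threads = [threading.Thread(target=volunteer, args=(f"host{i}",)) for i in range(3)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(results.qsize(), "work units completed on a best-effort basis")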

In another approach, a large number of processors are used in close proximity to each other, e.g. in a computer cluster. In such a centralized system, the speed and flexibility of the interconnect become very important, and modern supercomputers have used various approaches ranging from enhanced InfiniBand systems to three-dimensional torus interconnects.[3]
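For example, in a three-dimensional torus each node has exactly six neighbours, with coordinates wrapping around at the grid edges, which keeps most communication local. The following sketch (with illustrative dimensions, not those of any real machine) computes the neighbours of a node.

    # Each node (x, y, z) on an X*Y*Z torus has six neighbours; the modulo
    # arithmetic provides the wrap-around links at the edges of the grid.
    def torus_neighbors(x, y, z, X, Y, Z):
        return [
            ((x + dx) % X, (y + dy) % Y, (z + dz) % Z)
            for dx, dy, dz in [(1, 0, 0), (-1, 0, 0),
                               (0, 1, 0), (0, -1, 0),
                               (0, 0, 1), (0, 0, -1)]
        ]

    print(torus_neighbors(0, 0, 0, 8, 8, 8))
    # [(1, 0, 0), (7, 0, 0), (0, 1, 0), (0, 7, 0), (0, 0, 1), (0, 0, 7)]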

The term also applies to massively parallel processor arrays (MPPAs), a type of integrated circuit with an array of hundreds or thousands of central processing units (CPUs) and random-access memory (RAM) banks. These processors pass work to one another through a reconfigurable interconnect of channels. By harnessing a large number of processors working in parallel, an MPPA chip can accomplish more demanding tasks than conventional chips.[citation needed] MPPAs are based on a software parallel programming model for developing high-performance embedded system applications.
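The channel-style programming model described above can be sketched roughly as follows (this is a generic illustration, not any vendor's MPPA toolchain): each processing element runs its own small program and passes work to the next over a point-to-point channel, modelled here with interprocess queues.

    # Two "processing elements" connected by channels: each stage reads from
    # its inbox, applies its own transform, and forwards the result downstream.
    from multiprocessing import Process, Queue

    def double(v):
        return v * 2

    def increment(v):
        return v + 1

    def stage(inbox, outbox, transform):
        while True:
            item = inbox.get()
            if item is None:            # sentinel: shut down and tell the next stage
                outbox.put(None)
                return
            outbox.put(transform(item))

    if __name__ == "__main__":
        a, b, c = Queue(), Queue(), Queue()
        stages = [
            Process(target=stage, args=(a, b, double)),
            Process(target=stage, args=(b, c, increment)),
        ]
        for p in stages: p.start()
        for v in [1, 2, 3]: a.put(v)
        a.put(None)
        results = []
        while (item := c.get()) is not None:
            results.append(item)
        for p in stages: p.join()
        print(results)   # [3, 5, 7]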

The Goodyear MPP was an early implementation of a massively parallel computer architecture. As of November 2013, MPP architectures were the second most common supercomputer implementation, after clusters.[4]

See also

References

  1. ^ Prodan, Radu; Fahringer, Thomas (2007). Grid Computing: Experiment Management, Tool Integration, and Scientific Workflows. ISBN 3-540-69261-4. pp. 1–4.
  2. ^ Fernández de Vega, Francisco (2010). Parallel and Distributed Computational Intelligence. ISBN 3-642-10674-9. pp. 65–68.
  3. ^ Knight, Will (June 2007). "IBM creates world's most powerful computer". NewScientist.com news service.
  4. ^ TOP500 list, November 2013: http://s.top500.org/static/lists/2013/11/TOP500_201311_Poster.png