Embarrassingly parallel

In parallel computing, an embarrassingly parallel workload, or embarrassingly parallel problem, is one for which little or no effort is required to separate the problem into a number of parallel tasks. This is often the case when there is no dependency (or need for communication) between those parallel tasks.[1]
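To make the definition concrete, here is a minimal sketch in Python (an illustration, not from the original article); the function process_item is a hypothetical stand-in for any unit of work that depends only on its own input:

    # Each task reads only its own input and shares no state with its
    # peers, so the inputs can simply be split across worker processes.
    from multiprocessing import Pool

    def process_item(x):
        # Hypothetical independent unit of work: purely local computation.
        return x * x

    if __name__ == "__main__":
        with Pool() as pool:
            # map() hands each input to a worker; gathering the results
            # at the end is the only coordination required.
            results = pool.map(process_item, range(100))
        print(results[:5])  # [0, 1, 4, 9, 16]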

Embarrassingly parallel problems (also called "pleasingly parallel" problems) tend to require little or no communication of results between tasks, and are thus different from distributed computing problems that require communication between tasks, especially communication of intermediate results. They are easy to perform on server farms, which lack the special infrastructure used in a true supercomputer cluster, and are thus well suited to large, internet-based distributed platforms such as BOINC.

A common example of an embarrassingly parallel problem is 3D projection on graphics processing units (GPUs), where each pixel on the screen may be rendered independently.
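Because no pixel depends on any other, the work splits trivially. The following Python sketch illustrates the idea (shade() is a hypothetical stand-in for a real per-pixel computation); here whole rows are farmed out to separate worker processes:

    from multiprocessing import Pool

    WIDTH, HEIGHT = 640, 480

    def shade(x, y):
        # Hypothetical shading: the value depends only on (x, y).
        return (x * y) % 256

    def render_row(y):
        # One independent task: compute every pixel in row y.
        return [shade(x, y) for x in range(WIDTH)]

    if __name__ == "__main__":
        with Pool() as pool:
            # Rows never communicate; the image is assembled at the end.
            image = pool.map(render_row, range(HEIGHT))
        print(len(image), "rows rendered")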

Etymology of the term

The etymology of the phrase "embarrassingly parallel" is not known, but it is believed to be a comment on the ease of parallelizing such applications: it would be embarrassing for the programmer or compiler not to take advantage of such an obvious opportunity to improve performance. The term first appears in the literature in a 1986 book on multiprocessors.[2]

An alternative term, "pleasingly parallel," has gained some use, perhaps to avoid the negative connotations of embarrassment in favor of a positive reflection on the parallelizability of the problems.

Examples

Some examples of embarrassingly parallel problems include:

  - Serving static files on a web server to many users at once.
  - Rendering of computer graphics, where each frame or pixel can be computed independently.
  - Brute-force searches in cryptography, with each worker testing a disjoint portion of the key space.
  - BLAST searches in bioinformatics, run on many independent query sequences.[3]
  - Large-scale face recognition, where each candidate image can be compared independently.[4]
  - Monte Carlo simulations, since each random trial is independent of the others.
  - Genetic algorithms, where the fitness of each individual in a population can be evaluated independently.

A sketch of one of these examples, a Monte Carlo estimate of π, follows the list.
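As an illustration (not from the article's original text), the following minimal Python sketch estimates π by Monte Carlo sampling: each worker draws its own random points in the unit square and counts hits inside the quarter-circle, and the only coordination is summing the per-worker counts at the end:

    import random
    from multiprocessing import Pool

    def count_hits(n):
        # Each process uses its own generator; no state is shared.
        rng = random.Random()
        hits = 0
        for _ in range(n):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    if __name__ == "__main__":
        workers, per_worker = 4, 1_000_000
        with Pool(workers) as pool:
            # Independent trials; only the final counts are combined.
            totals = pool.map(count_hits, [per_worker] * workers)
        print(4 * sum(totals) / (workers * per_worker))  # ≈ 3.14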

References

  1. ^ Foster, Ian (1995). Designing and Building Parallel Programs. Addison–Wesley. ISBN 9780201575941. Section 1.4.4. Archived from the original on 2011-02-21.
  2. ^ Moler, Cleve (1986). "Matrix Computation on Distributed Memory Multiprocessors". In Heath, Michael T. (ed.). Hypercube Multiprocessors. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 0898712092.
  3. ^ SeqAnswers forum
  4. ^ How we made our face recognizer 25 times faster (developer blog post)