In parallel computing, an embarrassingly parallel workload, or embarrassingly parallel problem, is one for which little or no effort is required to separate the problem into a number of parallel tasks. This is often the case where there exists no dependency (or communication) between those parallel tasks.
Embarrassingly parallel problems (also called "pleasingly parallel problems") tend to require little or no communication of results between tasks, and are thus different from distributed computing problems that require communication between tasks, especially of intermediate results. They are easy to perform on server farms that lack the special infrastructure used in a true supercomputer cluster, and are thus well suited to large, internet-based distributed platforms such as BOINC.
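Because the tasks share no data and exchange no messages, a plain worker pool is all the coordination such a workload needs. A minimal sketch in Python (the task function and pool size here are illustrative assumptions, not from the article):

```python
# Each task depends only on its own input, so a process pool can run
# them in any order with no inter-task communication.
from multiprocessing import Pool

def task(x):
    # Hypothetical independent unit of work; no shared state.
    return x * x

if __name__ == "__main__":
    with Pool(4) as pool:
        results = pool.map(task, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Splitting the input across workers is the entire parallelization effort, which is what makes the problem "embarrassingly" parallel.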
Etymology of the term 
The etymology of the phrase "embarrassingly parallel" is not known, but it is believed to be a comment on the ease of parallelizing such applications, and that it would be embarrassing for the programmer or compiler not to take advantage of such an obvious opportunity to improve performance. It first appears in the literature in a 1986 book on multiprocessors.
An alternate term, "pleasingly parallel," has gained some use, perhaps to avoid the negative connotations of embarrassment in favor of a positive reflection on the parallelizability of the problems.
Examples 
Some examples of embarrassingly parallel problems include:
- Distributed relational database queries using distributed set processing.
- Serving static files on a web server to multiple users at once.
- The Mandelbrot set and other fractal calculations, where each point can be calculated independently.
- Rendering of computer graphics. In computer animation, each frame may be rendered independently (see parallel rendering).
- Brute-force searches in cryptography. A notable real-world example is distributed.net.
- BLAST searches in bioinformatics for multiple queries (but not for individual large queries).
- Large-scale face recognition that involves comparing thousands of arbitrarily acquired faces (e.g., from security or surveillance video via closed-circuit television) with a similarly large number of previously stored faces (e.g., a "rogues gallery" or similar watch list).
- Computer simulations comparing many independent scenarios, such as climate models.
- Genetic algorithms and other evolutionary computation metaheuristics.
- Ensemble calculations of numerical weather prediction.
- Event simulation and reconstruction in particle physics.
- The sieving step of the quadratic sieve and the number field sieve.
- The tree-growth step of the random forest machine learning technique.
- In Bitcoin mining, blocks with different nonces can be hashed separately.
- In the R programming language, the snow (Simple Network of Workstations) package implements a simple mechanism for using a set of workstations or a Beowulf cluster for embarrassingly parallel computations.
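The Mandelbrot example in the list above illustrates the pattern well: each point's escape count depends only on its own coordinate, so the grid can be split across workers arbitrarily. A sketch (grid size and iteration limit are illustrative assumptions):

```python
# Per-point Mandelbrot escape-time computation; points are independent,
# so a pool can process them in any partition with no communication.
from multiprocessing import Pool

def escape_count(c, max_iter=50):
    # Standard escape-time iteration z -> z^2 + c for a single point c.
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return max_iter

if __name__ == "__main__":
    # A tiny sample grid; real renderers iterate over millions of pixels.
    points = [complex(x / 10, y / 10)
              for x in range(-20, 10) for y in range(-10, 10)]
    with Pool() as pool:
        counts = pool.map(escape_count, points)
    print(len(counts))  # one escape count per point
```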
See also 
- Amdahl's law – an embarrassingly parallel problem has a parallel fraction P almost or exactly equal to 1.
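The connection to Amdahl's law can be made concrete: the commonly stated form gives the speedup on N processors with parallel fraction P as 1 / ((1 − P) + P/N), so when P is (nearly) 1 the speedup approaches N itself. A small sketch (the function name and sample values are illustrative):

```python
# Amdahl's law: speedup on n processors when fraction p of the work
# is parallelizable and the remaining (1 - p) stays serial.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# An embarrassingly parallel problem (p = 1) scales linearly with n,
# while even a small serial fraction caps the achievable speedup.
print(amdahl_speedup(1.0, 64))   # 64.0
print(amdahl_speedup(0.95, 64))  # about 15.4
```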
References 
- Foster, Ian (1995). Designing and Building Parallel Programs, Section 1.4.4. Addison–Wesley. ISBN 9780201575941. Archived from the original on 2011-02-21.
- Moler, Cleve (1986). "Matrix Computation on Distributed Memory Multiprocessors". In Heath, Michael T. (ed.). Hypercube Multiprocessors. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 0898712092.
External links 
- SeqAnswers forum
- How we made our face recognizer 25 times faster (developer blog post)
- Embarrassingly parallel, Parallel algorithms
- Embarrassingly Parallel Computations, Engineering a Beowulf-style Compute Cluster
- "Star-P: High Productivity Parallel Computing"