Talk:Embarrassingly parallel

From Wikipedia, the free encyclopedia

Name origin

Why is it called embarrassingly? What's the reasoning behind the word use? —The preceding unsigned comment was added by 69.25.246.40 (talkcontribs) 17:44, 23 June 2005 (UTC)

Maybe it started as a criticism of a particular problem that might have been used as a benchmark. This is just second-guessing, though. Since massively parallel tasks are the easiest to parallelize (by definition), you don't get much respect from anyone for having a computer that's capable of doing that. 62.220.237.66 13:30, 23 April 2006 (UTC)[reply]
Probably because it refers to problems that are so easy to parallelize, that it would be very embarrassing if your parallel computing system failed to do so. Compare "embarrassingly obvious". --Piet Delport 18:25, 27 September 2006 (UTC)[reply]
Agree with Piet Delport. In general, parallelizing algorithms is difficult. The difference between "embarrassingly parallel" and "normally parallel" is analogous to coordinating a team of cooks so that each produces a single dish, compared with using the same team for the same task where every cook must contribute to every dish. The former is embarrassingly parallel. The latter is not, and is very much less efficient. Markjamesabraham (talk) 15:44, 4 May 2010 (UTC)[reply]
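The "one cook per dish" case above can be sketched in a few lines of Python (my own illustration, not from the article; the function and data names are invented). Each task reads only its own input and writes only its own output, so the tasks need no coordination at all and a plain parallel map suffices:

```python
from concurrent.futures import ThreadPoolExecutor

def prepare_dish(ingredients):
    # stand-in for independent per-dish work; touches no shared state
    return sorted(ingredients)

orders = [["salt", "egg"], ["rice", "fish"], ["oil", "bread"]]

# sequential and parallel runs give identical results because no dish
# depends on any other: this is the embarrassingly parallel case
with ThreadPoolExecutor() as pool:
    dishes = list(pool.map(prepare_dish, orders))
```

The "every cook contributes to every dish" case would instead need locks or messages between workers, which is exactly the coordination that embarrassingly parallel problems avoid.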
Could you say that an "embarrassingly parallel" problem is one so trivial to parallelize that afterwards you feel embarrassed (or at least a bit silly) for thinking that it was a problem in the first place? "I've parallelized that algorithm! In other news, I tied my own shoelaces!" —Preceding unsigned comment added by 122.57.195.229 (talk) 10:43, 26 December 2010 (UTC)[reply]
Or, alternatively, it's when the problem is so simple that it feels as though it was deliberately set up to be easy - like suddenly finding yourself with dozens of fawning servants waiting on you even though you feel you've done nothing to deserve them.
This section of the "Talk:" page was interesting and helpful. Should there perhaps be a section in the actual article that includes some of this information, called something like "== Etymology of the term =="? Or should there be an article (maybe in Wiktionary?) called Etymology of the term "embarrassingly parallel", which this article could link or point to? Just an idea... Of course, on a talk page the rules about "original research" and "blatant speculation" are not as strict; would this be a problem? And if so, could the new section (or new article) refer readers to this section of the talk page? --Mike Schwartz (talk) 17:54, 21 February 2011 (UTC)[reply]
In his blog[1] , Cleve Moler (of MATLAB) claims to have coined the term "Embarrassingly Parallel". --Amro (talk) 18:19, 12 November 2013 (UTC)[reply]

Game-tree search

I'm not sure I agree that game-tree search in artificial intelligence is in general (or even frequently) embarrassingly parallel; while minimax is, alpha-beta is definitely not. --Kenta2 12:30, 27 December 2005 (UTC)[reply]

You are correct. Only the simplest implementations are. Any technique which uses gathered knowledge to search more effectively presents parallelization problems (alpha-beta is an example). —The preceding unsigned comment was added by 84.193.175.91 (talkcontribs) 10:46, 12 January 2006 (UTC)
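To make the distinction above concrete, here is a minimal sketch (my own, with an invented toy tree) of why plain minimax parallelizes trivially: each root branch is scored with no shared state, so the per-branch calls could be handed to separate workers as-is. Alpha-beta, by contrast, carries the best score found so far from one branch into the next as a pruning bound, which creates a sequential dependency between branches.

```python
def minimax(node, maximizing):
    # leaves are plain integers: the static evaluation
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# toy game tree: three root moves, the minimizing opponent replies in each
tree = [[3, 5], [2, 9], [0, 7]]

# each branch score depends only on its own subtree, so this list
# comprehension could be a parallel map with no coordination at all
branch_scores = [minimax(branch, False) for branch in tree]
best = max(branch_scores)
```

In alpha-beta the result of scoring `tree[0]` would tighten the (alpha, beta) window used while scoring `tree[1]`, so the branches can no longer be evaluated independently without losing pruning.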

Pleasingly Parallel

I've also seen a growing use of the alternative term "pleasingly parallel", as a euphemism for this term, since "embarrassingly" can have negative connotations. -- Bovineone 17:59, 14 April 2006 (UTC)[reply]

Really? No-one thought to call it "trivially parallel"?? — Preceding unsigned comment added by 222.155.198.204 (talk) 01:21, 17 February 2014 (UTC)[reply]
The term doesn't mean that it's easy or trivial to parallelize something, just that it parallelizes nicely and provides large gains in speed. Something could be embarrassingly/pleasingly parallel but still be a nightmare to implement. --Bigpeteb (talk) 21:53, 27 February 2014 (UTC)[reply]

Counter Examples

I believe this term is also used to describe cloud computing, as in: cloud computing is useful for solving embarrassingly parallel problems like millions of users requesting search. But it would be helpful here to have some counter-examples... what are examples of parallel computing at the other end of the spectrum from this?

bobheuman 9:00, 23 Jun 2008 (UTC)

Perhaps large and deep neural networks. The high connectivity between and within layers means tranches of neurons can't be separated and run independently from each other, however much they are running "in parallel". — Preceding unsigned comment added by 125.238.199.71 (talk) 00:05, 27 June 2022 (UTC)[reply]

Disconcertingly serial

The phrase "disconcertingly serial" has three matches on Google, none of which are relevant. This should not be included as if it were standard terminology. —Preceding unsigned comment added by 125.238.28.39 (talk) 11:01, 5 October 2008 (UTC)[reply]

Two or more types of embarrassing parallelism?

Is there any mention in the literature of two or more types of embarrassing parallelism? I ask because although calculation of, say, the Mandelbrot set appears perfectly suited to the task, there is the factor that some pixels take many times longer to calculate than other pixels. We're talking not just an offset of time, but a multiple.

I wonder if ray traced pixels suffer from a similar problem.--Skytopia (talk) 20:35, 19 December 2008 (UTC)[reply]

This is a common issue in "pleasantly" parallel computing and is related to the halting problem.
But the point is that none of the other pixels have to wait for the laggards before they get their turn at being rendered. (While you could interleave iterations for different pixels, that leads to its own host of problems with things like locality.) While the previous commenter notes that this does relate to the Halting Problem, it's hardly limited to "'pleasantly' parallel computing": if one of your Mandelbrot set pixels takes n times longer to calculate than another then it's going to take n times longer to calculate whether you calculate them in parallel or not (as the page notes, cf. Amdahl's Law).
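The load imbalance being discussed is easy to demonstrate (this is my own sketch of the standard escape-time algorithm, not code from the article): every pixel is computed independently, yet nearby points can differ in cost by a large multiple, because points inside the set always burn the full iteration budget while points far outside escape almost immediately.

```python
def escape_time(c, max_iter=1000):
    # standard Mandelbrot escape-time iteration: z -> z*z + c
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return n      # escaped quickly: a cheap pixel
    return max_iter        # presumed inside the set: the most expensive pixel

fast = escape_time(2 + 2j)  # escapes on the first iteration
slow = escape_time(0 + 0j)  # 0 is in the set, so it costs max_iter iterations
```

Each `escape_time` call is still embarrassingly parallel (no call depends on another); the imbalance only means that static, equal-sized work splits finish at very different times, which is why Mandelbrot renderers typically hand out small chunks of pixels dynamically.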

How are simulation problems embarrassingly parallel?

Just wondering. —Preceding unsigned comment added by 75.42.128.49 (talk) 04:52, 21 March 2009 (UTC)[reply]


Added implementation section

Thought it might be a good contribution if there were a reference for major implementations of parallel processing in various languages. What do you think? Talgalili (talk) 12:18, 30 August 2009 (UTC)[reply]

You mean things like Fortress? —Preceding unsigned comment added by 122.57.195.229 (talk) 11:05, 26 December 2010 (UTC)[reply]

Distributed Controversy

Is it ok to say that EP problems are different from distributed computing ones and are thus well suited to large distributed platforms? I see it in the intro. --Javalenok (talk) 13:20, 24 April 2012 (UTC)[reply]

Bitcoin

I'm a fan of Bitcoin and studied maths, but it took me some time to understand why "In Bitcoin mining, blocks with different nonces can be hashed separately." is "embarrassingly parallel". In mining, a large set of data is hashed, and then this small hash is hashed again, like hash(hash + nonce). This can be delegated to several CPUs, GPUs, FPGAs (ASICs?) or mining pool clients. Actually, the whole list is pretty hard to understand for non-experts of the respective fields. What should be done? Should I explain a bit more about Bitcoin mining, with links? --Giszmo (talk) 23:34, 1 January 2013 (UTC)[reply]
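A toy version may help here (my own sketch: the header bytes, 4-byte nonce width, and difficulty prefix are all simplifications, not the real Bitcoin protocol). The point of the article's example is that each hash attempt depends only on (header, nonce), so disjoint nonce ranges can be searched by separate workers with no coordination until one of them finds a hit:

```python
import hashlib

def attempt(header: bytes, nonce: int) -> str:
    # double SHA-256 of header-plus-nonce, in the style of Bitcoin mining
    data = header + nonce.to_bytes(4, "little")
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

def search(header: bytes, nonces, difficulty_prefix: str):
    # each attempt() is independent, so this range could be split
    # across CPUs, GPUs, or pool clients exactly as-is
    for n in nonces:
        if attempt(header, n).startswith(difficulty_prefix):
            return n
    return None

found = search(b"example block header", range(100_000), "00")
```

Splitting `range(100_000)` into per-worker sub-ranges changes nothing about correctness, which is what makes the search embarrassingly parallel; the only shared step is stopping everyone once a valid nonce is found.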

References

  1. ^ Moler, Cleve. "The Intel Hypercube, part 2". Retrieved 12 November 2013.