# Talk:Amdahl's law

WikiProject Computer science (Rated C-class, High-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.

## Untitled

Anyone want to explain this a bit more in the article? What is the practical meaning of this? And is this a "real" law, as in a law of physics, or an "approximate" law, like Moore's Law, or a "folklore law" like Murphy's? -- Tarquin

It's a practical engineering law for a vast class of imaginable computers. A nit-pick might be that in certain special cases it would be 1 / max(F, (1-F)/P), assuming that even the irreducible sequential bit can be done in parallel with the parallel bit, but the result for P --> infinity is the same: 1/F.
It's always possible that some new kind of computer will "break" Amdahl's law, but this would be the same as F --> 0, in which case Amdahl's law could still be held to be valid. The Anome
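To illustrate the nit-pick above, here is a small sketch (the value F = 0.1 is just an illustrative choice, not from the discussion) comparing the standard form of the law with the overlapped variant; both converge to 1/F as P grows:

```python
def amdahl(F, P):
    """Standard form: the serial fraction F runs before/after the parallel part."""
    return 1.0 / (F + (1.0 - F) / P)

def amdahl_overlapped(F, P):
    """Variant: the serial part runs concurrently with the parallel part."""
    return 1.0 / max(F, (1.0 - F) / P)

F = 0.1  # 10% irreducibly sequential
for P in (1, 10, 100, 10_000):
    print(P, amdahl(F, P), amdahl_overlapped(F, P))
# Both approach the same limit, 1/F = 10, as P grows.
```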

I just put up the general case of Amdahl's law with an explanation of what it means. Another example could still be put in to show the law of diminishing returns, which is really the "big picture" you're looking for, but I want to see the current version stabilize first. MShonle 02:43, 24 Jan 2004 (UTC)

Wow, just saw the recent deletion of the contents of the article. I can't imagine why someone would want to do that. Glad someone put it back up. MShonle 11:12, 23 Mar 2004 (UTC)
Because it is retarded beyond belief? 2 pages of formulas to explain Captain Obvious' finest work: "if a part of work cannot be shared, you can't do work faster than length of this part" and "profit = saving * relative_length_of_optimized_part"? Simple multiplication inflated to two fucking pages of scary math? No, really? I guess I'm off to post on some popular forum something along the lines of "the sun will rise tomorrow", and then come back to write this extremely important information to WP. --Rowaa[SR13] (talk) 16:56, 3 May 2012 (UTC)

I was a little confused about this law; usually Wikipedia explains things so nicely, but this article was a little hard to understand :). This link does some explaining as well, but is very long-winded: http://www.phy.duke.edu/brahma/beowulf_book/node21.html --ShaunMacPherson 19:24, 8 Apr 2004 (UTC)

The link above does not work anymore; here is another one, including PDF and PS versions:

http://www.phy.duke.edu/~rgb/Beowulf/beowulf_book/

Do I misunderstand, or does the meaning of F flip-flop in the middle of this article? Yuck. GWO 15:35, 5 May 2004 (UTC)

In the second part of the article, 1-F and F should be swapped. Then, the Ps should be changed back to Fs. MShonle 17:55, 5 May 2004 (UTC)

## Trivial law

I think the problem with explaining this "law" is that it is too trivial. If you express it in durations rather than speedups and proportions then it boils down to something so obvious that nobody would even call it a law. Let me try:

Consider a process with multiple phases, such as getting out of the house in the morning. Let's call the total duration of the process T, and the duration of one phase (such as eating breakfast), t. The duration of all other phases is then T-t. If we speed up the chosen phase by a factor of S, the time it takes to complete it will be t/S, and the total duration will become:

(T-t) + t/S

How hard is that?

You get the usual form of the law by taking the ratio of T to the above expression and writing P for t/T.
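In code the substitution looks like this (a sketch; the 60-minute morning with a 15-minute breakfast is just an illustrative choice of T and t):

```python
def total_time(T, t, S):
    """Total duration after speeding up the chosen phase by a factor of S."""
    return (T - t) + t / S

def speedup(T, t, S):
    """Ratio of the original duration T to the new duration."""
    return T / total_time(T, t, S)

def amdahl(P, S):
    """The usual form of the law, with P = t / T."""
    return 1.0 / ((1.0 - P) + P / S)

T, t, S = 60.0, 15.0, 3.0  # 60-minute morning, 15-minute breakfast, 3x faster
print(speedup(T, t, S))    # identical to amdahl(t / T, S)
print(amdahl(t / T, S))
```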

-- bdolicki 14:35, 9 May 2005 (UTC)

Most call it a law, even though it may seem like a misnomer to you. However, there are insights to be gained even from trivial ideas. For example, imagine being stuck in a budget meeting and spending three hours talking about a budget item that wouldn't change the overall picture much. The idea that "little can be gained from this" wasn't obvious to everyone. (Indeed, some may say the best laws seem obvious in retrospect.) MShonle 22:24, 9 May 2005 (UTC)
I don't mind the name. If many people refer to it as Amdahl's law, then an article under that name makes sense. My problem is this: The fact that it is trivial is not apparent from how it is presented in the article. On the first reading I got the impression that it is something that I didn't know before. So let's change the presentation so that it becomes easier to follow, starting with the obvious and getting to the final form by a simple mathematical substitution. bdolicki 20:40, 10 May 2005 (UTC)
As I understand it, you just want to see the reciprocal first? The formula presented is just the reciprocal of what you want, which isn't much complication. Perhaps I'm misunderstanding? Further, Amdahl's law is concerned with "speed up," a technical unit, and changing the unit of the law doesn't seem worth it. Also, reducing to time taken for a particular run simplifies it, but misses the "big picture" point that is concerned with fractions taken (and is universal for all runs). MShonle 02:30, 11 May 2005 (UTC)
I was also quite confused by the naming of this "law" which is just basic division. — Preceding unsigned comment added by 173.230.162.148 (talk) 00:20, 10 November 2011 (UTC)
By the same reasoning F = ma is just basic multiplication. Just because the law is simple does not mean it is not useful or notable. Naming it a law is dictated by secondary sources which also call it Amdahl's Law. IRWolfie- (talk) 17:43, 13 February 2012 (UTC)

## Generalized Amdahl's law

I'd like to make the observation that the addition of the Generalized Amdahl's law entry on this page isn't actually general, but applies only in the special case where different sections of the program are sped up. It would seem that this applies more toward software optimization (i.e. optimizing one function to be 2x faster, and another function to be 3x faster) than it does to generalized speed-up.

If a program is sped-up in two different ways (i.e. part of the program is vectorized, and part of the program is threaded to run on multiple processors) then the generalized rule that is mentioned falls apart if there is any overlap between the two methods (i.e. some vector code is also parallelized across multiple processors).

A generalized rule (i.e. one that applies both to parallelization and software optimization) would probably be unnecessarily complex, but for two dimensions of parallelism, the speedup is:

$S_{total} = \frac{1}{\frac{P_1 - P_v}{S_1} + \frac{P_2 - P_v}{S_2} + \frac{P_v}{S_1S_2} + 1 + P_v - P_1 - P_2}$

Where:

• $P_1 \$ is the proportion of the program that can be improved from method #1 (i.e. multi-processing),
• $S_1 \$ is the speed-up applied for improvement #1,
• $P_2 \$ is the proportion of the program that can be improved from method #2 (i.e. vectorization),
• $S_2 \$ is the speed-up applied for improvement #2,
• $P_v \$ is the proportion of the program where method #1 and #2 overlap,
• $S_{total} \$ is the total resulting speed-up.

Note that the range over which $P_v$ is valid is necessarily constrained to the interval $[\max(0, P_1+P_2-1),\ \min(P_1, P_2)]$.

It is also interesting to note that 4 processors with a 4-wide vector will out-perform 16 processors when $P_1 = P_2$ and there is any non-overlapping parallelism at all ($P_v < P_1$). If there is only overlapping parallelism, then the speedup is the same as 16 processors.
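A quick numerical check of the formula and the 4-processors-with-4-wide-vectors claim (the values $P_1 = P_2 = 0.9$ and the two choices of $P_v$ are illustrative, picked from the valid interval $[0.8, 0.9]$):

```python
def speedup_2d(P1, S1, P2, S2, Pv):
    """Two-method speedup: only-method-1, only-method-2, overlap, and serial parts."""
    serial = 1.0 - P1 - P2 + Pv  # fraction improved by neither method
    return 1.0 / ((P1 - Pv) / S1 + (P2 - Pv) / S2 + Pv / (S1 * S2) + serial)

def amdahl(P, S):
    """Plain Amdahl's law for a single method."""
    return 1.0 / ((1.0 - P) + P / S)

P1 = P2 = 0.9
# Pv must lie in [max(0, P1 + P2 - 1), min(P1, P2)] = [0.8, 0.9].
print(speedup_2d(P1, 4, P2, 4, 0.8))  # 4 cores x 4-wide vectors, partial overlap
print(amdahl(P1, 16))                 # 16 cores applied to the same fraction
print(speedup_2d(P1, 4, P2, 4, 0.9))  # full overlap: matches the 16-core case
```

With partial overlap the 4x4 configuration beats 16 processors; with full overlap the two are identical, as the claim above states.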

## SVG image

I made an SVG version of the graph at the top, but it probably has some flaws. It's at Image:AmdahlsLaw.svg—anyone want to improve and put it in? Daniels220 (talk) 19:10, 13 April 2008 (UTC)

## My Dears!

The whole Amdahl's argument is a dead triviality. I had never heard of it before. But all of you were totally successful in creating a fairly complicated article about it.

Please, first do understand the argument, second write an article about it.

The formulae and the fog of complications in this article are useful to demonstrate to the mathematically less educated readers that we are much more clever. But in fact, we are not.

prohlep (talk) 18:04, 10 August 2008 (UTC)

A very shallow googling indicates that Amdahl's argument evokes discussion on the web. So the article is probably notable, although a shallow googling without reading samples from the results is not argument enough. A very shallow reading of this article on Amdahl's argument, on the other hand, indicates that the argument is as dead as last month's fish delivery, as you allege. In mathematics, an article about a statement that is later disproved in the text is never called a "law". But 1. this is computer science, 2. as long as people keep arguing against Amdahl's "law" it is notable and deserves an article on Wikipedia, 3. in accordance with that, the article could safely state that the "law" is not valid. However, we may consider moving the article to "Amdahl's argument", which is less of a misnomer. ... said: Rursus (bork²) 09:54, 22 January 2009 (UTC)
It appears in the book "Neural Networks: A Systematic Introduction" by Raul Rojas. thedoctar (talk) 09:08, 1 July 2014 (UTC)

## Limitations and Superlinear speedup

Amdahl's law is well known in computer architecture, and I think this wikipedia entry is useful. However, I don't think that the Limitations section is correct. I searched for information about superlinear speedup, and from what I could see it is limited to cases where the sequential solution for the problem was flawed, and a more efficient parallel algorithm was developed. However, it does not imply that it is possible to achieve greater than linear speedup. The notes about differences in cache structure also do not directly contradict the argument that Amdahl made. Since there are no citations in the limitations section I am removing it because I do not believe it is correct. If someone thinks I am mistaken, please include citations so that readers will have some basis for evaluating the validity of the claim. Wikiuser1239 (talk) 21:21, 16 March 2009 (UTC)


I think Amdahl's law can be misleading. It can only be applied when you look at a single isolated program that always does the same thing. You can't say that it is unnecessary to develop CPUs with more than 32 cores because beyond that you don't get enough improvement. Currently there are 52 processes running on my system, which means I can only let them all run at exactly the same time if I have at least 52 cores. You also have to face the fact that in the future we will want more and more eye candy, speech recognition, ... All of that results in even more processes, which can almost all be parallelized. Also, things like servers will always get better performance if they have more cores, given that bandwidth is not an issue, because the server usually creates a new thread for every request. Kurr0Zaki (talk) 09:17, 26 October 2009 (UTC)

Your observation is pretty much irrelevant to Amdahl's law, which is extremely simple and not really up for debate. Amdahl's law doesn't say anything remotely close to "it is unnecessary to develop cpu's with only 32 core because beyond that you don't get enough improvement". 129.108.234.59 (talk) 23:38, 18 October 2010 (UTC)

## Suggestion for a simpler/more general introduction

Hi, may I suggest a simpler version of the introduction, which explains the significance of the law and also sets it in a more general context:

Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and defines a theoretical maximum for the ability of systems to scale up. "Scaling up" refers to the ability to do more work by adding more resources. The critical parameter in Amdahl's law is the percentage of work which cannot be parallelized - in other words, work which must be performed by one central component and cannot be divided between resources. The basic finding of the law is that as long as there is a non-zero percentage of work which cannot be parallelized, the system will not be able to scale beyond a certain theoretical maximum. The law also shows how to compute the theoretical maximum scale depending on the percentage, and some other parameters.

Amdahl's law is most often applied to parallel computing. In this context, the speedup of a program using multiple processors is limited by the time needed for the "sequential fraction" of the program to run - this is the percentage of work which cannot be parallelized. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×, as the diagram illustrates.
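The 20-hour example can be sketched numerically (the processor counts below are just illustrative):

```python
def wall_time(n_procs, serial_hours=1.0, parallel_hours=19.0):
    """Execution time of the 20-hour job: 1 hour serial, 19 hours parallelizable."""
    return serial_hours + parallel_hours / n_procs

base = wall_time(1)  # 20 hours on a single core
for n in (1, 2, 16, 1024, 1_000_000):
    print(n, wall_time(n), base / wall_time(n))
# The serial hour never shrinks, so the speedup stays below 20x
# no matter how many processors are added.
```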

Anne.naimoli (talk) 12:27, 13 September 2011 (UTC)

The application of this would tie well to the concepts of the Critical Path Method for project management. Tracing the critical (longest) path identifies the minimum possible time of execution. Please note a link to http://en.wikipedia.org/wiki/Critical_path_method — Preceding unsigned comment added by JimDelRossi (talkcontribs) 22:17, 23 January 2012 (UTC)