Speedup

From Wikipedia, the free encyclopedia

In parallel computing, '''speedup''' refers to how much a parallel algorithm is faster than a corresponding sequential algorithm.

It is defined by the following formula:

<math>S_p = \frac{T_1}{T_p}</math>

where:

* ''p'' is the number of processors
* <math>T_1</math> is the execution time of the sequential algorithm
* <math>T_p</math> is the execution time of the [[parallel algorithm]] with ''p'' [[central processing unit|processor]]s
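
For example, with illustrative timings (assumed here for the arithmetic, not taken from any benchmark): if the sequential algorithm takes <math>T_1 = 80</math> seconds and the parallel algorithm takes <math>T_4 = 20</math> seconds on 4 processors, the speedup is <math>S_4 = \frac{80}{20} = 4</math>.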

'''Linear speedup''' or '''ideal speedup''' is obtained when <math>\,S_p = p</math>. When running an algorithm with linear speedup, doubling the number of processors doubles the speed. As this is ideal, it is considered very good [[scalability]].

'''Efficiency''' is a performance metric defined as <math>E_p = \frac{S_p}{p}</math>. It is a value, typically between zero and one, estimating how well-utilized the processors are in solving the problem, compared to how much effort is wasted in communication and synchronization. Algorithms with linear speedup and algorithms running on a single processor have an efficiency of 1, while many difficult-to-parallelize algorithms have an efficiency such as <math>\frac{1}{\log p}</math> that approaches zero as the number of processors increases.
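
These two formulas translate directly into code. The following is a minimal Python sketch (the function names and timing values are illustrative assumptions, not part of any standard library):

 def speedup(t1, tp):
     # S_p = T_1 / T_p: sequential time divided by parallel time.
     return t1 / tp
 
 def efficiency(t1, tp, p):
     # E_p = S_p / p: speedup per processor, typically between 0 and 1.
     return speedup(t1, tp) / p
 
 # Illustrative timings: 80 s sequentially, 25 s on 4 processors.
 print(speedup(80.0, 25.0))        # 3.2
 print(efficiency(80.0, 25.0, 4))  # 0.8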

'''Super linear speedup''': Sometimes a speedup of more than ''N'' when using ''N'' processors is observed in parallel computing; this is called super linear speedup. Super linear speedup rarely happens and often confuses beginners, who believe the theoretical maximum speedup should be ''N'' when ''N'' processors are used. However, it can be explained by the cache effect that arises from the different memory hierarchies of a modern computer: in parallel computing, not only does the number of processors change, but so does the size of the accumulated caches from the different processors. With the larger accumulated cache size, more of the data set, or even all of it, can fit into caches, and the memory access time drops dramatically, which produces extra speedup beyond that gained from the computation alone.
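
As a numeric illustration (all sizes assumed purely for the sake of the arithmetic): suppose a problem's working set is 8 MB and each processor has a 4 MB cache. On one processor the data cannot fit in cache, so many accesses go to main memory. On four processors the accumulated cache is 16 MB, and each processor's 2 MB share of the data fits entirely in its own cache, so the average memory access time drops sharply; the measured speedup can then exceed 4.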

See also