From Wikipedia, the free encyclopedia

In parallel computing, speedup refers to how much faster a parallel algorithm runs than a corresponding sequential algorithm.


Speedup is defined by the following formula:

S_p = \frac{T_1}{T_p}

where p is the number of processors, T_1 is the execution time of the sequential algorithm, and T_p is the execution time of the parallel algorithm using p processors.
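As a concrete illustration, the formula translates directly into code; the timings below are hypothetical, not taken from any measurement in the article:

```python
def speedup(t_serial, t_parallel):
    """Speedup S_p = T_1 / T_p for one parallel run."""
    return t_serial / t_parallel

# Hypothetical timings: 120 s sequentially, 16 s on 8 processors.
print(speedup(120.0, 16.0))  # 7.5, i.e. slightly below the linear speedup of 8
```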


Linear speedup or ideal speedup is obtained when S_p = p. When running an algorithm with linear speedup, doubling the number of processors doubles the speed; since this is the ideal case, it is considered very good scalability.

Efficiency is a performance metric defined as

E_p = \frac{S_p}{p} = \frac{T_1}{pT_p}.

It is a value, typically between zero and one, estimating how well-utilized the processors are in solving the problem, compared to how much effort is wasted in communication and synchronization. Algorithms with linear speedup, and algorithms running on a single processor, have an efficiency of 1, while many difficult-to-parallelize algorithms have an efficiency such as \frac{1}{\ln p}[citation needed] that approaches zero as the number of processors increases.
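Efficiency can likewise be computed from measured times; the figures here are hypothetical and chosen only to illustrate the definition:

```python
def efficiency(t_serial, t_parallel, p):
    """Efficiency E_p = S_p / p = T_1 / (p * T_p)."""
    return t_serial / (p * t_parallel)

# Hypothetical timings: 120 s sequentially, 16 s on p = 8 processors.
print(efficiency(120.0, 16.0, 8))  # 0.9375, i.e. the processors are ~94% utilized
```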

In engineering contexts, efficiency is more often used for graphs than speedup, since

  • all of the area in the graph is useful (whereas in a speedup curve half of the space is wasted)
  • it is easy to see how well parallelization is working
  • there is no need to plot a "perfect speedup" line

In marketing contexts, speedup curves are more often used, largely because they go up and to the right and thus appear better to the less-informed.

Super linear speedup

Sometimes a speedup of more than p when using p processors is observed in parallel computing, which is called super linear speedup. Super linear speedup rarely happens and often confuses beginners, who believe the theoretical maximum speedup should be p when p processors are used.

One possible reason for a super linear speedup is the cache effect resulting from the different memory hierarchies of a modern computer: in parallel computing, not only does the number of processors change, but so does the size of the accumulated cache from the different processors. With the larger accumulated cache size, more of the working set, or even all of it, can fit into caches, and memory access time drops dramatically, which causes extra speedup in addition to that from the actual computation.[1]

An analogous situation occurs when searching large datasets, such as the genomic data searched by BLAST implementations. There the accumulated RAM from each of the nodes in a cluster enables the dataset to move from disk into RAM, thereby drastically reducing the time required by, for example, mpiBLAST to search it.

Super linear speedups can also occur when performing backtracking in parallel: An exception in one thread can cause several other threads to backtrack early, before they reach the exception themselves.[citation needed]
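A minimal sketch of this early-termination effect, using toy data and illustrative names not taken from the article: one worker's discovery signals the others to cut their searches short, so the combined work done by p workers can be less than 1/p of the sequential work.

```python
import threading

def search(part, stop, results):
    """Scan one partition, abandoning the search once any worker has succeeded."""
    for candidate in part:
        if stop.is_set():
            return  # another thread already found the answer; backtrack early
        if candidate == "target":
            results.append(candidate)
            stop.set()  # tell the other threads to stop searching

stop = threading.Event()
results = []
parts = [["a", "b"], ["target", "c"], ["d", "e"]]  # toy partitions of the search space
threads = [threading.Thread(target=search, args=(p, stop, results)) for p in parts]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # ['target']
```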

References

  1. ^ John Benzi; M. Damodaran (2007). "Parallel Three Dimensional Direct Simulation Monte Carlo for Simulating Micro Flows". Parallel Computational Fluid Dynamics 2007: Implementations and Experiences on Large Scale and Grid Computing. Parallel Computational Fluid Dynamics. Springer. p. 95. Retrieved 2013-03-21.
