Pollack's Rule

From Wikipedia, the free encyclopedia

Pollack's Rule states that microprocessor "performance increase due to microarchitecture advances is roughly proportional to [the] square root of [the] increase in complexity". This contrasts with power consumption increase, which is roughly linearly proportional to the increase in complexity. Complexity in this context means processor logic, i.e. its area.
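The rule can be stated numerically: performance scales as the square root of logic area, while power scales roughly linearly with it. A minimal sketch (function names are illustrative, not from any standard library):

```python
import math

def pollack_performance(area_ratio: float) -> float:
    """Relative single-core performance for a given increase in logic
    area (complexity), per Pollack's Rule: performance ~ sqrt(area)."""
    return math.sqrt(area_ratio)

def power(area_ratio: float) -> float:
    """Power consumption grows roughly linearly with complexity."""
    return area_ratio

# Doubling a core's logic area yields only ~1.41x the performance,
# while roughly doubling its power consumption.
print(round(pollack_performance(2.0), 2))  # 1.41
print(power(2.0))                          # 2.0
```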

The rule, which is an industry term, is named for Fred Pollack, a lead engineer and fellow at Intel.

Pollack's Rule gained increasing relevance in 2008 due to the broad adoption of multi-core computing and growing concern among businesses and individuals about the large electricity demands of computers.

A generous interpretation of the rule allows for the case in which an ideal device could contain hundreds of low-complexity cores, each operating at very low power and together performing large amounts of (processing) work quickly. This describes a massively parallel processor array (MPPA), which is currently being used in embedded systems and hardware accelerators.
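The appeal of this interpretation follows directly from the square-root law: for perfectly parallel work, many simple cores outperform one complex core of the same total area. A sketch under that idealizing assumption (perfect parallel scaling, which real workloads rarely achieve):

```python
import math

def single_core_perf(area: float) -> float:
    # Pollack's Rule: single-core performance ~ sqrt(logic area)
    return math.sqrt(area)

def many_small_cores_perf(total_area: float, n_cores: int) -> float:
    # n cores, each of area total_area / n; aggregate throughput
    # scales with n only for perfectly parallel work (an assumption).
    return n_cores * single_core_perf(total_area / n_cores)

area = 100.0
big = single_core_perf(area)              # one complex core
small = many_small_cores_perf(area, 100)  # 100 simple cores
# For the same total area, 100 simple cores deliver sqrt(100) = 10x
# the aggregate throughput of the single complex core.
print(round(small / big, 1))  # 10.0
```

In general, splitting a fixed area among n cores gives an aggregate advantage of sqrt(n) over one large core, which is why the ideal device described above favors many low-complexity cores.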

Implications of the rule on chip performance

According to Moore's law, each new technology generation doubles the number of transistors on a chip, which by itself increases performance by about 40%. Pollack's Rule implies that the accompanying doubling of complexity allows microarchitecture advances to improve performance by another roughly 40% (the square root of the two-fold complexity increase). The overall performance increase is therefore roughly two-fold, while the power consumption stays the same. In practice, however, implementing a new microarchitecture in every generation is difficult, so the microarchitectural gains are typically smaller.[1]
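The per-generation arithmetic above can be checked directly (the factors are the approximate figures stated in the text, not exact constants):

```python
# Approximate per-generation gains described above:
transistor_speedup = 1.4        # ~40% from process scaling (Moore's law)
microarch_speedup = 2 ** 0.5    # Pollack: sqrt of the 2x complexity increase

total = transistor_speedup * microarch_speedup
print(round(total, 2))  # 1.98, i.e. roughly two-fold per generation
```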

External sources and links

  1. ^ Shekhar Borkar, Andrew A. Chien (May 2011). "The Future of Microprocessors". Communications of the ACM 54 (5).