Coppersmith–Winograd algorithm: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 03:23, 12 June 2009

In the mathematical discipline of linear algebra, the Coppersmith–Winograd algorithm, named after Don Coppersmith and Shmuel Winograd, is the asymptotically fastest known algorithm for square matrix multiplication as of 2008. It can multiply two n × n matrices in O(n^2.376) time (see Big O notation). This is an improvement over the trivial O(n^3) algorithm and the O(n^2.807) Strassen algorithm. It might be possible to improve the exponent further; however, the exponent must be at least 2, because an n × n matrix has n^2 entries, all of which must be read at least once to compute the exact result.
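For comparison, the trivial O(n^3) algorithm mentioned above can be sketched as follows (a minimal illustration, not the Coppersmith–Winograd algorithm itself; the function name `naive_matmul` is chosen here for illustration):

```python
def naive_matmul(A, B):
    """Multiply two n x n matrices the trivial way: three nested loops.

    There are n^2 output entries and each requires n multiply-adds,
    giving the O(n^3) running time discussed above.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Strassen's algorithm and Coppersmith–Winograd improve on this by reducing the number of scalar multiplications needed for block products, not by reading fewer entries; every entry must still be read, which is why the exponent cannot fall below 2.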

The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005).

Henry Cohn, Robert Kleinberg, Balázs Szegedy and Christopher Umans have rederived the Coppersmith–Winograd algorithm using a group-theoretic construction. They also show that either of two different conjectures would imply that the exponent of matrix multiplication is 2, as has long been suspected.

References

  • Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Christopher Umans. Group-theoretic Algorithms for Matrix Multiplication. arXiv:math.GR/0511460. Proceedings of the 46th Annual Symposium on Foundations of Computer Science, 23–25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379–388.
  • Don Coppersmith and Shmuel Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation, 9:251–280, 1990.
  • Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication" (PDF), SIAM News, 38 (9).