Galactic algorithm

From Wikipedia, the free encyclopedia

A galactic algorithm is one that outperforms any other algorithm for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan,[1] as they will never be used on any of the merely terrestrial data sets we find here on Earth.

An example of a galactic algorithm is the fastest known way to multiply two numbers,[2] which is based on a 1729-dimensional Fourier transform.[3] It does not reach its stated efficiency until the numbers have at least 2^(1729^12) bits (at least 10^(10^38) digits), which is vastly larger than the number of atoms in the known universe. So this algorithm is never used in practice.[4]
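The kind of asymptotic speedup at stake can be illustrated with a far simpler (and genuinely practical) fast-multiplication method: Karatsuba's algorithm, which replaces four recursive sub-multiplications with three. This is a minimal sketch for illustration only; it is not the Fourier-transform algorithm described above.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers using three recursive
    sub-multiplications instead of four (about O(n^1.585) digit ops)."""
    if x < 10 or y < 10:                      # base case: small operands
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)   # split x = hi_x*2^m + lo_x
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)   # split y = hi_y*2^m + lo_y
    a = karatsuba(hi_x, hi_y)                 # product of high parts
    b = karatsuba(lo_x, lo_y)                 # product of low parts
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)   # combined term
    # cross term hi_x*lo_y + lo_x*hi_y equals c - a - b
    return (a << (2 * m)) + ((c - a - b) << m) + b
```

Karatsuba's crossover point sits at a few machine words rather than beyond the size of the universe, which is exactly what separates a practical algorithm from a galactic one.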

Despite the fact that they will never be used, galactic algorithms may still contribute to computer science:

  • An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms.
  • Computer sizes may catch up to the crossover point, so that a previously impractical algorithm becomes practical.
  • An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong. As Lipton says "This alone could be important and often is a great reason for finding such algorithms. For example, if tomorrow there were a discovery that showed there is a factoring algorithm with a huge but provably polynomial time bound, that would change our beliefs about factoring. The algorithm might never be used, but would certainly shape the future research into factoring." Similarly, an algorithm for the Boolean satisfiability problem, although unusable in practice, would settle the P versus NP problem, the most important open problem in computer science and one of the Millennium Prize Problems.[5]

Examples

There are several well-known algorithms with world-beating asymptotic behavior, but only on impractically large problems:

  • Matrix multiplication: The first improvement over brute-force O(N^3) multiplication is the Strassen algorithm, a recursive procedure that runs in O(N^2.807). This algorithm is not galactic and is used in practice. Further extensions, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, delivering O(N^2.373). These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical."[6]
  • Claude Shannon showed a simple but impractical code that could reach the capacity of a communication channel. It requires assigning a random code word to every possible N-bit message, then decoding by finding the closest code word. If N is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel. Unfortunately, any N big enough to beat existing codes is also completely impractical.[7] These codes, though never used, inspired decades of research into more practical algorithms that today can achieve rates arbitrarily close to channel capacity.[8]
  • The problem of deciding whether a graph G contains H as a minor is NP-complete in general, but where H is fixed, it can be solved in polynomial time. The running time for testing whether H is a minor of G in this case is O(n^2),[9] where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2 ↑↑ (2 ↑↑ (2 ↑↑ (h/2))) (using Knuth's up-arrow notation), where h is the number of vertices in H.[10]
  • For cryptographers, a cryptographic "break" is anything faster than a brute-force attack – i.e., performing one trial decryption for each possible key. In many cases, even though such attacks are the best known methods, they are still infeasible with current technology. One example is the best attack known against 128-bit AES, which takes only 2^126 operations.[11] Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns.
  • For several decades, the best known approximation to the traveling salesman problem in a metric space was the very simple Christofides algorithm, which produced a path at most 50% longer than the optimum. (Many other algorithms could usually do much better, but could not guarantee success.) In 2020, a newer and much more complex algorithm was discovered that can beat this by 10^-34 percent.[12] Although no one will ever switch to this algorithm for any real problem, it is still considered important because "this minuscule improvement breaks through both a theoretical logjam and a psychological one".[13]
  • There exists a single algorithm known as "Hutter search" that can solve any well-defined problem in an asymptotically optimal time, barring some caveats. However, its hidden constants in the running time are so large it would never be practical for anything.[14][15]
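Strassen's recursion from the matrix-multiplication example above can be sketched directly: it trades the eight quadrant products of the naive recursion for seven, giving the O(N^2.807) bound. A minimal illustration for power-of-two sizes, not a tuned implementation:

```python
def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) using Strassen's
    seven recursive products instead of the naive eight."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split M into four h x h quadrants
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y): return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y): return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A)
    B11, B12, B21, B22 = quad(B)
    M1 = strassen(add(A11, A22), add(B11, B22))   # the seven products
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)           # reassemble quadrants
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

Strassen's saving of one multiplication per level is what the galactic Coppersmith–Winograd family pushes much further, at the cost of constants that make the asymptotic win unreachable.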
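Shannon's random-coding scheme from the channel-coding example above can be demonstrated on a toy binary symmetric channel. The block length, message count, and flip probability below are illustrative choices, not values from the source; note that decoding scans the entire codebook, which is the exponential cost that makes the scheme impractical at useful sizes.

```python
import random

def shannon_random_code_demo(k=4, n=16, p=0.05, trials=200, seed=1):
    """Toy Shannon random code: assign a random n-bit codeword to each
    of the 2^k messages, send it through a binary symmetric channel with
    flip probability p, and decode to the nearest codeword by Hamming
    distance. Returns the observed block error rate."""
    rng = random.Random(seed)
    codebook = [[rng.randint(0, 1) for _ in range(n)] for _ in range(2 ** k)]
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    errors = 0
    for _ in range(trials):
        msg = rng.randrange(2 ** k)
        # each transmitted bit flips independently with probability p
        received = [bit ^ (rng.random() < p) for bit in codebook[msg]]
        # brute-force minimum-distance decoding over all 2^k codewords
        decoded = min(range(2 ** k), key=lambda m: hamming(codebook[m], received))
        errors += (decoded != msg)
    return errors / trials
```

Increasing n (at fixed rate k/n below capacity) drives the error rate toward zero, but the codebook and the decoding loop both grow exponentially, mirroring Shannon's original impracticality.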
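The superexponential constant in the graph-minor example can be put in perspective with a small evaluator for Knuth's up-arrow notation; even tiny arguments already exceed any physical quantity.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b: n=1 is ordinary exponentiation, and
    each additional arrow iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    # a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))
```

For instance 2 ↑↑ 4 = 2^(2^(2^2)) = 65536, while 2 ↑↑ 5 = 2^65536 already has nearly 20,000 decimal digits, so a constant built from nested double arrows dwarfs the number of atoms in the universe.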

References

  1. ^ Lipton, Richard J.; Regan, Kenneth W. (2013). "David Johnson: Galactic Algorithms". People, Problems, and Proofs. Heidelberg: Springer Berlin. pp. 109–112.
  2. ^ Harvey, David; van der Hoeven, Joris (March 2019). "Integer multiplication in time O(n log n)". HAL. hal-02070778.
  3. ^ David Harvey (April 2019). "We've found a quicker way to multiply really big numbers". Phys.org.
  4. ^ "We've found a quicker way to multiply really big numbers". Quote, from one of the authors of the algorithm: "The new algorithm is not really practical in its current form, because the proof given in our paper only works for ludicrously large numbers. Even if each digit was written on a hydrogen atom, there would not be nearly enough room available in the observable universe to write them down."
  5. ^ Fortnow, L. (2009). "The status of the P versus NP problem" (PDF). Communications of the ACM. 52 (9): 78–86. doi:10.1145/1562164.1562186.
  6. ^ Le Gall, F. (2012), "Faster algorithms for rectangular matrix multiplication", Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2012), pp. 514–523, arXiv:1204.1111, doi:10.1109/FOCS.2012.80
  7. ^ Larry Hardesty (January 19, 2010). "Explained: The Shannon limit". MIT News Office.
  8. ^ "Capacity-Approaching Codes" (PDF).
  9. ^ Kawarabayashi, K. I.; Kobayashi, Y.; Reed, B. (2012). "The disjoint paths problem in quadratic time". Journal of Combinatorial Theory, Series B. 102 (2): 424–435. doi:10.1016/j.jctb.2011.07.004.
  10. ^ Johnson, David S. (1987). "The NP-completeness column: An ongoing guide (edition 19)". Journal of Algorithms. 8 (2): 285–303. CiteSeerX 10.1.1.114.3864. doi:10.1016/0196-6774(87)90043-5.
  11. ^ Biaoshuai Tao & Hongjun Wu (2015). Information Security and Privacy. Lecture Notes in Computer Science. 9144. pp. 39–56. doi:10.1007/978-3-319-19962-7_3. ISBN 978-3-319-19961-0.
  12. ^ Anna R. Karlin; Nathan Klein; Shayan Oveis Gharan (September 1, 2020). "A (Slightly) Improved Approximation Algorithm for Metric TSP". arXiv:2007.01409 [cs.DS].
  13. ^ Erica Klarreich (October 8, 2020). "Computer Scientists Break Traveling Salesperson Record".
  14. ^ Hutter, Marcus (2002-06-14). "The Fastest and Shortest Algorithm for All Well-Defined Problems". arXiv:cs/0206022.
  15. ^ Gagliolo, Matteo (2007-11-20). "Universal search". Scholarpedia. 2 (11): 2575. Bibcode:2007SchpJ...2.2575G. doi:10.4249/scholarpedia.2575. ISSN 1941-6016.