Blum's speedup theorem
Each computable function has infinitely many different program representations in a given programming language. In the theory of algorithms one often strives to find a program with the smallest complexity for a given computable function and a given complexity measure (such a program could be called optimal). Blum's speedup theorem shows that for any complexity measure there are computable functions that have no optimal program with respect to that measure. This also rules out the idea that there is a way to assign to arbitrary functions their computational complexity, meaning an assignment to any f of the complexity of an optimal program for f. This does not, of course, exclude the possibility of finding the complexity of an optimal program for certain specific functions.
Given a Blum complexity measure (φ, Φ) and a total computable function f with two parameters, there exists a total computable predicate g (a boolean-valued computable function) such that for every program i for g, there exists a program j for g so that for almost all x

f(x, Φ_j(x)) ≤ Φ_i(x).
f is called the speedup function. The fact that it may be chosen to grow as fast as desired (as long as it is computable) means that the phenomenon of always having a program of smaller complexity persists even if by "smaller" we mean "significantly smaller" (for instance, quadratically smaller or exponentially smaller).
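To make the inequality concrete, the following Python sketch checks the speedup condition for a pair of hypothetical complexity bounds. All names here are illustrative assumptions: the theorem quantifies over actual programs and their step-counting measures, while this sketch simply plugs closed-form stand-ins for Φ_i and Φ_j into the inequality, and replaces "for almost all x" with a finite tail of inputs.

```python
def speedup_holds_on_tail(f, phi_i, phi_j, horizon=1000, tail=100):
    """Check f(x, phi_j(x)) <= phi_i(x) on the last `tail` inputs
    below `horizon` -- a finite stand-in for "for almost all x"."""
    return all(f(x, phi_j(x)) <= phi_i(x)
               for x in range(horizon - tail, horizon))

# Exponential speedup function: f(x, y) = 2**y. The condition then says
# program j is at least exponentially faster than program i.
f = lambda x, y: 2 ** y

# Hypothetical complexity bounds for two programs computing the same g:
phi_i = lambda x: 4 ** x   # the "slow" program's step count
phi_j = lambda x: x        # the "fast" program's step count

print(speedup_holds_on_tail(f, phi_i, phi_j))  # True: 2**x <= 4**x
```

Note the asymmetry of the condition: with these stand-ins, swapping a merely linearly better bound into the fast slot (e.g. phi_j(x) = x against phi_i(x) = x + 1) fails the exponential test, since 2**x quickly exceeds x + 1.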
References

- Blum, Manuel (1967). "A Machine-Independent Theory of the Complexity of Recursive Functions". Journal of the ACM 14 (2): 322–336. doi:10.1145/321386.321395.
- van Emde Boas, Peter (1975). "Ten Years of Speedup". Proceedings of MFCS (Jiří Bečvář, ed.), Lecture Notes in Computer Science, vol. 32, Springer, pp. 13–29.