# Parallel computation thesis

In computational complexity theory, the parallel computation thesis is a hypothesis stating that the time used by a (reasonable) parallel machine is polynomially related to the space used by a sequential machine. It was set forth by Chandra and Stockmeyer in 1976.[1]

In other words, for a computational model that allows computations to branch and run in parallel without bound, a formal language that is decidable under the model using no more than $t(n)$ steps for inputs of length $n$ is decidable by a non-branching machine using no more than $t(n)^k$ units of storage for some constant $k$. Conversely, if a machine in the non-branching model decides a language using no more than $s(n)$ storage, a machine in the parallel model can decide the language in no more than $s(n)^k$ steps for some constant $k$.
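
As a concrete, provable instance of both directions, take alternation as the parallel mechanism: the results of Chandra, Kozen, and Stockmeyer[5] show that alternating time and deterministic space are polynomially related. For $t(n) \ge n$,

$$
\mathrm{ATIME}\big(t(n)\big) \;\subseteq\; \mathrm{DSPACE}\big(t(n)\big) \;\subseteq\; \mathrm{ATIME}\big(t(n)^{2}\big),
$$

and in particular alternating polynomial time coincides with polynomial space, $\mathrm{AP} = \mathrm{PSPACE}$.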

The parallel computation thesis is not a rigorous formal statement, as it does not clearly define what constitutes an acceptable parallel model. A parallel machine must be sufficiently powerful to emulate the sequential machine in time polynomially related to the sequential space; compare, for example, the deterministic, non-deterministic, and alternating Turing machine models. N. Blum (1983) introduced a model for which the thesis does not hold; however, that model allows $2^{2^{O(T(n))}}$ parallel threads of computation after $T(n)$ steps.[2] In defense of the thesis, Parberry (1986) suggested that a more "reasonable" bound would be $2^{O(T(n))}$ or $2^{T(n)^{O(1)}}$.[3] Goldschlager (1982) proposed a model sufficiently universal to emulate all "reasonable" parallel models, and it adheres to the thesis.[4] Chandra and Stockmeyer originally formalized and proved results related to the thesis for deterministic and alternating Turing machines, the setting in which the thesis originated.[5]
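
A minimal counting sketch, assuming each active thread can spawn at most a constant number $c \ge 2$ of new threads per step (an assumption introduced here for illustration, not made explicit in the sources above), suggests why $2^{O(T(n))}$ is the natural cutoff. If $p(t)$ denotes the number of threads after $t$ steps, then

$$
p(t) \;\le\; c \cdot p(t-1), \qquad p(0) = 1 \;\implies\; p\big(T(n)\big) \;\le\; c^{T(n)} \;=\; 2^{O(T(n))}.
$$

Under this reading, the $2^{2^{O(T(n))}}$ threads in Blum's model cannot arise from stepwise forking alone, which is why such models are regarded as falling outside the thesis's intended scope.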

## References

1. ^ Chandra, Ashok K.; Stockmeyer, Larry J. (1976). "Alternation". FOCS'76: Proceedings of the 17th Annual Symposium on Foundations of Computer Science. pp. 98–108. doi:10.1109/SFCS.1976.4.
2. ^ Blum, Norbert (1983). "A note on the 'parallel computation thesis'". Information Processing Letters 17 (4): 203–205. doi:10.1016/0020-0190(83)90041-8.
3. ^ Parberry, I. (1986). "Parallel speedup of sequential machines: a defense of parallel computation thesis". ACM SIGACT News 18 (1): 54–67. doi:10.1145/8312.8317.
4. ^ Goldschlager, Leslie M. (1982). "A universal interconnection pattern for parallel computers". Journal of the ACM 29 (3): 1073–1086. doi:10.1145/322344.322353.
5. ^ Chandra, Ashok K.; Kozen, Dexter C.; Stockmeyer, Larry J. (1981). "Alternation". Journal of the ACM 28 (1): 114–133. doi:10.1145/322234.322243.