Universality probability
Background
A Turing machine is a basic model of computation. Some Turing machines perform specific calculations: for example, one Turing machine might take two numbers as input and produce their product as output, while another might take a list of numbers as input and output those numbers sorted in order.
A Turing machine which can simulate any other Turing machine is called universal. In other words, a Turing machine (TM) is said to be a universal Turing machine (UTM) if, given any other TM, there is some input (or "header") such that the first TM, once given that header, will forever after behave like the second TM.
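Stated a little more formally (a sketch of the standard formulation rather than a statement taken from the sources cited below): a Turing machine U is universal if for every Turing machine M there exists a header h such that

    U(h \cdot s) \simeq M(s) for every input string s,

where \simeq means that either both sides are undefined or both are defined and equal.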
An interesting mathematical and philosophical question then arises: if a universal Turing machine is given random input (for a suitable definition of random), how probable is it that it remains universal forever?
Definition
Given a prefix-free Turing machine, its universality probability is the probability that it remains universal even when every input (as a binary string) is prefixed by a random binary string. More formally, it is the probability measure of the set of reals (infinite binary sequences) with the property that every initial segment preserves the universality of the given Turing machine. This notion was introduced by the computer scientist Chris Wallace and was first explicitly discussed in print in an article by Dowe[1] (and a subsequent article[2]). However, relevant discussions also appear in an earlier article by Wallace and Dowe.[3]
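A hedged sketch of this definition in symbols (using standard notation from algorithmic information theory, not taken verbatim from the cited papers): for a prefix-free machine U and a finite binary string \sigma, write U_\sigma for the machine \rho \mapsto U(\sigma\rho). The universality probability of U is then

    P(U) = \mu\{ X \in 2^\omega : U_{X \upharpoonright n} \text{ is universal for every } n \},

where \mu is the uniform (Lebesgue) measure on the space of infinite binary sequences and X \upharpoonright n denotes the first n bits of X.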
Universality probabilities of prefix-free UTMs are non-zero
Although the universality probability of a universal Turing machine (UTM) was originally suspected to be zero, relatively simple proofs exist that the supremum of the set of universality probabilities is equal to 1, such as a proof based on random walks[4] and a proof in Barmpalias and Dowe (2012). Once one prefix-free UTM with non-zero universality probability is exhibited, it follows immediately that every prefix-free UTM has non-zero universality probability. Further, because the supremum of the set of universality probabilities is 1 and because the set of dyadic rationals { m/2^n | n > 0 and 0 < m < 2^n } is dense in the interval [0, 1], suitable constructions of UTMs (e.g., if U is a UTM, define a UTM U_2 by letting U_2(0s) halt for all strings s and setting U_2(1s) = U(s) for all s) give that the set of universality probabilities is dense in the open interval (0, 1).
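As an illustrative measure computation for the construction just given (a sketch, assuming that a machine which halts on every input is not universal): any infinite sequence beginning with 0 turns U_2 into such a trivially halting machine, while a sequence of the form 1Y reduces to the behaviour of U on Y, so

    P(U_2) = \tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot P(U) = P(U)/2.

Analogous constructions can rescale P(U) by other dyadic factors m/2^n; together with UTMs whose universality probabilities are arbitrarily close to 1, this is the kind of argument behind the density claim above.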
Characterization and randomness of universality probability
Universality probability was thoroughly studied and characterized by Barmpalias and Dowe in 2012.[5] Seen as real numbers, these probabilities were completely characterized in terms of notions from computability theory and algorithmic information theory. It was shown that when the underlying machine is universal, these numbers are highly algorithmically random; more specifically, each is Martin-Löf random relative to the third iteration of the halting problem. In other words, they are random relative to null sets that can be defined with four quantifiers in Peano arithmetic. Conversely, given such a highly random number (with appropriate approximation properties), there is a Turing machine whose universality probability is that number.
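In symbols, a hedged paraphrase of this characterization: if U is a universal prefix-free machine, then

    P(U) is Martin-Löf random relative to \emptyset''',

where \emptyset''' denotes the third iteration of the halting problem (the third Turing jump of the empty set); in the usual terminology of algorithmic randomness, such reals are called 4-random.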
Relation with Chaitin's constant
Universality probabilities are closely related to Chaitin's constant, which is the halting probability of a universal prefix-free machine. In a sense, they are complementary to the halting probabilities of universal machines relative to the third iteration of the halting problem. In particular, the universality probability can be seen as the non-halting probability of a machine with the third iteration of the halting problem as an oracle. Conversely, the non-halting probability of any prefix-free machine with this highly non-computable oracle is the universality probability of some prefix-free machine.
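A hedged symbolic reading of this relationship: for a prefix-free machine V, Chaitin's halting probability is

    \Omega_V = \sum_{V(\sigma)\downarrow} 2^{-|\sigma|},

and the statement above says that the universality probability of a prefix-free machine equals the non-halting probability 1 - \Omega_V^{\emptyset'''} of some prefix-free machine V equipped with the third iteration of the halting problem as an oracle, and conversely.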
Probabilities of machines as examples of highly random numbers
Universality probability provides a concrete and somewhat natural example of a highly random number (in the sense of algorithmic information theory). In the same sense, Chaitin's constant provides a concrete example of a random number (but for a much weaker notion of algorithmic randomness).
See also
- Algorithmic probability
- History of randomness
- Incompleteness theorem
- Inductive inference
- Kolmogorov complexity
- Minimum message length
- Solomonoff's theory of inductive inference
References
- ^ Dowe, D.L. (5 September 2008). "Foreword re C. S. Wallace". Computer Journal. 51 (5): 523–560. doi:10.1093/comjnl/bxm117.
- ^ Dowe, D. L. (2011), "MML, hybrid Bayesian network graphical models, statistical consistency, invariance and uniqueness", Handbook of the Philosophy of Science (HPS Volume 7): Philosophy of Statistics, P.S. Bandyopadhyay and M.R. Forster (eds.), Elsevier, pp. 901–982.
- ^ Wallace, C. S. & Dowe, D. L. 1999 Minimum message length and Kolmogorov complexity Computer J. 42, 270–283
- ^ Hernandez-Orallo, J. & Dowe, D. L. (2013), "On Potential Cognitive Abilities in the Machine Kingdom", Minds and Machines, Vol. 23, Issue 2, pp. 179–210.
- ^ Barmpalias, G. and Dowe D.L. (2012). "Universality probability of a prefix-free machine". Philosophical Transactions of the Royal Society A. 370 (1): 3488–3511. Bibcode:2012RSPTA.370.3488B. CiteSeerX 10.1.1.221.6000. doi:10.1098/rsta.2011.0319. PMID 22711870.
External links
- Barmpalias, G. and Dowe D.L. (2012). "Universality probability of a prefix-free machine". Philosophical Transactions of the Royal Society A. 370 (1): 3488–3511 (Theme Issue 'The foundations of computation, physics and mentality: the Turing legacy' compiled and edited by Barry Cooper and Samson Abramsky). Bibcode:2012RSPTA.370.3488B. CiteSeerX 10.1.1.221.6000. doi:10.1098/rsta.2011.0319. PMID 22711870.
- Barmpalias, G. (June 2015), slides from the talk "Randomness, probabilities and machines" at the Tenth International Conference on Computability, Complexity and Randomness (CCR 2015), 22–26 June 2015, Heidelberg, Germany.
- Cristian S. Calude, Michael J. Dinneen, and Chi-Kou Shu. Computing a Glimpse of Randomness.
Further reading
- Ming Li and Paul Vitányi (1997). An Introduction to Kolmogorov Complexity and Its Applications. Springer. Introduction chapter full-text.
- Cristian S. Calude (2002). Information and Randomness: An Algorithmic Perspective, second edition. Springer. ISBN 3-540-43466-6
- R. Downey and D. Hirschfeldt (2010), Algorithmic Randomness and Complexity, Springer-Verlag.