
Church–Turing thesis

From Wikipedia, the free encyclopedia

In computability theory, the Church–Turing thesis (also known as the computability thesis,[1] the Turing–Church thesis,[2] the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after the American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability:

  • In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections.
  • In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus.
  • Also in 1936, before learning of Church's work,[6] Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers.
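
As an illustration (a sketch, not part of the original formalisms), the Church-numeral encoding can be mimicked with ordinary Python lambdas: a numeral n is the higher-order function that applies its argument n times, and arithmetic is defined purely by function application.

```python
# Church numerals sketched as Python lambdas (illustrative only; Python
# stands in for the untyped λ-calculus).
zero = lambda f: lambda x: x                      # applies f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application of f

def church(k):
    """Encode an ordinary natural number k as a Church numeral."""
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

def unchurch(n):
    """Decode a Church numeral back to an ordinary int by counting."""
    return n(lambda i: i + 1)(0)

# Addition stacks m applications of f on top of n applications;
# multiplication composes the iterators.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: m(n(f))

assert unchurch(add(church(2))(church(3))) == 5
assert unchurch(mul(church(2))(church(3))) == 6
```

In these terms, a function on the natural numbers is λ-computable exactly when some λ-term carries the Church numerals of its inputs to the Church numeral of its output.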

Church,[7] Kleene,[8] and Turing[9][11] proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below).

On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined.

Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (the physical Church–Turing thesis) and what can be efficiently computed (the complexity-theoretic Church–Turing thesis). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below).

Statement in Church's and Turing's words


J. B. Rosser (1939) addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps".[12] Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result".[13][14]

In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same:

We shall use the expression "computable function" to mean a function calculable by a machine, and let "effectively calculable" refer to the intuitive idea without particular identification with any one of these definitions.[15]

The thesis can be stated as: Every effectively calculable function is a computable function.[16] Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine".[citation needed] Turing stated it this way:

It was stated ... that "a function is effectively calculable if its values can be found by some purely mechanical process". We may take this literally, understanding by a purely mechanical process one which could be carried out by a machine. The development ... leads to ... an identification of computability with effective calculability. [The footnote referred to is the one quoted above.][15]



History


One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann,[17] which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin.[18] But from the very outset Alonzo Church's attempts began with a debate that continues to this day.[19] Was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")?

Circa 1930–1952


In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable.[20] The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory".[21] Rather, in correspondence with Church (c. 1934–1935), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that:

His [Gödel's] only idea at the time was that it might be possible, in terms of effective calculability as an undefined notion, to state a set of axioms which would embody the generally accepted properties of this notion, and to do something on that basis.[22]

But Gödel offered no further guidance. Eventually he would suggest his own recursion, modified by Herbrand's suggestion, which he had detailed in his 1934 lectures in Princeton, NJ (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically".[23]

Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and "general" recursion, Kleene, with the help of Church and J. Barkley Rosser, produced proofs (1933, 1935) showing that the two calculi are equivalent. Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well-formed formula has a beta normal form.[24]

Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions".[25] By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system".[26]

A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print.[27] On the other hand, Emil Post's 1936 paper had appeared and was certified as independent of Turing's work.[28] Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating:

Actually the work already done by Church and others carries this identification considerably beyond the working hypothesis stage. But to mask this identification under a definition… blinds us to the need of its continual verification.[29]

Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom".[30] This idea was "sharply" criticized by Church.[31]

Thus Post in his 1936 paper was also discounting Kurt Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms.[22]

Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper "On Computable Numbers, with an Application to the Entscheidungsproblem"[27] appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). And in a proof-sketch added as an "Appendix" to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided.[32] Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately".[33]

In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of mechanical computing agent was the correct one.[34] Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability";[35] neither framed their statements as theses.

Rosser (1939) formally identified the three notions-as-definitions:

All three definitions are equivalent, so it does not matter which one is used.[36]

Kleene proposes Thesis I: This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "Thesis I":[37]

This heuristic fact [general recursive functions are effectively calculable] ... led Church to state the following thesis. The same thesis is implicit in Turing's description of computing machines.

Thesis I. Every effectively calculable function (effectively decidable predicate) is general recursive [Kleene's italics]

Since a precise mathematical definition of the term effectively calculable (effectively decidable) has been wanting, we can take this thesis ... as a definition of it ...

...the thesis has the character of an hypothesis—a point emphasized by Post and by Church. If we consider the thesis and its converse as definition, then the hypothesis is an hypothesis about the application of the mathematical theory developed from the definition. For the acceptance of the hypothesis, there are, as we have suggested, quite compelling grounds.

The Church–Turing Thesis: Stephen Kleene, in Introduction to Metamathematics, finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability. Kleene had switched from presenting his work in the terminology of Church–Kleene λ-definability to that of Gödel–Kleene recursiveness (partial recursive functions). In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the intuitionism of E. J. Brouwer. In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post. Both theses are proven equivalent by use of "Theorem XXX".

Thesis I. Every effectively calculable function (effectively decidable predicate) is general recursive.[38]

Theorem XXX: The following classes of partial functions are coextensive, i.e. have the same members: (a) the partial recursive functions, (b) the computable functions ...[39]

Turing's thesis: Turing's thesis that every function which would naturally be regarded as computable is computable under his definition, i.e. by one of his machines, is equivalent to Church's thesis by Theorem XXX.[39]

Kleene finally uses the term "Church–Turing thesis" for the first time in a section in which he helps to clarify concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone.[40]

Later developments


An attempt to understand the notion of "effective computability" better led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's Game of Life), parallelism, and crystalline automata led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy".[41] His most important fourth principle, "the principle of causality", is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance".[42] From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable."[43]

In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework".[44] In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to:

  • "(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize.
  • "(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in.
  • "(L.1) (Locality) A computor can change only elements of an observed symbolic configuration.
  • "(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration.
  • "(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state."[45]

The matter remains in active discussion within the academic community.[46][47]

The thesis as a definition


The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing".[48] The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare,[5] where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function.

Success of the thesis


Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems".[49] In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function."[50]

All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1":

It may also be shown that a function which is computable ['reckonable'] in one of the systems Si, or even in a system of transfinite type, is already computable [reckonable] in S1. Thus the concept 'computable' ['reckonable'] is in a certain definite sense 'absolute', while practically all other familiar metamathematical concepts (e.g. provable, definable, etc.) depend quite essentially on the system with respect to which they are defined ...[51]

Informal usage in proofs


Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof.[52] To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive).

Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis:[53]

Example: Each infinite RE set contains an infinite recursive set.

Proof: Let A be infinite RE. We list the elements of A effectively, n0, n1, n2, n3, ...

From this list we extract an increasing sublist: put m0 = n0; after finitely many steps we find an nk such that nk > m0, and put m1 = nk. We repeat this procedure to find m2 > m1, etc. This yields an effective listing of the subset B = {m0, m1, m2, ...} of A, with the property mi < mi+1.

Claim. B is decidable. For, in order to test whether k is in B we must check whether k = mi for some i. Since the sequence of mi's is increasing, we have to produce at most k+1 elements of the list and compare them with k. If none of them is equal to k, then k is not in B. Since this test is effective, B is decidable and, by Church's thesis, recursive.
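
Van Dalen's procedure is easy to make concrete. In the following minimal Python sketch, the generator enum_A is a made-up example standing in for an arbitrary effective listing of an infinite RE set A; the extraction and the membership test are exactly the steps of the proof.

```python
from itertools import count

# Hypothetical effective listing of an infinite set A: here n XOR 1,
# which enumerates all naturals in a non-monotone order 1, 0, 3, 2, 5, 4, ...
def enum_A():
    for n in count():
        yield n ^ 1

def increasing_sublist(enum):
    """Extract the strictly increasing sublist m0 < m1 < m2 < ... from an
    effective listing n0, n1, n2, ... (van Dalen's extraction step)."""
    last = None
    for n in enum:
        if last is None or n > last:
            last = n
            yield n

def in_B(k):
    """Decide membership of k in B: generate the increasing list until a
    term equals k (so k is in B) or exceeds k (so k is not in B).
    At most k+1 terms are ever needed, so the test terminates."""
    for m in increasing_sublist(enum_A()):
        if m == k:
            return True
        if m > k:
            return False

# For this particular enum_A, B works out to the odd numbers 1, 3, 5, ...
assert in_B(5) is True and in_B(4) is False
```

The point of the informal proof is that everything above is effective; by the Church–Turing thesis one concludes, without building a Turing machine, that in_B corresponds to a recursive decision procedure.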

In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive.



Variations


The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable."[54]: 101 

The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine.[55]

A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. This is called the feasibility thesis,[56] also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. It states:[57] "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called the computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis.

A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: "'Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space."[58] The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine.[59]

If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states:[57] "A quantum Turing machine can efficiently simulate any realistic model of computation."

Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "Though [...] Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid".[60] They claim that forms of computation not captured by the thesis are relevant today, terms which they call super-Turing computation.

Philosophical implications


Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind.[61][62][63] B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain.[64] There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings:

  1. The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics.
  2. The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category.
  3. The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation.[65][66]

There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept.

Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory.[67]: 101–123 

Non-computable functions


One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method.
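
Although the Busy Beaver function as a whole is uncomputable, its first few values are known. The following sketch runs a tiny Turing-machine simulator on the classic 2-state, 2-symbol busy-beaver champion, which writes Σ(2) = 4 ones and halts after 6 steps; no program could do the analogous computation for every n.

```python
# Minimal Turing-machine simulator (illustrative sketch).
# delta maps (state, read_symbol) -> (write_symbol, move, next_state).
def run(delta, state="A", halt="H"):
    tape, head, steps = {}, 0, 0          # blank tape is all zeros
    while state != halt:
        sym = tape.get(head, 0)
        write, move, state = delta[(state, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return sum(tape.values()), steps      # (number of 1s printed, steps taken)

# The 2-state, 2-symbol busy-beaver champion: no 2-state machine started on
# a blank tape prints more 1s before halting.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}

ones, steps = run(bb2)
assert (ones, steps) == (4, 6)            # Sigma(2) = 4, S(2) = 6
```

Determining such champions requires separately proving, for each candidate machine, whether it halts; that is why no general procedure can compute the function for all n.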

Several computational models allow for the computation of (Church-Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis.[68][page needed] His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.[citation needed]

References



  1. ^ Robert Soare, "Turing Oracle Machines, Online Computing, and Three Displacements in Computability Theory"
  2. ^ Rabin, Michael O. (June 2012). Turing, Church, Gödel, Computability, Complexity and Randomization: A Personal View. Event occurs at 9:36. Retrieved 2021-12-05.
  3. ^ Correspondence between Max Newman and Church in Alonzo Church papers
  4. ^ Turing, Alan (2004). The essential Turing : seminal writings in computing, logic, philosophy, artificial intelligence, and artificial life, plus the secrets of Enigma (PDF). Oxford: Clarendon Press. p. 44. ISBN 9780198250791. Retrieved 2021-12-06.
  5. ^ a b Soare, Robert I. (September 1996). "Computability and Recursion". Bulletin of Symbolic Logic. 2 (3): 284–321. CiteSeerX doi:10.2307/420992. JSTOR 420992. S2CID 5894394.
  6. ^ Church's paper was presented to the American Mathematical Society on 19 April 1935 and published on 15 April 1936. Turing, who had made substantial progress in writing up his own results, was disappointed to learn of Church's proof upon its publication.[3][4] Turing quickly completed his paper and rushed it to publication; it was received by the Proceedings of the London Mathematical Society on 28 May 1936, read on 12 November 1936, and published in series 2, volume 42 (1936–1937); it appeared in two sections: in Part 3 (pages 230–240), issued on 30 November 1936 and in Part 4 (pages 241–265), issued on 23 December 1936; Turing added corrections in volume 43 (1937), pp. 544–546.[5]: 45 
  7. ^ Church 1936a
  8. ^ Kleene 1936
  9. ^ Turing 1937a
  10. ^ Kleene 1936
  11. ^ Turing 1937b. Proof outline on p. 153: [10]
  12. ^ Rosser 1939 in Davis 1965:225.
  13. ^ "effective". Merriam Webster's New Collegiate Dictionary (9th ed.).
  14. ^ See also "effective". Merriam-Webster's Online Dictionary (11th ed.). Retrieved 2014-07-26, which also gives these definitions for "effective" – the first ["producing a decided, decisive, or desired effect"] as the definition for sense "1a" of the word "effective", and the second ["capable of producing a result"] as part of the "Synonym Discussion of EFFECTIVE" there, (in the introductory part, where it summarizes the similarities between the meanings of the words "effective", "effectual", "efficient", and "efficacious").
  15. ^ a b Turing, A. M. (1938). Systems of Logic Based on Ordinals (PDF) (PhD). Princeton University. p. 8. Archived from the original (PDF) on 2012-10-23. Retrieved 2012-06-23.
  16. ^ Gandy (1980:123) states it this way: What is effectively calculable is computable. He calls this "Church's Thesis".
  17. ^ David Hilbert and Wilhelm Ackermann: Grundzüge der theoretischen Logik, Berlin, Germany, Springer, 1st ed. 1928. (6th ed. 1972, ISBN 3-540-05843-5) English Translation: David Hilbert and Wilhelm Ackermann: Principles of Mathematical Logic, AMS Chelsea Publishing, Providence, Rhode Island, USA, 1950.
  18. ^ Davis's commentary before Church 1936 An Unsolvable Problem of Elementary Number Theory in Davis 1965:88. Church uses the words "effective calculability" on page 100ff.
  19. ^ In his review of Church's Thesis after 70 Years edited by Adam Olszewski et al. 2006, Peter Smith's criticism of a paper by Muraswski and Wolenski suggests 4 "lines" re the status of the Church–Turing Thesis: (1) empirical hypothesis (2) axiom or theorem, (3) definition, (4) explication. But Smith opines that (4) is indistinguishable from (3), cf. Smith (2007-07-11) Church's Thesis after 70 Years at http://www.logicmatters.net/resources/pdfs/CTT.pdf
  20. ^ cf. footnote 3 in Church 1936a An Unsolvable Problem of Elementary Number Theory in Davis 1965:89.
  21. ^ Dawson 1997:99.
  22. ^ a b Sieg 1997:160.
  23. ^ Sieg 1997:160 quoting from the 1935 letter written by Church to Kleene, cf. Footnote 3 in Gödel 1934 in Davis 1965:44.
  24. ^ cf. Church 1936 in Davis 1965:105ff..
  25. ^ Davis's commentary before Gödel 1934 in Davis 1965:40.
  26. ^ For a detailed discussion of Gödel's adoption of Turing's machines as models of computation, see Shagrir, Oron (2006-06-15). "Gödel on Turing on Computability" (PDF). Church's Thesis After 70 Years. De Gruyter. doi:10.1515/9783110325461.393. ISBN 978-3-11-032494-5. Archived from the original (PDF) on 2015-12-17. Retrieved 2016-02-08.
  27. ^ a b Turing 1937a.
  28. ^ cf. Editor's footnote to Post 1936 Finite Combinatory Process. Formulation I. at Davis 1965:289.
  29. ^ Post 1936 in Davis 1965:291, footnote 8.
  30. ^ Post 1936 in Davis 1952:291.
  31. ^ Sieg 1997:171 and 176–177.
  32. ^ Turing 1936–1937 in Davis 1965:263ff..
  33. ^ Church 1937.
  34. ^ Turing 1939 in Davis:160.
  35. ^ cf. Church 1934 in Davis 1965:100, also Turing 1939 in Davis 1965:160.
  36. ^ italics added, Rosser 1939 in Davis 1965:226.
  37. ^ Kleene 1943, p. 60 in Davis 1965:274. Footnotes omitted.
  38. ^ Kleene 1952:300.
  39. ^ a b Kleene 1952:376.
  40. ^ Kleene 1952:382, 536
  41. ^ Gandy 1980:123ff.
  42. ^ Gandy 1980:135
  43. ^ Gandy 1980:126
  44. ^ Sieg 1998–1999 in Sieg, Sommer & Talcott 2002:390ff.; also Sieg 1997:154ff.
  45. ^ In a footnote Sieg breaks Post's 1936 (B) into (B.1) and (B.2) and (L) into (L.1) and (L.2) and describes (D) differently. With respect to his proposed Gandy machine he later adds LC.1, LC.2, GA.1 and GA.2. These are complicated; see Sieg 1998–1999 in Sieg, Sommer & Talcott 2002:390ff..
  46. ^ A collection of papers can be found in Olszewski, Woleński & Janusz (2006). Also a review of this collection: Smith, Peter (2007-07-11). "Church's Thesis after 70 Years" (PDF).
  47. ^ See also Hodges, Andrew (2005). "Did Church and Turing Have a Thesis about Machines?" (PDF). Archived from the original (PDF) on 2016-03-04. Retrieved 2014-07-27.
  48. ^ Gödel, Kurt (1995) [193?]. "Undecidable Diophantine Propositions". In Feferman, Solomon (ed.). Collected Works. Vol. 3. New York: Oxford University Press. p. 168. ISBN 978-0-19-507255-6. OCLC 928791907.
  49. ^ Kleene 1952:320
  50. ^ Gurevich 1988:2
  51. ^ Translation of Gödel (1936) by Davis in The Undecidable p. 83, differing in the use of the word 'reckonable' in the translation in Kleene (1952) p. 321
  52. ^ Horsten in Olszewski, Woleński & Janusz 2006:256.
  53. ^ Gabbay 2001:284
  54. ^ Piccinini, Gualtiero (January 2007). "Computationalism, the Church–Turing Thesis, and the Church–Turing Fallacy" (PDF). Synthese. 154 (1): 97–120. CiteSeerX doi:10.1007/s11229-005-0194-z. S2CID 494161. Archived (PDF) from the original on 2008-04-24.
  55. ^ Arora, Sanjeev; Barak, Boaz (2009). Computational Complexity: A Modern Approach. Cambridge University Press. ISBN 978-0-521-42426-4. Sections 1.4, "Machines as strings and the universal Turing machine" and 1.7, "Proof of theorem 1.9".
  56. ^ "Official Problem Description" (PDF). Archived from the original (PDF) on 2005-11-24.
  57. ^ a b Kaye, Phillip; Laflamme, Raymond; Mosca, Michele (2007). An introduction to quantum computing. Oxford University Press. pp. 5–6. ISBN 978-0-19-857049-3.
  58. ^ van Emde Boas, Peter (1990). "Machine Models and Simulations". Handbook of Theoretical Computer Science A. Elsevier. p. 5.
  59. ^ Slot, C.; van Emde Boas, P. (December 1984). On tape versus core: an application of space efficient perfect hash functions to the invariance of space. STOC.
  60. ^ Eberbach & Wegner 2003, p. 287.
  61. ^ Abramson, Darren (2011). "Philosophy of mind is (in part) philosophy of computer science". Minds and Machines. 21 (2): 203–219. doi:10.1007/s11023-011-9236-0. S2CID 32116031.
  62. ^ Copeland, B. Jack (2017-11-10). "The Church-Turing Thesis". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  63. ^ For a good place to encounter original papers see Chalmers, David J., ed. (2002). Philosophy of Mind: Classical and Contemporary Readings. New York: Oxford University Press. ISBN 978-0-19-514581-6. OCLC 610918145.
  64. ^ Copeland, B. Jack (2004). "Computation". In Floridi, Luciano (ed.). The Blackwell guide to the philosophy of computing and information. Wiley-Blackwell. p. 15. ISBN 978-0-631-22919-3.
  65. ^ cf. Penrose, Roger (1990). "Algorithms and Turing machines". The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press. pp. 47–49. ISBN 978-0-19-851973-7. OCLC 456785846.
  66. ^ Also the description of "the non-algorithmic nature of mathematical insight", Penrose, Roger (1990). "Where lies the physics of mind?". The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press. pp. 416–418. ISBN 978-0-19-851973-7. OCLC 456785846.
  67. ^ Piergiorgio Odifreddi (1989). Classical Recursion Theory. Studies in Logic and the Foundations of Mathematics. Vol. 125. Amsterdam, Netherlands: North Holland.
  68. ^ Burgin, Mark (2005). Super-Recursive Algorithms. Monographs in Computer Science. New York: Springer. ISBN 978-0-387-95569-8. OCLC 990755791.