From Wikipedia, the free encyclopedia

Hypercomputation or super-Turing computation refers to models of computation that go beyond Turing computability. This includes various hypothetical methods for the computation of non-Turing-computable functions. The term "super-Turing computation" first appeared in talks, technical reports, and the PhD thesis of Hava Siegelmann in the early 1990s,[1] referring to a particular theory (described below), and the topic became a subfield of computation theory after her 1995 Science paper.[2] The term "hypercomputation" was introduced in 1999 by Jack Copeland and Diane Proudfoot.[3]

The terms are not quite synonymous: Siegelmann's "super-Turing computation" usually implies that the proposed model is likely to exist in nature (via biology and/or physics) and may thus be physically realizable, while "hypercomputation" may refer to a purely mathematical or philosophical idea.


History[edit]

A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals.[4] This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are mathematical abstractions, and are not physically realizable.[5]

Hypercomputation and the Church–Turing thesis[edit]

The Church–Turing thesis states that any algorithmically computable function can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot and which are, hence, not computable in the Church–Turing sense.

An example of a problem a Turing machine cannot solve is the halting problem. A Turing machine cannot decide if an arbitrary program halts or runs forever. Some proposed hypercomputers could simulate the program for an infinite number of steps and tell the user whether the program halted. However, Siegelmann showed in her 1993 PhD thesis,[1][page needed] and later in her 1998 book,[6][page needed] that the halting problem of Turing machines can be solved with analog recurrent neural networks.
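To see concretely why a Turing machine falls short here, consider a minimal sketch (all names are illustrative, and toy "programs" are modeled as zero-argument Python generator functions, an assumption of this sketch): halting can be confirmed by step-by-step simulation in finite time, but no finite simulation budget can ever certify non-halting.

```python
# Illustrative sketch only (not Siegelmann's construction): a classical
# machine can *semi-decide* halting by simulating a program step by step.

def halts_within(program, steps):
    """Return True iff `program` finishes within `steps` simulated steps."""
    it = program()
    for _ in range(steps):
        try:
            next(it)
        except StopIteration:
            return True   # observed to halt
    return False          # budget exhausted; halting remains unknown

def short_program():      # halts after a few steps
    yield
    yield

def looping_program():    # never halts
    while True:
        yield

print(halts_within(short_program, 10))    # True: halting confirmed
print(halts_within(looping_program, 10))  # False: says nothing about
                                          # behavior beyond 10 steps
```

Note that a `False` result is not an answer to the halting problem, only a report that no halt was seen yet; this asymmetry is exactly what a hypercomputer would have to overcome.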

Super-Turing Computation and the Super-Turing thesis[edit]

In the early 1990s, Hava Siegelmann and Eduardo Sontag proved that their new computational model, the artificial recurrent neural network (ARNN), could perform beyond the Turing limit (hypercomputation).[7][8]

Siegelmann and colleagues described a hierarchy of well-established computational classes that starts at the Turing machine and ascends to full super-Turing power.[9] Siegelmann described in various publications several ways to reach super-Turing computation. While she first showed the existence of super-Turing computation via analog neural networks with fixed (non-learning), unbounded-precision weights, she went on to prove super-Turing power in a variety of arguably realizable machines, including small-precision-weight neural networks that gain their power from learning in the analog domain, from stochasticity,[6] from evolving over time,[10] or from using data from the environment, as well as systems that behave as Turing machines at every computable step and change between steps by one of these methods. See also related work on verifying properties of neural networks.

Siegelmann's Science paper demonstrated a first step toward realizing this form of computation,[2] and physicists and engineers have since worked on building such systems. Siegelmann's more recent research proves that the state space of super-Turing computation has cardinality 2^{\aleph_0}, while the space of Turing machines is only \aleph_0, giving a new understanding of the Turing test as separating spaces of different size.[11]

The theory led to a better understanding of neural networks (the foundation of deep learning) and supported innovative applications in such sensitive areas as radar registration and the control of nuclear plants.[12][13][14][15] Siegelmann and colleagues went further to create a complexity theory for continuous-time and physical systems.[16][17] As a particular example, Siegelmann's group analyzed linear programming and other computer-science problems, showing that analog computers can solve these faster than discrete-time computers.[18][19]

A recent publication argues that Turing himself was searching for super-Turing computation based on brain principles, and that the directions he suggested for finding it closely match Siegelmann's theories.[20] Siegelmann and Sontag proposed a new computational hypothesis: that analog, learning, and evolving systems are bounded by super-Turing computational power.

Other hypercomputer proposals[edit]

  • A Turing machine that can complete infinitely many steps. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1+½+¼+... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes. According to Shagrir, Zeno machines introduce physical paradoxes, and their state is logically undefined outside of the half-open interval [0, 2), thus undefined at exactly 2 minutes after the beginning of the computation.[21]
  • Turing's original oracle machines, defined by Turing in 1939.
  • In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals"[22] and "trial-and-error predicates",[23] respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit"; whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can only identify it as correct if it is recursive; otherwise, the correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)"[23] L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem"[24] studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
  • A real computer (a sort of idealized analog computer) can perform hypercomputation[25] if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would at minimum require the ability to measure a real-valued physical value to arbitrary precision despite thermal noise and quantum effects.
  • A proposed technique known as fair nondeterminism or unbounded nondeterminism may allow the computation of noncomputable functions.[26] There is dispute in the literature over whether this technique is coherent, and whether it actually allows noncomputable functions to be "computed".
  • It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) would make hypercomputation possible by itself. However, this is not so, since a CTC does not (by itself) provide the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation.[27] Access to a CTC may allow the rapid solution of PSPACE-complete problems, a class of problems which, while Turing-decidable, is generally considered computationally intractable.[28][29]
  • According to a 1992 paper,[30] a computer operating in a Malament-Hogarth spacetime or in orbit around a rotating black hole[31] could theoretically perform non-Turing computations.[32][33]
  • The infinite time Turing machine is a generalization of the Zeno machine, that can perform infinitely long computations whose steps are enumerated by potentially transfinite ordinal numbers. It models an otherwise-ordinary Turing machine for which non-halting computations are completed by entering a special state reserved for reaching a limit ordinal and to which the results of the preceding infinite computation are available.[34]
  • Jan van Leeuwen and Jiří Wiedermann wrote a paper[35] suggesting that the Internet should be modeled as a nonuniform computing system equipped with an advice function representing the ability of computers to be upgraded.
  • A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges; that is, it does not change any more after some finite initial time interval. Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program; otherwise the halting problem could be solved. Schmidhuber ([36][37]) uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can solve the halting problem by evaluating a Specker sequence.
  • A quantum mechanical system which somehow uses an infinite superposition of states to compute a non-computable function.[38] This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).[39]
  • In 1970, E.S. Santos defined a class of fuzzy logic-based "fuzzy algorithms" and "fuzzy Turing machines".[40] Subsequently, L. Biacino and G. Gerla showed that such a definition would allow the computation of nonrecursive languages; they suggested an alternative set of definitions without this difficulty.[41] Jiří Wiedermann analyzed the capabilities of Santos' original proposal in 2004.[42]
  • Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic.[43]
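The Zeno machine's accelerating schedule described in the first item above can be checked with exact rational arithmetic. The following is a small illustrative sketch (the function names are ours, not from any library): step k takes 2^(1-k) minutes, so the first n steps take 2 − 2^(1-n) minutes in total, approaching but never reaching the 2-minute mark.

```python
from fractions import Fraction

def step_duration(k):
    """Duration of step k in minutes: 1, 1/2, 1/4, ..."""
    return Fraction(1, 2 ** (k - 1))

def elapsed_after(n):
    """Total time after the first n steps (a partial geometric sum)."""
    return sum(step_duration(k) for k in range(1, n + 1))

for n in (1, 2, 3, 10):
    print(n, elapsed_after(n))   # 1, 3/2, 7/4, 1023/512

# Every finite prefix of the computation finishes strictly before 2 minutes:
assert all(elapsed_after(n) < 2 for n in range(1, 100))
```

This makes Shagrir's point tangible: every finite stage of the computation lies inside [0, 2), and only the supertask of completing all infinitely many steps reaches the problematic 2-minute boundary.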

Analysis of capabilities[edit]

Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing \Sigma^0_1 or \Pi^0_1. Limiting-recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be \Delta^0_2. Gold further showed that limiting partial recursion would allow the computation of precisely the \Sigma^0_2 predicates.
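The limiting-recursion idea can be made concrete with a small sketch (the names and the example predicate are ours, purely illustrative): a \Sigma^0_1 question such as "does the computable function f ever take the value 0?" is answered by a sequence of guesses that stabilizes on the truth, changing its mind at most once, yet no finite stage certifies that stabilization has occurred.

```python
# Hedged illustration of Gold's limiting recursion / Putnam's
# trial-and-error predicates.

def guesses(f, stages):
    """At stage n, guess True iff a zero of f was found among f(0..n-1)."""
    found = False
    for n in range(stages):
        if not found and f(n) == 0:
            found = True
        yield found

# f1 hits zero at n = 5; f2 never does.
f1 = lambda n: 5 - n if n <= 5 else 1
f2 = lambda n: n + 1

print(list(guesses(f1, 8)))  # five False guesses, then True forever:
                             # stabilized after one mind change
print(list(guesses(f2, 8)))  # all False: correct in the limit, but no
                             # finite stage can certify that it stays False
```

Allowing finitely many mind changes instead of one is what lifts the computable power from \Sigma^0_1 to the full \Delta^0_2 degree cited for this model.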

Model | Computable predicates | Notes | Refs
supertasking | tt(\Sigma^0_1, \Pi^0_1) | dependent on outside observer | [44]
limiting/trial-and-error | \Delta^0_2 | | [22]
iterated limiting (k times) | \Delta^0_{k+1} | | [24]
Blum-Shub-Smale machine | | incomparable with traditional computable real functions | [45]
Malament-Hogarth spacetime | HYP | dependent on spacetime structure | [46]
analog recurrent neural network | \Delta^0_1[f] | f is an advice function giving connection weights; size is bounded by runtime | [2][47]
infinite time Turing machine | \ge T(\Sigma^1_1) | | [48]
classical fuzzy Turing machine | \Sigma^0_1 \cup \Pi^0_1 | for any computable t-norm | [42]
increasing function oracle | \Delta^1_1 | for the one-sequence model; \Pi^1_1 are r.e. | [43]

Taxonomy of "super-recursive" computation methodologies[edit]

Mark Burgin has collected a list of what he calls "super-recursive algorithms" (from Burgin 2005: 132):

  • limiting recursive functions and limiting partial recursive functions (E. M. Gold[22])
  • trial and error predicates (Hilary Putnam[23])
  • inductive inference machines (Carl Herbert Smith)
  • inductive Turing machines (one of Burgin's own models)
  • limit Turing machines (another of Burgin's models)
  • trial-and-error machines (Ja. Hintikka and A. Mutanen[49])
  • general Turing machines (J. Schmidhuber[37])
  • Internet machines (van Leeuwen, J. and Wiedermann, J.[35])
  • evolutionary computers, which use DNA to produce the value of a function (Darko Roglic[50])
  • fuzzy computation (Jiří Wiedermann[42])
  • evolutionary Turing machines (Eugene Eberbach[51])

In the same book, he also presents a list of "algorithmic schemes":

  • Turing machines with arbitrary oracles (Alan Turing)
  • transrecursive operators (Borodyanskii and Burgin[52])
  • machines that compute with real numbers (L. Blum, F. Cucker, M. Shub, and S. Smale)
  • static neural networks based on real weights, or equivalently neural networks with finite-precision weights but with asynchronous update, a stochastic coin, or evolving or learning weights and/or structure (Hava Siegelmann)


Criticism[edit]

Martin Davis, in his writings on hypercomputation,[53][54] refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claims that this is a new field founded in the 1990s, a view that rests on the history of computability theory (degrees of unsolvability, computability over functions, real numbers, and ordinals), as also mentioned above. In his argument he remarks that all of hypercomputation is trivial in the sense that "if non computable inputs are permitted then non computable outputs are attainable." Proponents respond that this criticism applies to the earliest mathematical and philosophical suggestions and does not address many of the newer proposals.

Andrew Hodges wrote a critical commentary[55] on Copeland and Proudfoot's article.[3]



References[edit]

  1. ^ a b Hava Siegelmann (Oct 1993). Foundations of Recurrent Neural Networks (Ph.D. thesis). Rutgers, The State University of New Jersey. 
  2. ^ a b c H.T. Siegelmann (Apr 1995). "Computation Beyond the Turing Limit" (PDF). Science 268 (5210): 545–548. doi:10.1126/science.268.5210.545. PMID 17756722. (The work received wide media attention and has been described as founding the field of hypercomputation.)
  3. ^ a b Copeland and Proudfoot, Alan Turing's forgotten ideas in computer science. Scientific American, April 1999
  4. ^ Alan Turing, 1939, Systems of Logic Based on Ordinals Proceedings London Mathematical Society Volumes 2–45, Issue 1, pp. 161–228.[1]
  5. ^ "Let us suppose that we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine" (Undecidable p. 167, a reprint of Turing's paper Systems of Logic Based On Ordinals)
  6. ^ a b H.T. Siegelmann, "Neural Networks and Analog Computation: Beyond the Turing Limit", Birkhauser, Boston, December 1998
  7. ^ H.T. Siegelmann and E.D. Sontag (1995). "Computational Power of Neural Networks" (PDF). Journal of Computer System Sciences 50 (1): 132–150. (Described as a most fundamental theorem about neural networks in Simon Haykin's book Neural Networks.)
  8. ^ H.T. Siegelmann and E.D. Sontag (1994). "Analog Computation via Neural Networks" (PDF). Theoretical Computer Science 131: 331–360. doi:10.1016/0304-3975(94)90178-3. (Described in Simon Haykin's book Neural Networks as the fundamental theorem differentiating neural networks from classical computers; cited in the field and in the media.)
  9. ^ J.L. Balcázar and R. Gavaldà and H.T. Siegelmann (Jul 1997). "Computational Power of Neural Networks: A Characterization in Terms of Kolmogorov Complexity" (PDF). IEEE Transactions on Information Theory 43 (4): 1175–1183. 
  10. ^ J. Cabessa and H. T. Siegelmann (2014). "The Super-Turing Computational Power of Plastic Recurrent Neural Networks" (PDF). International Journal of Neural Systems 24 (8). 
  11. ^ J. Cabessa and H.T. Siegelmann (Apr 2012). "The Computational Power of Interactive Recurrent Neural Networks" (PDF). Neural Computation 24 (4): 996–1019. 
  12. ^ J.P. Neto, H.T. Siegelmann, and J.F. Costa, “Implementation of Programming Languages with Neural Nets,” International Journal of Computing Anticipatory Systems 1, 1997: 201-208
  13. ^ J. Kilian and H.T. Siegelmann, “The Dynamic Universality of Sigmoidal Neural Networks,” Information and Computation 128(1), 1996: 45-56.
  14. ^ H.T. Siegelmann, B.G. Horne, and C.L.Giles, “Computational Capabilities of Recurrent NARX Neural Networks,” IEEE Transaction on Systems, Man and Cybernetics – part B: Cybernetics 27(2), 1997: 208-215.
  15. ^ H.T. Siegelmann, E. Nissan, and A. Galperin, “A Novel Neural/Symbolic Hybrid Approach to Heuristically Optimized Fuel Allocation and Automated Revision of Heuristics in Nuclear Engineering,” Advances in Engineering Software 28(9), 1997: 581-592.
  16. ^ H.T. Siegelmann, A. Ben-Hur and S. Fishman, “Computational Complexity for Continuous Time Dynamics,” Physical Review Letters, 83(7), 1999: 1463-1466.
  17. ^ H.T. Siegelmann and S. Fishman, “Computation by Dynamical Systems,” Physica D 120, 1998 (1-2): 214-235.
  18. ^ A. Ben-Hur, J. Feinberg, S. Fishman and H. T. Siegelmann, “Random matrix theory for the analysis of the performance of an analog computer: a scaling theory,” Physics Letters A. 323(3-4), March 2004: 204-209.
  19. ^ A. Ben-Hur, J. Feinberg, S. Fishman and H. T. Siegelmann, “Probabilistic analysis of a differential equation for linear programming,” Journal of Complexity 19(4), August 2003: 474-510.
  20. ^ H. T. Siegelmann, “Turing on Super-Turing and Adaptivity”. Progress in Biophysics & Molecular Biology, 2013, 113(1):117-26. doi: 10.1016/j.pbiomolbio.2013.03.013.
  21. ^ These models have been independently developed by many different authors, including Hermann Weyl (1927). Philosophie der Mathematik und Naturwissenschaft. ; the model is discussed in Shagrir, O. (June 2004). "Super-tasks, accelerating Turing machines and uncomputability" (PDF). Theor. Comput. Sci. 317: 105–114. doi:10.1016/j.tcs.2003.12.007.  and in Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040. 
  22. ^ a b c E. M. Gold (1965). "Limiting Recursion". Journal of Symbolic Logic 30 (1): 28–48. doi:10.2307/2270580. JSTOR 2270580. , E. Mark Gold (1967). "Language identification in the limit". Information and Control 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5. 
  23. ^ a b c Hilary Putnam (1965). "Trial and Error Predicates and the Solution to a Problem of Mostowksi". Journal of Symbolic Logic 30 (1): 49–57. doi:10.2307/2270581. JSTOR 2270581. 
  24. ^ a b L. K. Schubert (July 1974). "Iterated Limiting Recursion and the Program Minimization Problem". Journal of the ACM 21 (3): 436–445. doi:10.1145/321832.321841. 
  25. ^ Arnold Schönhage, "On the power of random access machines", in Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 520-529, 1979. Source of citation: Scott Aaronson, "NP-complete Problems and Physical Reality"[2] p. 12
  26. ^ Edith Spaan, Leen Torenvliet and Peter van Emde Boas (1989). "Nondeterminism, Fairness and a Fundamental Analogy". EATCS bulletin 37: 186–193. 
  27. ^ Hajnal Andréka, István Németi and Gergely Székely, Closed Timelike Curves in Relativistic Computation Parallel Process. Lett. 22, 1240010 (2012).[3]
  28. ^ Todd A. Brun, Computers with closed timelike curves can solve hard problems, Found.Phys.Lett. 16 (2003) 245-253.[4]
  29. ^ S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent [5]
  30. ^ Hogarth, M., 1992, 'Does General Relativity Allow an Observer to View an Eternity in a Finite Time?', Foundations of Physics Letters, 5, 173–181.
  31. ^ István Neméti; Hajnal Andréka (2006). "Can General Relativistic Computers Break the Turing Barrier?". Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30-July 5, 2006. Proceedings. Lecture Notes in Computer Science 3988. Springer. doi:10.1007/11780342. 
  32. ^ Etesi, G., and Nemeti, I., 2002 'Non-Turing computations via Malament-Hogarth space-times', Int.J.Theor.Phys. 41 (2002) 341–370, Non-Turing Computations via Malament-Hogarth Space-Times:.
  33. ^ Earman, J. and Norton, J., 1993, 'Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes', Philosophy of Science, 5, 22–42.
  34. ^ Joel David Hamkins and Andy Lewis, Infinite time Turing machines, Journal of Symbolic Logic, 65(2):567-604, 2000.[6]
  35. ^ a b Jan van Leeuwen; Jiří Wiedermann (September 2000). "On Algorithms and Interaction". MFCS '00: Proceedings of the 25th International Symposium on Mathematical Foundations of Computer Science. Springer-Verlag. 
  36. ^ Jürgen Schmidhuber (2000). "Algorithmic Theories of Everything". arXiv:quant-ph/0011122. Sections appeared in: "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit", International Journal of Foundations of Computer Science 13 (4): 587–612; and Section 6 of "The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions", in J. Kivinen and R. H. Sloan (eds.), Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT), Sydney, Australia, Lecture Notes in Artificial Intelligence, pp. 216–228. Springer. 
  37. ^ a b J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit". International Journal of Foundations of Computer Science 13 (4): 587–612. doi:10.1142/S0129054102001291. 
  38. ^ There have been some claims to this effect; see Tien Kieu (2003). "Quantum Algorithm for the Hilbert's Tenth Problem". Int. J. Theor. Phys. 42 (7): 1461–1478. arXiv:quant-ph/0110136. doi:10.1023/A:1025780028846.  or M. Ziegler (2005). "Computational Power of Infinite Quantum Parallelism". International Journal of Theoretical Physics 44 (11): 2059–2071. doi:10.1007/s10773-005-8984-0.  and the ensuing literature. For a retort see Warren D. Smith. "Three counterexamples refuting Kieu's plan for "quantum adiabatic hypercomputation"; and some uncomputable quantum mechanical tasks". Applied Mathematics and Computation 178 (1): 184–193. doi:10.1016/j.amc.2005.09.078.
  39. ^ Bernstein and Vazirani, Quantum complexity theory, SIAM Journal on Computing, 26(5):1411-1473, 1997. [7]
  40. ^ Santos, Eugene S. (1970). "Fuzzy Algorithms". Information and Control 17 (4): 326–339. doi:10.1016/S0019-9958(70)80032-8. 
  41. ^ Biacino, L.; Gerla, G. (2002). "Fuzzy logic, continuity and effectiveness". Archive for Mathematical Logic 41 (7): 643–667. doi:10.1007/s001530100128. ISSN 0933-5846. 
  42. ^ a b c Wiedermann, Jiří (2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theor. Comput. Sci. 317 (1–3): 61–69. doi:10.1016/j.tcs.2003.12.004. 
  43. ^ a b Dmytro Taranovsky (July 17, 2005). "Finitism and Hypercomputation". Retrieved Apr 26, 2011. 
  44. ^ Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040. 
  45. ^ Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale. Complexity and Real Computation. ISBN 0-387-98281-7. 
  46. ^ P.D. Welch (2006). "The extent of computation in Malament-Hogarth spacetimes". British J. for Philosophy of Science 59: 659–674. arXiv:gr-qc/0609035. 
  47. ^ Hava Siegelmann; Eduardo Sontag (1994). "Analog Computation via Neural Networks". Theoretical Computer Science 131 (2): 331–360. doi:10.1016/0304-3975(94)90178-3. 
  48. ^ Joel David Hamkins; Andy Lewis (2000). "Infinite Time Turing machines". Journal of Symbolic Logic 65 (2): 567–604. doi:10.2307/2586556. 
  49. ^ Hintikka, Ja; Mutanen, A. (1998). "An Alternative Concept of Computability". Language, Truth, and Logic in Mathematics. Dordrecht. pp. 174–188. 
  50. ^ Darko Roglic (24 Jul 2007). "The universal evolutionary computer based on super-recursive algorithms of evolvability". arXiv:0708.2686 [cs.NE]. 
  51. ^ Eugene Eberbach (2002). "On expressiveness of evolutionary computation: is EC algorithmic?". Computational Intelligence, WCCI 1: 564–569. doi:10.1109/CEC.2002.1006988. 
  52. ^ Borodyanskii, Yu M; Burgin, M. S. (1994). "Operations and compositions in transrecursive operators". Cybernetics and Systems Analysis 30 (4): 473–478. doi:10.1007/BF02366556. 
  53. ^ Davis, Martin (2006). "Why there is no such discipline as hypercomputation". Applied Mathematics and Computation 178 (1): 4–7. doi:10.1016/j.amc.2005.09.066. 
  54. ^ Davis, Martin (2004). "The Myth of Hypercomputation". Alan Turing: Life and Legacy of a Great Thinker. Springer. 
  55. ^ Andrew Hodges. "The Professors and the Brainstorms". The Alan Turing Home Page. Retrieved 23 September 2011. 
