Talk:Recursion theory

From Wikipedia, the free encyclopedia

Rewrite of 2006-6-27[edit]

I significantly expanded the previous article. It still needs a lot of work.

  • The relationship to proof theory needs to be made more explicit
  • The History and Scope section is not complete
  • The references should be expanded

Because this article doesn't need to contain any technical details, there is no reason why it should not be completely accessible to the general educated audience. CMummert 14:09, 27 June 2006 (UTC)

An opportunity?: Can we use small computing machines to demonstrate partial recursion, general recursion, etc? But first: What are all these different types of recursion?[edit]

wvbailey here: My questions:

(1) Can we concoct a way to describe "recursion" in terms of "simple machine" models such as Post-Turing, register or random-access machine models?

(2) partial versus general recursion versus primitive recursion (or whatever) -- a simple explanation? see below re Davis (1967)

Here's why I ask:

I ran into an interesting quote + derivations in a paper: Calvin C. Elgot and Abraham Robinson (1964), Random-Access Stored-Program machines, An Approach to Programming Languages, JACM Vol. 11, No. 4 (October, 1964) pp. 365-399.

This (for me) is a difficult paper. But I have a suspicion that, for some of us non-mathematicians, this recursion stuff wouldn't be so abstract if we could translate it into the action of little machines, or little models of humans-as-machines doing 'recursion' on ruled paper as if we were machines.

In fact Elgot-Robinson do this, but as I say their paper is not so easy. (In the following, "x" is a "memory register" (i.e. a place in space) in their Random Access Stored Program "RASP" computer-model, and ^2 references Davis (1958) Computability and Unsolvability). They state that:

 :The class of partial recursive functions is the smallest class of functions closed under composition and minimalization [^2, p. 41] containing
  1. "the constant function c0(x), c0(x) = 0
  2. "the successor function S, S(x) = x+1
  3. "the projection functions Uin, Uin(x0, ..., xn-1) = xi, 0 ≤ i < n,
  4. "addition,
  5. "proper subtraction (x ∸ y = 0 if x ≤ y, x ∸ y = x - y if x > y)
  6. "multiplication" (pp. 379ff)

Question: Can "composition" and "minimalization" be explained in layman's terms? Examples?
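One possible layman's answer, as a sketch of my own (not from Elgot-Robinson): "composition" just plugs functions into one another, and "minimalization" (the μ-operator) searches y = 0, 1, 2, ... for the least value making a given function zero -- and may search forever, which is why the functions are only *partial*. In Python, with an integer-square-root example of my own choosing:

```python
def compose(f, *gs):
    """h(x...) = f(g1(x...), ..., gk(x...)) -- ordinary plugging-in."""
    return lambda *xs: f(*(g(*xs) for g in gs))

def minimalize(f):
    """mu y [f(y, x...) = 0]: try y = 0, 1, 2, ... and return the first
    y that makes f zero.  If no such y exists this loops forever --
    exactly why the resulting functions are only *partial* recursive."""
    def h(*xs):
        y = 0
        while f(y, *xs) != 0:
            y += 1
        return y
    return h

succ = lambda x: x + 1
double_succ = compose(succ, succ)   # x + 2, by composition alone

# integer square root by minimalization: least y with (y+1)^2 > x
isqrt = minimalize(lambda y, x: 0 if (y + 1) ** 2 > x else 1)

print(double_succ(3))   # 5
print(isqrt(10))        # 3
```

The point of the sketch is only the shape of the two operators; a real development would of course build everything from the six base functions quoted above.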

They start with their greatly-reduced model of a "stored program machine" (i.e. program is together with the data in the common "register-space" like a universal Turing machine). The model includes:

  1. successor S(x): Add one to the contents of register x, then continue with the next instruction in sequence
  2. copy D(x, y), i.e. copy the contents of register x to register y & leave the contents of x unchanged, then continue with the next instruction in sequence
  3. conditional transfer on equality C(x, y, z), IF the contents of register x equals the contents of register y THEN fetch the next instruction from register z, ELSE continue with the next instruction in sequence.

And from these they show how (in this order) to:

  1. create the projection function
  2. create the equality relation
  3. do addition of natural numbers
  4. do multiplication of natural numbers
  5. create the binary relation ≤
  6. do proper subtraction

I find this fascinating: that everything that is computable in all of mathematics can [hypothetically] be done by the above 6 arithmetic operations "with composition and minimalization" (whatever that is), and these can in turn be done by 3 machine instructions operating in/on a "register space" (a sequence of numbered boxes into which we can write numbers and then change them per a list of instructions). [Is this a true statement? Is there a simpler set than these 6? But what then is 'full general recursion' -- is it more "powerful" than partial recursion? etc]

(2) Minsky: Then there's an interesting tidbit in Minsky (1967). Instead of using RASP models he uses "register models". (He calls them "program machines" but I find his choice of name unfortunate. With the register models it is very difficult to include the program in the registers, for instance.) He claims almost the same three simple instructions (he uses inequality rather than equality and concocts some other similar trios). Minsky then "demonstrates" that a set of the three instructions { "set a register's contents to 0" [0], "successor function" ['], "IF register's contents = 0 then jump else decrement" [-] } "is equivalent to" (p. 208) the "general-recursive" functions. To do this he first creates "copy" and "jump unless equal".
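For what it's worth, here is how "copy" can be derived from the three primitives; this is my own sketch in Python (each while-loop plays the role of Minsky's [-] jump-on-zero-else-decrement instruction), not his actual construction:

```python
def copy(regs, x, y, tmp):
    """Copy reg x into reg y, leaving x intact, using only:
    zero a register [0], successor ['], and decrement-or-jump [-]."""
    regs[y] = 0                # [0] applied to y
    regs[tmp] = 0              # [0] applied to tmp
    while regs[x] != 0:        # [-] loop: drain x into y AND tmp
        regs[x] -= 1
        regs[y] += 1           # [']
        regs[tmp] += 1         # [']
    while regs[tmp] != 0:      # [-] loop: restore x from tmp
        regs[tmp] -= 1
        regs[x] += 1           # [']

regs = {"x": 5, "y": 0, "tmp": 0}
copy(regs, "x", "y", "tmp")
print(regs["x"], regs["y"])    # 5 5
```

The need for the scratch register and the restore loop is the telling detail: the primitive "move" is destructive, so non-destructive copy has to be built.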

There's more: he also creates a set of instructions and shows that they are equivalent to the primitive recursive functions. This set includes a "repeat instruction" [RPT]. But he observes that one kind of [RPT] computes the "partial recursive functions" while the other creates the "full recursive functions". This has to do with whether or not the "repeat" command [RPT] includes or does not include itself 'in its range' <= huh? Does this have to do with self-modifying instructions?

So I ask: in layman's terms, what is the difference between all these kinds of recursion?

  1. Partial recursion
  2. Full general recursion
  3. Primitive recursion

And what is 'the minimization operator' that Elgot-Robinson and Minsky mention? Is this necessary for "full general recursion"?

Is there a way to use these machine-models to form "demonstrations" that are more 'concrete' -- less abstract, more constructive -- than pure "recursion theory" (which I find utterly opaque -- just masses of subscripts). For example:

I always think of "partial recursion" and "lambda calculus" as something done by hand on paper by mathematicians, not done by machines. Is this a correct statement?
Is there somewhere an example of multiplication done by recursion theory? Does this question even make sense?
All the machine models divide the machine into "instructions" contained in a finite state machine (FSM) and "the machine memory". Sometimes, as in a Universal Turing machine and the RASP model, the data-memory also contains the instructions (but in the RAM and the register-machine models the instructions and data-memory are separate). In the UTM and RASP cases there's still a core FSM that interprets the "instructions-in-memory". Is there something equivalent to this FSM in recursion theory? (e.g. the mathematician's brain?) Just where do "the instructions" reside?
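On the multiplication question above: here is the standard textbook chain as I understand it (my rendering, not from any one source quoted here). Multiplication is *primitive* recursive: it is built from addition, which is built from successor, each by one application of the scheme f(0, y) = g(y); f(n+1, y) = h(n, y, f(n, y)). In Python, with the recursion unrolled as a counting loop:

```python
def succ(x):
    return x + 1

def add(n, y):
    # add(0, y) = y ;  add(n+1, y) = succ(add(n, y))
    acc = y
    for _ in range(n):
        acc = succ(acc)
    return acc

def mul(n, y):
    # mul(0, y) = 0 ;  mul(n+1, y) = add(y, mul(n, y))
    acc = 0
    for _ in range(n):
        acc = add(y, acc)
    return acc

print(mul(6, 7))   # 42
```

Notice there is no μ-operator anywhere: the number of loop passes is known in advance from the argument, which is exactly what keeps multiplication *primitive* recursive rather than merely partial recursive.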

Examples would provide a nice tie-in with Register machine and Random access machine models, if a person can figure out how to do it so it isn't too abstract. As I look back, I realize that I could include some of the above in both articles, but I have to feel more confident that I know what I'm talking about with regards to recursion (I sure don't have that confidence now) [I really need to see an example of "multiply"; that would help a lot]. That would leave "lambda calculus" to be tackled (I know squat about that -- ditto for an example of "multiply") but then we would have the historical elements behind the Church-Turing thesis (more or less) covered.


This article is still pretty high-level. To make it more accessible to the general public, one thing that needs to be added is an explicit explanation that the field of recursion theory was named for recursively defined functions but that this formalization is not often used in contemporary research. The origins of the name are well explored by Soare's 1996 paper [1] which is already a reference in the article. I recommend that article for a thorough overview of the history of the word recursive.
Moreover, the field of recursion theory has distanced itself from the use of specific models of computation. Some formal model is often used to define the computable functions, but this is not necessary as they can be defined using the arithmetical hierarchy or inductively defined as a class of functions in various ways. In any event, once the existence of an enumeration of the partial computable functions has been established, this enumeration is then used so that no additional reference to the model of computation is required (this goes back at least as far as Rogers' book). Thus a detailed description of various models of computation is not apropos here, although links are good. Also, the field of recursion theory includes topics such as hyperarithmetical theory that study models of computation that are not Turing computable at all.
My hope is to get this article to the point where it can be peer reviewed and, I hope, become a good article. To achieve that, I think that the article should focus on a nontechnical introduction to the field. CMummert 00:08, 18 September 2006 (UTC)
Well, there's kind of an elephant in the room. It was my fault that this article was forked from computability theory (computer science). I really think the articles should be re-merged, but with the existing material at computability theory (computer science) edited to make it substantially shorter. Some of the material is probably duplicative of existing articles, and if it's not, then the more detailed parts can go to one or more new articles. But there ought to be one survey article of the entire field, and it needs to move along a bit faster than the existing CT(CS) article. Simply merging the existing recursion theory into that article is not a good option, because the interesting stuff would not show up until too late in the article. --Trovatore 06:09, 18 September 2006 (UTC)
One problem I see with merging is that the CS article is important to computer scientists, so any merging and cutting of "their" ariticle would have to be done with great care. I don't have the energy to coordinate so much with the computer science project, so I thought the slight duplication between articles was acceptable. The CS side has Theory of computation as their main article, which says that "theory of computation" splits into computability theory and complexity theory. I am not sure whether this is standard terminology or not. Would we merge Recursion theory into computability theory or into theory of computation? CMummert 10:30, 18 September 2006 (UTC)

Table from Soare (1995)[edit]

Soare's survey (p. 29) is very useful for showing the language used. But before using it "relative computability" needs to be defined. And the purpose of the last column is not clear. Whether something such as this should go here, or in the computability theory article, or both, I am unsure. The table does provide a nice tie between the two theories.

Book            | Definition of computable | Definition of relative computability | Name used for function defined
----------------+--------------------------+--------------------------------------+-------------------------------
Kleene 1952     | general recursive        | general recursive                    | recursive
Rogers 1967     | Turing machines          | Turing machines                      | recursive
Cutland 1980    | register machines        | register machines                    | computable
Lerman 1983     | μ-recursive              | Turing machines                      | recursive
Soare 1987      | Turing machines          | Turing machines                      | recursive
Odifreddi 1989  | μ-recursive              | μ-recursive                          | recursive
Shoenfield 1991 | register machines        | register machines                    | recursive

A proviso: Soare's meaning of "register machine" needs to be looked at more closely (i.e. I will need to check all his references). A register machine provided with "indirect addressing" (e.g. Melzak's 1961 holes-and-pebbles model, the models of Cook and Reckhow 1973) is really a "random access machine" (RAM) -- and I have not seen a user-friendly way to convert a register machine into a RAM. The most primitive register machines (e.g. Minsky 1961, 1967) with only one or two tapes require fussy encoding (Gödelizing, factorization of a huge number on the tape). RAMs, on the other hand, are very easy to use, as they resemble ultra-primitive RISC processors with Harvard architecture. wvbaileyWvbailey 14:21, 29 September 2006 (UTC)

There is no difference between computability theory and recursion theory; they are synonyms. They refer to the study of computable and relatively computable functions. While a model of computation may be used to define these functions, the specific model of computation is not particularly important. A contemporary research paper is unlikely to mention any specific model of computation whatsoever.
Relative computability. If a set A is computable by an oracle Turing machine using oracle B then A is relatively computable from B. Similar, and equivalent, definitions can be given in terms of other models of computation. Or in terms of the arithmetical hierarchy: A is relatively computable from B if A is Δ⁰₁ definable relative to B (that is, both A and its complement are Σ⁰₁ in B).
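A small Python illustration of the idea (mine, with an ordinary function standing in for the oracle's black box): the procedure below decides A = { n : 2n ∈ B } uniformly in any oracle B, so A is computable relative to B no matter how complicated B itself is.

```python
def in_A(n, oracle_B):
    """Decide membership in A = { n : 2n in B }, asking the oracle
    one membership question.  oracle_B may be any black-box test."""
    return oracle_B(2 * n)

# Stand-in oracle: B = the even numbers (in reality B could be
# noncomputable; the procedure above would not change).
evens = lambda k: k % 2 == 0

print(in_A(3, evens))   # True, since 6 is in B
```

The key point the sketch tries to show is uniformity: the same finite procedure works for every choice of B, which is what "A is computable from B" means.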
When Soare says register machine he probably means a machine that has registers and no tapes. This is now standard terminology. I think you are attaching too much weight to old sources, such as Minsky, which use terminology in ways that are no longer standard. The most widely used contemporary introduction to computability theory is Soare's book. Soare is working on a revised version which will likely be the next gold-standard reference.
I've taken more of a survey approach with the wiki-articles I'm slogging away on. Actually, through my ancient-history background searches I derived virtually the identical machine hierarchy as that in a survey article by van Emde Boas (1990) pp.3-66 with 141 references (!) in Handbook of Theoretical Computer Science, Volume A: Algorithms & Complexity QA76.H279 1990. (A very useful source comparing results of complexity theory measures between the various models (multitape-TM, register machine, random access machine (RAM), random access stored program machine (RASP), & pointer machine). Our definitions agree with one exception: he gathers the RAM and RASP into the general category "register machines" and I've broken them out/away from the classical "register machine": Thus a RAM is a register machine with additional indirection (van Emde Boas agrees, p. 23), RASP is a RAM with a program in its registers (van Emde Boas agrees, p. 24). Maybe this is where the terminology confusion comes in. It will be good to check Soare and update my Turing machine equivalents work. That whole page will get a rework at some point.
This is as much a reminder to myself as a clarification to others: the article register machine presently uses the names "counter machine" and "register machine" synonymously. Perhaps "register machine" should be reserved for a class of machines including { counter machine, RAM, RASP machine, pointer machine }. The "counter machine" would remain as described in the "register machine" article -- a very primitive device indeed. Maybe Soare gives guidance too. But I need support from the literature. wvbaileyWvbailey 20:57, 29 September 2006 (UTC)
Are there too many new books? Every author comes from his own angle/viewpoint with his own pet model (e.g. Davis-Sigal-Weyuker (1994) use a single-tape, multi-symbol Post-Turing machine model). Different model-usage seems to have to do with 'computing on strings' versus 'computing on numbers'. van Emde Boas actually gives a bunch of sub-names to RAM models on his page 24. As I recall I referenced these in the Turing machine equivalents (so many articles, so few brain cells...)
I hope Soare is doing good: a new presentation isn't necessarily a good one, or an improvement. Case in point: my thermodynamics text was written by the prof I had at the time, who was the dean of the school to boot -- Myron Tribus -- just a horrid text! Just horrid... I was in trouble, failing the tests. So I went deep into the stacks, got out the original thermo text by the original author (can't remember the name) ca 1850's (a musty-smelling thing it was) ... and suddenly Carnot cycles etc. became clear as a bell. And there was Linville (chair of EE at Stanford at that time) who required everyone in the dept. to take his course, which we derisively called "Linville's Lumpenses" -- his "lumps" were never to be seen in the literature again (nigh 30 years anyway) and utterly useless. What a waste.
Poor students ... they need all the help they can get. In the article register machine I used extensively the same source Soare references (p. 17)-- Shepherdson-Sturgis (1963). It was suggested to me by a reader who wondered why S-S weren't mentioned in the article, why "register machines" are called "Minsky machines", for instance. So I did the ancient-history search. (An interesting read, actually about the collision of papers by Minsky, Lambek and Melzak and Shepherdson-Sturgis all in 1961 but S-S didn't appear until 1963. It turns out the Germans were there first (mid '50's, notably Rosza Péter.) Soare's set of register-machine instructions (in his paper) is just another version that is not always used -- the Peano axioms in disguise. But Soare augmented them with "transfer" -- a very useful instruction indeed.
Anyway until I see his treatment I don't know how "easy" it is for Soare to show that the register machine without "indirection" is Turing-equivalent. The clumsy 2-tape factorization method used by Minsky is why Melzak (1961) added indirection (indexing) to his model to make it into a RAM (random access machine). Then a demo becomes very easy. (I'll bet Soare does the same thing ... but we shall see...). I will explore Soare next time I get to the library.
Thanks for your briefing on this stuff. (I would love to see the papers by Rózsa Péter, Horst Kaphengst, and H. Hermes referenced in Shepherdson-Sturgis but so far I haven't found them in the Dartmouth library, and I can't read German anyway. Any thoughts? Thanks ...) wvbaileyWvbailey 18:38, 29 September 2006 (UTC)
This article could use an explanation (very brief) of what Turing computability is, before the terminology section. I'll try to write it soon. CMummert 14:44, 29 September 2006 (UTC)
By the way, I just got a chance to look at the textbook Automata and Computability by Kozen. It's organized as a series of lectures and seems very clear to me. It is focused on computer science, but I think that the part from Lecture 28 to the end gives a nice overview of recursion theory and its relationship to Gödel's incompleteness theorem. In particular, it explains several models of computation including μ-recursive functions, Turing machines, and λ calculus. CMummert 17:09, 29 September 2006 (UTC)
When I said that Soare's Computably enumerable sets and degrees is the standard reference, I mean the standard reference that a research paper would cite as a reference on basic concepts. It is a graduate-level book; the improved presentation is really intended for clearly explaining topics that are generally considered technical, such as difficult priority arguments, rather than giving a leisurely introduction to the basic facts about Turing computability.
I have looked at very few undergraduate computability theory books; in my impression they tend to either
  • Be computer science oriented, with an emphasis on formal languages and automata. This is very important for computer scientists, because automata are used in the specification of compiler algorithms and network protocols, and you can't understand computer science without understanding them.
  • Or be general introductions to mathematical logic that include first-order logic, computability, Godel's theorem, and other such things. These can be good books, but they rarely go beyond the definition of a Turing degree.
In either case, the undergrad books usually don't focus enough on computable functions and sets, because of their level. The number of undergrad books that include priority arguments is vanishingly small, but priority arguments are the cornerstone of recursion theory and its defining method of proof.
So if you see any modern undergraduate introduction to recursion theory that doesn't fall into the above categories, please let me know! Cooper's book cited in the article is about the only example I know of. CMummert 20:28, 29 September 2006 (UTC)
RE Kozen: Bummer, you and I just had an edit conflict. I actually saw Kozen 1997 when I was doing a survey of Turing tuples. And then I re-found it when I was puckered about the Finite state machine article's definition of "state". I cc'd lecture #3 pp 14ff. His definition agrees with Minsky (woops that ole guy again) and with the engineering definition (sort of, kind of) and more importantly with the dictionary definition: L. status: stationary, standing. This is my viewpoint too. I can't imagine defining a finite state machine without defining "state". wvbaileyWvbailey

A tiny quibble: re Generalizations of Turing computability[edit]

"Recursion theory includes the study of generalized notions of computability such as hyperarithmetical reducibility and α-recursion theory, as described by Sacks [1990]. Strong analogies have been found between these generalized computability notions, which are of a set-theoretic character, and the classical notion of Turing computability, which is of an arithmetical character. Both Turing computability and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even stronger notion of degrees of constructibility is studied in set theory."

Melzak (1961) had this to say about the above in bold-face:

"It might be remarked that [Turing's] A-machines as well as most alternatives, are primarily oriented toward the logical task of symbol manipulation rather than toward the arithmetical task of calculating... it is our object to describe a primitive device, to be called a Q-machine, which arrives at effective computability via arithmetic rather than via logic. Its three operations are keeping tally, comparing non-negative integers, and transferring."

Simultaneously Shepherdson-Sturgis (1961/3) picked up the torch to find a model that would "carry a step further the 'rapprochement' between the practical and theoretical aspects of computation suggested and started by Wang [1957]" (S-S p. 215)

Both Wang (1954/7) and Minsky (1961) placed Turing's A-machine in the camp of the symbol-manipulating logician but were trying to find a "basis for recursive function theory involving programs of only the simplest arithmetic operations" (Minsky 1961 p. 437) for the "applied mathematician and electrical engineers" (Wang 1954/7). Perhaps the word "Turing" should go away and the word "machine" be substituted, or "computability by machine" be used, etc. wvbaileyWvbailey 19:32, 26 October 2006 (UTC)

For my own use I have begun to use the word calculate for that which is done strictly by a person with paper and pencil or machine (i.e. a mathematician writing a proof, or doing a calculation by hand -- thus wouldn't recursion fall into that description?). But compute is reserved for the actions of a mechanism or a person-as-mechanism (acting strictly as a mechanism: computer/computor) while in the process of "computing". Thus a person either using (i) a computer to do their accounts, or (ii) writing numbers and doing arithmetic by hand, or (iii) using a pocket "calculator", would be calculating, but the mechanism itself -- computer or "calculator" -- is computing. My dictionary agrees, sort of -- "calculate" comes from calx "chalk" implying "to write", calculus "pebble" implying "to tally", and does not include "by use of a computer", but "to compute" (first usage 1616) is more ambiguous and allows "to calculate" as a synonym. I have observed at least in the olden days (e.g. Kleene, Minsky) a similar bifurcation of usage (but informally adhered to). Maybe in 2006 the usage has blurred the distinction? wvbaileyWvbailey 20:06, 26 October 2006 (UTC)

By arithmetical character I meant that Turing computability is closely linked to the structure of the natural numbers, to the point that the true theory of Peano arithmetic tells you everything there is to know about the computable and r.e. sets. The arithmetical character is that each completed computation can be represented as a finite object. In hyperarithmetical theory and α-recursion theory, it is no longer the case that each "generalized computation" can be represented by a finite object.
Since the previous paragraph is probably not easy for someone who doesn't already know recursion theory to understand, I'll try to clean up that section to make it more accessible.
I have never thought about keeping different meanings for calculate versus compute. Possibly this is because I accept the Church-Turing thesis that anything effectively calculable is Turing computable.
Please let me know any other thoughts you have about the article; I would like it to be as accessible as possible. CMummert 21:39, 26 October 2006 (UTC)

A scan of the article[edit]

The following words I flagged as could-be-linked (or not); about half of them I would have had to look up to get the gist (the more X's, the less I knew about the topic):

XRelative computability --> XXTuring reduction
XXdegree structure
Thue, XThue system, there is a link to semi-Thue system
Normal form
identity element
XXXXsimplicial complex
XXXhomeomorphic space
Diophantine equation
XXX "local structural properties, global structural properties"
XXXalpha-recursion theory
Xfirst order formula =? first order arithmetic --> Peano axioms

In one place you used the abbreviation "r.e. Turing degrees" without defining what "r.e." stands for -- you do define it later in the article.

Some background/history might be useful (here or in another article with a recursion context):

The word recursion: First use (1616), comes from recur, to run again, L. re- + currere (as in current). To repeat. Recursion: "RETURN". "the determination of a succession of operations on one or more preceding elements according to a rule or formula involving a finite number of steps" (or not -- recursive: "...a procedure that can repeat itself indefinitely, or until a specified condition is met.")

Peano's definitions that followed the Peano axioms are recursive in nature and these came from Dedekind, correct? What possessed Dedekind and Peano to investigate this? Then what happened in the time between Peano-Dedekind, and "primitive recursive functions" (who first defined those?) and "recursive functions" of Gödel-Turing-Post-Herbrand-Church-Péter-Ackermann-Kleene etc? Wasn't it the antinomies/paradoxes of Russell ca 1900 and his crowd? and Cantor's method on top of it -- a kink in the Logicist/Finitist program of Hilbert? So was it Hilbert's problems that focused their work on "Foundations" and "Metamathematics"? Did Brouwer and his band of merry men play a role? This brings us up to ca 1950. We have Kleene's book (1952) (+ his professorship) which seems to me to be the magnum opus that set off Wang, Davis, Minsky, Melzak, Lambek, and ?? and all the host that was to split off to become the "theoretical computer scientists" (and some Russians too, isolated by the cold war). That left the die-hard recursion theorists. I lose the "recursion theorist" thread from 1960 on. Hilbert's tenth is solved. Gödel was on his last legs, Chaitin was a student. From 1960 on -- who were the players? Who, or what "schools of thought", are the major players now? I know only the name Chaitin. You mentioned a few others... You do hint at "some of the big problems...".

Maybe you know of a good (historically-inclined) book -- "The history of recursion theory"?

On a more humorous take on the matter: the random wiki-student who bumps into "recursion theory" might ask: "When I grow up (choose a college, pick a field of study, declare my major, become more theoretical than computer science) I want to be a mathematician -- why would I want to be a recursion theorist?" I think the answer is: "Because you get to work on some really cool unsolved problems." The kid asks: "Yeah, such as? Explain..." My dad was an actuary. "What the hell is an 'actuary'?" I used to wonder. In 3rd grade I told people he sold insurance. Then when I got in high school he showed me his books of mortality tables: he definitely did not ignite in his son the fiery actuarial spark. How do you ignite the fire in the mathematics student? Hope some of this helps. wvbaileyWvbailey 03:19, 27 October 2006 (UTC)

Those comments are very helpful. I addressed some of them, and others I will have to think about for a while before I can find a way to write them. Some of the things you pointed out I wouldn't have noticed myself (I forget that the terminology is not familiar to the average person). CMummert 14:27, 27 October 2006 (UTC)

Glad to be of help. Another thought came afterwards as I was lying in bed -- would you consider Recursion theory to be one of (if not truly the only, or one of just a few of) the true "foundations" of mathematics? If so, or there's a tie-in to "foundations", this might ignite a few fiery sparks. Just a thought.

The fact that recursion theory deals only with the counting numbers + 0 (the nonnegative integers) (is this correct?), that it is fundamentally arithmetic in that sense, is what attracts me to it now that I know more about it. (I thought this point was very good in the article intro). What also attracts me is that recursion theory (in the broadest sense -- algorithm-by-machine) is (I believe) coming very close to a problem around philosophy of mind, in particular where exactly are "the numbers"? For example (from the "foundations" talk page): Searle's (of Chinese Room fame) assertion that a machine is actually just a mindless, syntactic (symbol-manipulating) mechanism, that the meaning of the numbers (for some unexplained reason that I suspect has to do with enumeration i.e. "animal logic") resides only inside a human mind (and maybe the minds of crows!), hence here is where the numbers really are (etc. etc). (I am not sure what I believe, but anyway...) I don't recommend that this goes in the article, but it does suggest a powerful angle -- that recursion theory is working at the base of the edifice; Gödel-Turing-Post-Church et. al. made the edifice tilt a bit by mining away at its foundation. And more minings are underway (e.g. Chaitin) but simultaneously better foundation-reconstruction is in progress (examples?)... being a recursion theorist is very cool when you think about it that way. (I have to go and deal with the chipmunk that is raiding my bird-feeder...) wvbaileyWvbailey 16:10, 27 October 2006 (UTC)

Yes, Turing computability only deals with the nonnegative integers, and sets of them. Higher recursion theory uses other things, but is much less well known and is not as foundationally important. I usually identify finite binary sequences with natural numbers (both of them require understanding the word "finite" to define) and so I don't distinguish between manipulating finite strings of symbols versus manipulating natural numbers.
I would say that recursion theory is "foundational" mostly in its definition of Turing computability as the correct formalization of effective calculation. This definition is almost universally regarded to be the right one. It is required for proofs that certain problems are not solvable by computer programs. It also makes possible the definition of random real numbers originally given by Martin-Löf. I don't want to speculate too much on foundations in this article because I'm not trained in history or philosophy and because my random speculations would probably be original research from the WP point of view; I have already pushed it as far as I think I can. CMummert 16:27, 27 October 2006 (UTC)

RE Turing computability, Gandy and Soare: after reading these I am now wondering about the question of computer/computor versus computation with a "machine-in-the-broadest-sense". Can a "machine-in-the-broadest-sense" calculate more than a Turing-machine-computer/computor? There is still this niggling little thing bothering me about "analog computation": suppose all the computations in the "analog box" were analog in nature (but not necessarily linear, e.g. the squashing/sigmoid function is my favorite), converted only at the "end" to discrete "numbers"? (If it were just up to me I would break the Post-Turing Thesis into three, in the manner of Kleene (1952), pull in Gandy-Soare etc.)

I hear you about the original research problem; I'm confronting the same issue in an article as we speak: "Is it, or is it not, original research?" I see you have been busy -- the article Algorithmically random sequence -- I don't know much about them but find random number generation fascinating. (RE chipmunks: I put a bunch of seeds out for the chipmunk so it leaves the feeder alone. It is busily filling its cheeks with seeds and scurrying back to its little burrow under our front porch. It will be hibernating soon. I'm sure you were just dying to hear the follow-up to the chipmunk problem.) wvbaileyWvbailey 18:56, 27 October 2006 (UTC)

Adding Topics of Recursion Theory[edit]

I have added some topics of recursion theory but I think the list is still incomplete. So whenever you find some important topic to be added, please do so. The subsections should not be too long, but the most important things should be there. For example arithmetical hierarchy and index sets are still missing. Will do it when I have more time. Frank Stephan 10:33, 25 February 2007 (UTC)Frank Stephan

Should this be renamed "Computability Theory"?[edit]

Has this been discussed before? There's a section about the name of the subject, but it seems that it is becoming increasingly referred to as "computability theory" rather than as "recursion theory". At what point should Wikipedia rename the article, and has that point already been passed? Althai 17:22, 4 March 2007 (UTC)

The name has not been discussed in detail, although there was some discussion at Talk:Computability theory. Currently, there is a strange split in which computability theory links to two pages: this one on the intersection of the field with mathematical logic and another on the intersection of the field with computer science. Although these "ought to" be merged, the articles are tended by mostly disjoint sets of editors and so a merge would require a lot of discussion to pull off.
As I see it, the titles of the articles are not very important provided that all reasonable alternative titles either redirect or give an explicit link to the correct article. I think that is the case with this article. Discussing the title too much is likely to lead to a long discussion that ends with a lack of consensus, which is not worth the wasted time. Following the bike shed principle, I think that it is more useful to discuss the actual contents of the article rather than the title.
By the way, I noticed that you have created some useful articles on other computability theory topics. It's good to have one more knowledgeable person in the field here. CMummert · talk 20:51, 4 March 2007 (UTC)
If by "useful articles" you mean stubs... Althai 06:03, 5 March 2007 (UTC)
I meant articles on important topics that were previously not covered at all. Look at the first version of this article. CMummert · talk 13:26, 5 March 2007 (UTC)
Many of the practitioners in the field still refer to it as recursion theory. Soare is trying hard to change this but not everyone agrees. I'm happy with the page going under either term so long as there is a redirect but they should be merged.Logicnazi 02:15, 30 August 2007 (UTC)

Merge with computability theory[edit]

This page needs to be somehow merged with the page on computability theory. Don't care what goes where but the two pages give the impression these are two distinct subjects rather than just different focuses on the same subject. Maybe we should have a brief overview and then talk about the two directions studies have taken (oracular computation vs. deciding what other systems are Turing complete) but the way things are now is wrong. I know this has been mentioned before but any effort to fix things seems to have died. Logicnazi 02:15, 30 August 2007 (UTC)

You mean Computability theory (computer science), since Computability theory is a dab page. I completely agree they should be merged, but have no energy to do it. You may want to inquire at Wikipedia:WikiProject Computer Science, since the other page is the CS page while this is the math page. — Carl (CBM · talk) 02:24, 30 August 2007 (UTC)

Aren't these wholly different subjects? Recursion theory deals with various infinitary & non-computable models of computation, which are interesting for pure mathematicians, but don't have applications. Computability theory (computer science) deals with questions of what algorithms exist. Computability theory isn't necessarily applicable either, given that maybe algorithms have insane computational complexity, but they are far far closer to applications than the infinitary models of computation debated by recursion theorists. —Preceding unsigned comment added by (talk) 16:57, 26 July 2009 (UTC)

Computable and uncomputable sets[edit]

This article is the top-level article for all of recursion theory. As such, it needs to be a summary, pointing to other articles for details, so that a reader can read through this entire article and get a sense of the field. To that end, I have pruned some recent additions to the first section. I did keep some of the expanded commentary on the correctness of the definition of computable functions, which was previously just a single sentence.

Quotations need to be kept to a minimum; multiple long quotations in a single section are generally disfavored in favor of summary. In addition to stylistic issues, each quotation we include is an instance of fair use of copyrighted material, and our use of such material must be minimal to meet the requirements of copyright law.

This paragraph belongs in the article on the Church-Turing thesis

Indeed, Gödel would strengthen his position over the next 20 years. From day one he had doubted his own recursion theories (cf Gödel's communication to Davis 1952:40) -- he turned out to be correct about his primitive recursion and later augmented it with a suggestion by Herbrand (Gödel 1934 in Davis 1965:41ff). He would later (1963) say that "due to A.M. Turing's work, a precise and unquestionably adequate definition of the general concept of formal system can now be given" (Gödel 1965 in Davis 1965:72), adding in footnote 70 (1963) that "In my opinion the term "formal system" or "formalism" should never be used for anything but this notion" (Gödel 1963 in van Heijenoort 1967:616). In a footnote * (1965) he stated that "As for previous equivalent definitions of computability, which, however, are much less suitable for our purpose, see A. Church ... (1936)" (Gödel in Davis 1965:72). This last remark seems to include both Church's λ-calculus and his own recursion theory, both of which were used by Church (1936). Soare (1996) gives a complete history of the combined theses.

The following goes into more detail than is needed here.

The first undecidable propositions were these:
Gödel 1931: Given any PROOF (sequence of formulas and axioms) in a formal system k (broad enough to express the notions PROOF and PROVABLE within the system), this PROOF is PROVABLE in system k.
I will work on this more to see if I've got it right, or if there's a better expression of it.
Church 1936: Given any well-formed formula F in the λ-calculus, this formula F has a "normal form" in the λ-calculus (i.e. is a valid formula).
Turing 1936-7: Given any computational machine M, this computational machine D can determine if M is "circular" (enters and remains stuck in an infinite loop) or "circle-free." (Proof 1)
Modern rephrasing of proof 1: Given any computational machine, M, this computational machine H can determine if M will ever halt. (Halting problem)
Turing 1936-7: Given any computational machine M, this computational machine E can determine if M will ever print a 0. [Proof 2]
Turing 1936-7: Given any computational machine M and a formula Un(M) in the formal system k, this method A(M) can determine if Un(M) is provable in k. [Entscheidungsproblem Proof 3]
I may not have expressed this last one quite right.
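The results in this list all rest on the same diagonal construction. As a minimal illustration (my own sketch, not taken from any of the sources cited above): no list of total functions can contain every total computable function, because the function built along the diagonal of the list differs from each entry at some input.

```python
def diagonalize(fs):
    """Given a (finite) list of total functions on the naturals,
    return a total function that differs from fs[n] at input n.
    This diagonal step is the common engine behind the
    Gödel/Church/Turing negative results listed above."""
    return lambda n: fs[n](n) + 1

# Any candidate enumeration of total functions is defeated:
fs = [lambda n: 0, lambda n: n, lambda n: n * n]
g = diagonalize(fs)
assert all(g(n) != fs[n](n) for n in range(len(fs)))
```

Turing's halting argument runs the same way: assume a total decider exists, build the "diagonal" machine that does the opposite of what the decider predicts about it, and derive a contradiction.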

In particular, the rephrasing of Gödel's result isn't clear to me, and the "circle free" terminology of Turing is idiosyncratic, not used in any further publications, and only of interest as a historical side note.

— Carl (CBM · talk) 13:02, 3 September 2007 (UTC)

Feedback is always good. I've fiddled with the above.
As you suggest I'll probably move the first block of stuff over to the Church-Turing thesis and probably the latter stuff (in some form or other) over to Entscheidungsproblem. Both articles, in my opinion (as you've probably guessed), need work. The C-T article in particular seems to be bugging people so I am adding a huge new "history" sub-article, and Softest and I really worked over the lead; now it is at least accurate and sourced, altho admittedly wordy and difficult to read. Probably the lead can be pruned and some of the stuff moved into a later section.
About the stylistic issues: this past week the David Hilbert article got majorly panned and delisted from GA. My conclusion there is to first in-line reference/cite everything that can be referenced, virtually line by line, and then afterwards go back and merely hide the in-line references/citations and work out the "compelling prose". If I was "king of wikipedia", I would give wikipedia a simple mechanism to "show" the hidden citations and sources for students and scholars etc. At a party the other night I was in a discussion of wikipedia, and the first thing the guy I was talking to said was: "How do you know anything you read there is correct?" I couldn't begin to answer him, my feeble attempts led to eye-rolling etc. Way I figure this is: wikipedia articles must go through an infancy stage that involves heavy use of in-line citations and thereafter some mechanism should be available to indicate exactly, and accurately, where the info came from.
About use of block quotes: I understand that "Fair use" implies quotations for purposes of review, but I am unclear about use for wikipedia. How "big" a block quote is "too big"? How many block quotes from one source are "too many"? For example, the poor Hilbert article has only one good source (Reid). What to do? My style has always been to use quotes where possible; in my college days I saw folks hung for poor paraphrasing practices. I dunno how to approach this except keep doing what I believe is right from the scholarly, as opposed to legalistic, point-of-view. wvbaileyWvbailey 18:04, 3 September 2007 (UTC)
Avoiding excess quotations is the right thing to do from both the scholarly and the legal point of view. No academic paper or text in recursion theory would include a large amount of quoted material. Especially in an encyclopedia, the goal is to paraphrase and to present a summary of the contemporary viewpoint. We don't want articles that consist merely of a large number of quotations; we do want articles that explain what is going on and use quotations when they are particularly useful to drive home a particular point.
I am particularly interested in having strong, correct prose and strong, correct content. Promoting articles to GA or FA status is a much less important goal. One of the central problems with GA and FA is their focus on form over content, which causes many recognized articles to read like freshman research papers rather than encyclopedia articles.
As to trusting that content is correct, there isn't much that the average reader can do except to trust that it's mostly correct but not accept it as completely authoritative. In any case inline citations wouldn't help that in mathematics articles (e.g. David Hilbert is a biography; computable function is a mathematics article). In many cases the WP articles are much more accessible than the texts used as references, and for that reason the WP articles are a unique resource for people interested in a glimpse of advanced mathematics without first obtaining an advanced degree. — Carl (CBM · talk) 18:40, 3 September 2007 (UTC)

Attempt to merge[edit]

For anyone keeping this article on their watchlist: there is currently an attempt at merging a number of articles on (non-)recursive/(un)decidable/(un)computable sets/languages/decision problems into a single, thorough article. The experiment currently sits at Recursive languages and sets. Comments are very welcome. Pichpich (talk) 00:42, 21 February 2008 (UTC)

A paradoxical situation[edit]

To keep the discussion in one place, I have removed this comment, which is a duplicate of the discussion at Talk:Algorithm#A_paradoxical_situation. Please go to that page if you wish to discuss the matter. — Carl (CBM · talk) 20:41, 19 March 2008 (UTC)

Whose grant denial precipitated the name change?[edit]

I've heard from numerous sources that recursion theory was renamed to computability theory because one of the big shots was denied an NSF grant. Who was it? Soare himself? I've also heard a really nice quote from this guy on the matter, but I can't reproduce it verbatim. Can anyone give it accurately? Thanks! —Preceding unsigned comment added by (talk) 17:00, 26 July 2009 (UTC)

Reorganize the Computability articles[edit]

(Up to the separating line this discussion has been copied from Wikiproject Math to keep it all here.)

We have these "general" articles:

We also have much better articles on the important topics, recursion theory, lambda calculus, Turing machine, and random access machine; we also have a decent overview article on register machines in general.

The way I see it, computability should be a high-level intro to the often encountered equivalent models of computation: recursion theory, lambda calculus, Turing machine, and random access machine. This is along the outline of S. Barry Cooper's Computability Theory (see pp. 7-8), which despite being written by a mathematician was quite satisfying for me as a computer scientist (despite the many misprints, and his insistence on calling RAMs URMs, but that's another matter).

(I will cross-post to the CS wikiproject to attract participants from there too, but that project is nearly dead.)

Thoughts? Pcap ping 11:22, 13 August 2009 (UTC)

I don't know why there was originally a split between computability theory (computer science) and recursion theory. However, I know why I have never tried to merge them, which is that it seemed too difficult to write a single article on "computability" that gave the proper focus to both areas.
At some point I worked on getting the recursion theory article up to shape. This article does not describe a model of computation; "recursion theory" is a field of mathematical logic concerning computability and definability. The model of computation is described at μ-recursive function. The article recursion theory gives a pretty reasonable overview of that field of logic.
It would be possible to merge computability theory (computer science) into the article recursion theory, but I have never been convinced that many computer scientists would be happy with the result. I could work on that next week if there is a desire for it among the computer scientists.
We should discuss that at Talk:Recursion theory. — Carl (CBM · talk) 11:43, 13 August 2009 (UTC)

Okay, the issue here appears to parallel that of general relativity. We need an article that deals with the stuff at an introductory level, just like introduction to general relativity does. At a first approximation Computability theory (computer science) should simply be renamed to introduction to computability theory, and recursion theory (this article) should be renamed to computability theory (although computability alone would get more google juice). The introductory article still needs more expansion to mention other important models besides Turing machines, as I explained above.
(I'm noting in passing that lambda-recursive function simply redirects to lambda calculus, although we do have different articles on group theory and group (mathematics) as a parallel for recursion theory versus μ-recursive function.) Pcap ping 12:42, 13 August 2009 (UTC)
I'd have thought the Computability article could be made an introduction and overview instead. Compared to the Computability theory (computer science) article it would give a bit more history, enough to show the timeline of the various ideas and link the mathematical and computing sides, and perhaps have a bit more width and less depth than Pcap seemed to want. I really don't see that the scope of the Recursion theory article need change much at all, in fact I think it could be restricted a bit. The business about definability and proof and suchlike stuck at its end is more suitable for an overview on computability. I'm not altogether sure what the purpose of the Computability theory (computer science) should be or whether it should exist at all, I'd start by copying much of its content to Computability. Dmcq (talk) 13:49, 13 August 2009 (UTC)
By, the way Computable function repeats a lot of stuff again. Dmcq (talk) 13:56, 13 August 2009 (UTC)
That sounds good. Basically computability = introduction to computability theory and this article (recursion theory) could well be renamed to computability theory as it covers many advanced/research topics that are of little or no interest to the average undergrad in comp. sci. Pcap ping 14:03, 13 August 2009 (UTC)
Also note that we have an "experiment and see if we can merge this stuff" article at Recursive languages and sets. (too many cooks I guess) Pcap ping 15:05, 13 August 2009 (UTC)
Yes, just move this to computability theory. The stuff that's in computability theory (computer science) (which isn't really about computability theory) can go to computability or theory of computation. Recursive languages and sets (+recursively enumerable languages and sets?) can be useful to bridge the jargon-gap between computer science and mathematics, so maybe link that in the lead? —Ruud 16:41, 13 August 2009 (UTC)
By the way, User:Martin Davis (Martin Davis) complained about the dab page too at Talk:Computability theory. Pcap ping 17:16, 13 August 2009 (UTC)

(←) I don't object to moving this page to computability theory, but there are a few important points that come to my mind.

  1. Recursion theory in mathematical logic is fundamentally about the study of non-computable functions and sets.
    1. For this reason, the difference between recursion theory and μ-recursive function is not the same as the difference between group theory and group (mathematics).
    2. If we want the scope of this article to also include computer science, we need to add some more sections on subrecursive computation, complexity, etc., as these aspects of computability are not covered in this article at present.
  2. This article does include an introductory overview of the basic topics of computability in sections 1 and 2. Since this is a "top of the pyramid" article, it should not go into detail about various models of computation, but just link to them and explain the general concept, as is presently done.

— Carl (CBM · talk) 00:59, 14 August 2009 (UTC)

I realize this topic is dear to you, but let's not rehash stuff that (other) researchers in this area could not agree upon. Yes, the cool stuff in this field is the relative uncomputability cf. arithmetical hierarchy etc., but the article already starts with "Recursion theory, also called computability theory,..." I don't see how moving it to computability theory requires a fundamental change in contents. For all practical purposes (asymptotic) complexity theory is seldom studied by mathematicians (as far as I know), so probably nobody wants to read about P=?NP in this article, regardless of whether computability theory should in theory (sic) also contain complexity theory. Just point to the right article in one sentence, as it's done in complexity theory. (By the way, complexity theorists have pretty much given up on Wikipedia since they have their own zoo.) Pcap ping 03:23, 14 August 2009 (UTC)
The argument you are linking to is about whether to use the literal term "recursion theory" or "computability theory" (and is explained in the article already). There isn't any real disagreement that the topic of interest in mathematical logic is noncomputability. Rogers' Erlangen argument says as much, and was published far before the terminology dispute (and is described in the article).
But computer scientists are much more interested in computable things, and so if this article is meant to cover both the mathematical logic and computer science areas, then this article has to actually discuss the things that are of interest to computer scientists. We would need to add some brief sections on automata theory, complexity, formal semantics, and other topics of interest to the CS community. This difference in focus is most likely why the two articles were separated in the first place. I can add these sections, but I need some time to do it, and am traveling over this weekend.
The complexity zoo is a fine project, but its existence can't be used to argue that we can shortchange complexity here. However, if there are few complexity theorists here, that might explain why computability theory (computer science) has languished for so long. — Carl (CBM · talk) 10:59, 14 August 2009 (UTC)

Concrete proposal[edit]

Unless someone objects, I will make the following move in a couple days:

Then I will begin working on enlarging the scope of this article to include computer science as well as mathematical logic.

Independent of that, we need to figure out what to do with computability. I would suggest writing it as a summary of different models of computation, the concept of algorithm, etc. Perhaps we should discuss that at talk:computability in case someone else has that on their watchlist. — Carl (CBM · talk) 10:59, 14 August 2009 (UTC)

Sounds good to me. Pcap ping 11:18, 14 August 2009 (UTC)
Good idea. Probably the current contents of Computability theory could be moved to Computability and Computable redirected to Computability without any bother so as to get a start on the business. In future I think Computability should not be a disambiguation page and in fact just moving Computability theory (computer science) to it would provide a reasonable basis for developing a top level introductory article. So I guess the talk page on Computability theory (computer science) should also be warned about this discussion - I'll go and post something there. Dmcq (talk) 11:21, 14 August 2009 (UTC)
I put a pointer to these discussions at Talk:Computability_theory_(computer_science)#Reorganize_the_Computability_articles I guess the discussions need to be centralised until such time as the articles are actually moved/split/redirected or whatever. Dmcq (talk) 11:35, 14 August 2009 (UTC)

OK, amended proposal:

  1. move recursion theory → computability theory, leaving a redirect
  2. move computability theory (computer science) → computability
  3. create a redirect computability theory (computer science) → computability theory
  4. redirect computable → computability

Then the plan is to edit computability theory to reflect both math and CS, and to edit computability to be an overview of computability. — Carl (CBM · talk) 11:37, 14 August 2009 (UTC)

Although I didn't say it so clearly, steps 1 & 2 are what I proposed too, so I can only approve. By the way, the article on computability should say somewhere at the top that it's an introductory article so people don't miss computability theory and start adding esoteric topics to the introductory article.
I don't really have an opinion on step 3, but I suggest someone checks inbound links there because computability theory (computer science) might be linked in order to explain simple concepts, in which case it might be less work to just make it point to computability (the introductory article). "computability theory (computer science)" is an unlikely search phrase.
Step 4 is a good idea too because Computability theory as it stands is a rather unusual "double dab" page: computable redirects to it and the term is also dabbed there as an adjective. There's no point in having that since the connection between computable sets, computable functions etc. is best explained in an article rather than give the impression they might be unrelated by listing them in a dab page.
Pcap ping 12:17, 14 August 2009 (UTC)
P.S.: Don't add a lot of asymptotic complexity stuff to computability theory. The Outline of computer science#Theory of computation, which roughly follows the ACM classification, indicates that most if not all computer scientists don't expect that asymptotic complexity be discussed under "computability theory". Parenthetically, the ACM classification is actually a DAG, not a tree, but the labeled descriptors are subtrees. "Computability theory" falls under both F.1.1 (Models of Computation) and F.4.1 (Mathematical Logic). On the other hand (asymptotic) complexity falls under F.1.3 (Complexity Measures and Classes) in the abstract setting, and under F.2 (ANALYSIS OF ALGORITHMS AND PROBLEM COMPLEXITY) in an applied setting. My point here is that computer scientists don't see (officially or otherwise) complexity theory as a subtheory of computability theory. This is a point to keep in mind in updating the articles. Pcap ping 12:44, 14 August 2009 (UTC)
P.P.S. Theory of computation#Computability theory explains the math/CS philosophical differences in approaching computability theory quite well, despite the fact somebody wants a reference for it... Pcap ping 12:57, 14 August 2009 (UTC)
Seems good to me except as Pcap says step 3 should probably redirect to the Computability article which is what an automatic move does anyway. It has a lot of links but my guess is most of them are there because people didn't want to end up on the disambiguation page. Dmcq (talk) 14:37, 14 August 2009 (UTC)

I wouldn't expand this article too much. When computer scientists think about computability, they will usually think about the broader field of theory of computation. This would encompass automata theory (all the various computational models), computability theory (why are certain problems like the halting problem and the word problem non-computable?), computational complexity theory, but also models of concurrent computation etc. Most of this probably has little place in this article, which should indeed mostly focus on non-computability. —Ruud 20:17, 14 August 2009 (UTC)

The theory of computation is just an overview article splitting into computability and complexity. This discussion is about the first part of the next level down as far as that is concerned, i.e. computability. Personally I doubt most computer scientists confuse computability with complexity. Dmcq (talk) 20:58, 14 August 2009 (UTC)
That article currently isn't very representative of the field. What we call computability theory/recursion theory is only a subfield of the whole theory of computation, covering mainly the question "What is computable and what not?". There are, of course, many more questions one could ask related to computability ("What languages can certain models of simple computational devices recognize?", "How many elementary steps are required to solve some computational problem?", "What can be reliably computed concurrently when communication between nodes is unreliable?") These questions probably shouldn't be addressed in this article. —Ruud 21:12, 14 August 2009 (UTC)
Except for the problems of reliability I'd have thought the top level computability article should probably reference those in a general sense though they wouldn't be of interest in the mathematical computability theory. Along with things like quantum computers and oracles and suchlike. The main purpose would be to let someone get a quick management summary as it were and then go to the relevant article. Complexity though is a different matter. And I only included quantum computers because of people's misconceptions about their capabilities - they are much more interesting from the point of view of complexity theory. That aspect of reliability would be better under something like information theory I'd have thought. Dmcq (talk) 22:04, 14 August 2009 (UTC)
To clarify, I was speaking about including those subjects in this article (recursion theory). They should indeed be mentioned in a higher level article on computability. My preference would be to have that higher level overview article at theory of computation, which is commonly used in textbook titles on that subject, and a disambiguation(-like) page at computability itself. —Ruud 22:24, 14 August 2009 (UTC)
Concurrent computation would cover subjects like pi-calculus, petri nets, Byzantine generals' problem and unbounded nondeterminism. —Ruud 22:29, 14 August 2009 (UTC)
The current articles theory of computation and computability theory (computer science) should probably be merged into a single article. —Ruud 22:24, 14 August 2009 (UTC)
The idea was to have an introductory article rather than stick everything including the kitchen sink in. However assuming you are right that a significant percentage of people wanting the theory of computation would think of computability it sounds therefore that it would be preferable to include a short bit about complexity in the computability article and then just refer to the appropriate place for more. That would then mean there was the introductory article Computability, which then refers to Computability theory for the more mathematically oriented and Theory of computation for the computer science oriented and perhaps a number of other areas too for slightly more specialist subjects. Dmcq (talk) 22:50, 14 August 2009 (UTC)
I'm not sure if I fully understood you, so I may be repeating you or myself, but I think that "a significant percentage of people (especially computer scientists, not so sure about mathematical logicians) thinking of computability would want the theory of computation article", where "theory of computation" refers to just the collection {automata theory, computability theory, computational complexity theory, theories of concurrent computation, theory of quantum computation, ...}, making it the perfect place for an introductory article. It could also be placed at computability itself, but that would not be my preference. —Ruud 23:05, 14 August 2009 (UTC)
Computability really would have to either be a disambiguation page or an introductory page. I'm not sure where you get this feeling that people confuse computability with complexity but I'm pretty certain that anyone who makes such a mistake doesn't want anything very complicated. There's no intention of doing anything whatever to the Theory of computation article, it can develop however it should. All I was talking about is making certain people get directed to the right place from the introductory computability article if what you say is fairly common. I don't think people making mistakes is a good reason for putting the theory of computation into a computability article. Dmcq (talk) 00:09, 15 August 2009 (UTC)
I still don't think I fully understand you nor whether you fully understood me, sorry :( I do not believe I anywhere implied people confuse computability with complexity. (Just to be sure, do you use those terms as shorthand for "computability theory" and "computational complexity theory"?) —Ruud 00:41, 15 August 2009 (UTC)
Sorry I meant confuse computability with theory of computation. But most of what isn't in computability would be under complexity and I suppose a user has an idea in mind of what they are looking for. And yes I mean what would be under computability theory and computational complexity theory. Dmcq (talk) 01:06, 15 August 2009 (UTC)
That clarifies things. Assuming you're right, and I think you probably are, a user looking for "computability" is probably looking for that term as defined in computability theory. In that case a straight redirect from computability to computability theory might be preferable. Someone wanting a more general overview is probably going to look under the term "computation". —Ruud 20:02, 15 August 2009 (UTC)

So, what's the status of this? As far as I can tell there's consensus to proceed with Carl's proposal (with minor suggestions about content updates, but those can always be discussed later/separately). Pcap ping 19:37, 21 August 2009 (UTC)

I'm planning to do it Saturday. It never hurts to give people time to comment; some people only check their watchlist occasionally. — Carl (CBM · talk) 20:33, 21 August 2009 (UTC)


I implemented the moves and redirects from my post dated "11:37, 14 August 2009" above. I had to do some creative history preservation; see the top of Talk:Computability theory.

I think that the main flaw that this has created is that the article that is now computability is not very good at giving an overview of computability. So I think this is the best area to focus on first. There is also the issue of making computability theory a bit broader in scope, but several comments above say this is not such a serious issue. — Carl (CBM · talk) 15:23, 22 August 2009 (UTC)