# Talk:Halting problem/Archive 3

## Proving a function halts

While Turing's proof shows that there can be no general method or algorithm to determine whether algorithms halt, individual instances of that problem may very well be susceptible to attack. Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of research is known as automated termination analysis.

Don't 'heuristics that can be used in an automated fashion' count as a mechanical, general way to solve some instances of the halting problem? It seems to me that it's perfectly possible. —Preceding unsigned comment added by Neodymion (talkcontribs) 21:31, 9 September 2007 (UTC)

But a program can be written to generate all proofs (and thus all theorems) within a given axiom system, which implies that there are programs which can be proven neither to halt nor not to halt. If this were not the case, the proof generator would eventually find a proof of either "P halts" or "P does not halt", and we could use it to solve the halting problem. The article does not make this clear. BTW, do we have an article on this "proving problem" (or whatever it's called)? --Taejo|대조 10:36, 13 June 2007 (UTC)

I think the article you have in mind is the one on Gödel's incompleteness theorem. Yes, it might be worth briefly mentioning that here, although probably not in great detail. — Carl (CBM · talk) 13:32, 13 June 2007 (UTC)

-- I have a proof (one that can be presented in under an hour) that there exists a set of laws of physics in which, if a Turing-like machine were built (and in this case the laws of physics do not exclude the possibility of building one!), it could solve the halting problem for any given Turing machine and input combination. The target machine differs from an ordinary Turing machine in that it is consumed in the execution of the algorithm and so cannot be reused. --Joshua —The preceding unsigned comment was added by 12.117.131.10 (talkcontribs) 19:47, 2 July 2007 (UTC)

It's a lot of fun to come up with models like that. There are some models at hypercomputation; is yours similar to the ones listed there? There is no problem with the theory of Turing machines if some machine other than a Turing machine is able to decide whether Turing machines halt, although nobody has ever produced a working example of a machine that can do that... — Carl (CBM · talk) 21:35, 9 September 2007 (UTC)

## Humans and the Halting Problem

I find the discussion of humans and the halting problem kind of pointless. The proof that it can't be solved is already presented, so why even discuss if humans can solve it? —Preceding unsigned comment added by 131.107.0.73 (talk) 20:20, 24 October 2007 (UTC)

Well, the claims of Roger Penrose that humans can solve (some) problems that computers can't may make such a comment appropriate. The section does smell of WP:OR, though. — Arthur Rubin | (talk) 20:28, 24 October 2007 (UTC)
Specifically, the article Roger Penrose, and probably the book The Emperor's New Mind, states:
This is based on claims that human consciousness transcends formal logic systems because things such as the insolubility of the halting problem and Gödel's incompleteness theorem restrict an algorithmically based logic from traits such as mathematical insight. These claims were originally made by the philosopher John Lucas of Merton College, Oxford.
Arthur Rubin | (talk) 20:32, 24 October 2007 (UTC)

My computer halted quite unexpectedly about a week ago. Haven't been able to start it since. It's been a big problem, and I still haven't solved it. --Ramsey2006 08:32, 11 November 2007 (UTC)

## Formal statement redux

The "computer science language" section is no good. I spent a long time this morning merging some of the informal language into the other section. There is not actually any difference between the halting problem in recursion theory and the halting problem in computer science (there is actually very little difference between recursion theory and computer science in the first place). The "computer science" part of the formal definition just needs to be merged into the other section (as I attempted this morning). — Carl (CBM · talk) 01:54, 2 November 2007 (UTC)

Please note that the newer version does not delete the other version, but integrates it. There are not "two terminologies", one for computer science and one for recursion theory. The "proof" is no more or less vague than the proof that was previously in the "computer science" section. It is of course not completely precise, because I was trying to maintain the vague language of the computer science section in it. The main area of vagueness is with "subroutine" but I don't see how to avoid that without speaking of indices for computable functions, which I think were one of the things the "computer science" section was trying to avoid. — Carl (CBM · talk) 21:06, 5 November 2007 (UTC)

I reinserted the discussion that was reverted. The problem is that the quining discussion is different from the fixed-point discussion, and sidesteps many of the technical difficulties. If you do not understand the quining proof, please do not edit the quining proof.
The traditional method of Turing et al. is not to quine, but to put the source of the program on the tape. This avoids the discussion of quining, but it makes the proof harder to follow. To a reader unfamiliar with the fixed-point theorem, it also seems to prove only the superficially weaker theorem that the halting problem is undecidable for programs with input. In fact the halting problem is undecidable even for programs without any input. Likebox 00:19, 6 November 2007 (UTC)
I have twice attempted to combine the two sections in a manner that preserves the "computer science language". I would appreciate the ability to discuss it on the article talk page, rather than just seeing it reverted.
Regarding your latest edit, it seems you like the idea that your proof includes the word quine. But an actual quine just prints its source code to the output and then halts; it can't do anything else. What you are looking for is not a quine, it's a fixed point. — Carl (CBM · talk) 00:29, 6 November 2007 (UTC)
Why don't you merge the two sections, since you seem dissatisfied with my attempts? — Carl (CBM · talk) 00:33, 6 November 2007 (UTC)
On the contrary, I am looking for a quine not a fixed point. The reason I do not merge the sections is because the discussions are philosophically different and represent two points of view. There is no reason to infect one point of view with the diseases of the other and vice versa. There is no reason not to have two discussions, but there is a reason to defend the discussion which a vocal minority keeps saying is incomplete.Likebox 01:01, 6 November 2007 (UTC)
Please consider the first sentence of the article quine (computing). — Carl (CBM · talk) 00:55, 6 November 2007 (UTC)
Yes, yes. I know. The two concepts are exactly equivalent. You can use quines to prove the fixed point theorem, and you can use the fixed point theorem to prove quines. But the fact that they are equivalent mathematically does not mean that they are equally clear to a layperson. Any layperson can understand a quine and write one. It takes a mathematician to understand the fixed point theorem, even though their content is identical.Likebox 01:01, 6 November 2007 (UTC)
A quine prints its source code to the output and does nothing else. The program you want is not a quine, because it prints its source code to a variable and then does something with that source code. Viewing that variable as an input, the program you want is a fixed point. But it is not a quine.
The problem with having two definitions is it emphasizes a trivial difference as if it is significant. In areas such as this, there is no difference between recursion theory in computer science and recursion theory in mathematics. Both use terms like "fixed point". Both use the letter K to stand for the halting set. Having two parallel discussions makes it seems as if the two areas are significantly apart from each other. — Carl (CBM · talk) 01:05, 6 November 2007 (UTC)
I see. You are right. I didn't think about that. But the problem is that the proof of the fixed point theorem is difficult for a programmer who doesn't understand set-theoretic notation.Likebox 01:11, 6 November 2007 (UTC)

(unindent) look, although you are right that the discussion should be merged, the current proof is just plain wrong. It does not discuss the right function to get a contradiction. This is one of the annoying things I fixed in my rewrite. Sorry, I screwed up. The code "e" is the same as the code for g. This proof is right, but does not incorporate the fixed point theorem.Likebox 01:13, 6 November 2007 (UTC)

Yes I left that out as a sort of compromise. — Carl (CBM · talk) 01:17, 6 November 2007 (UTC)

What I can't understand in this whole bickering is why there needs to be a compromise at all? The proof I gave proves a (slightly) stronger theorem (it does not require an input) in a shorter text with a more intuitive argument that is more precise. I can't understand for the life of me why anyone would revert. I understand adding another proof closer to Turing's original proof, but I put that in too. I would like some neutral eyes to look over this dispute.Likebox 06:43, 6 November 2007 (UTC)

I would be very glad to have additional people comment. I have asked for comments from member of the math project. — Carl (CBM · talk) 12:50, 6 November 2007 (UTC)
What it seems to be about is your (Likebox) misuse of the term quine and other related concepts. The halting problem is more naturally related to computer notation than to logical notation, but the argument is not more precise, and, in fact, has a few flaws. — Arthur Rubin | (talk) 14:43, 6 November 2007 (UTC)
To the above: there are no flaws in the proof, which can be found here. I would love for you to try to list the "flaws", Herr Professor Dr. Arthur Rubin, Putnam Laureate extraordinaire.
You are absolutely right that I am "misusing" the word "quine". I have taken the liberty to call a program a "quine" even when it prints its code and does something else too afterwards. Horror of horrors. I think there was one reader in Antarctica who got confused on this point for eight seconds about two months ago.Likebox 19:43, 7 November 2007 (UTC)
That's just a standard diagonal argument with Goedel numbers. It has nothing to do with a quine or the concept of quine. — Arthur Rubin | (talk) 22:17, 7 November 2007 (UTC)
Do you even know what a quine is? Have you ever written one? What diagonal argument are you talking about? You are talking nonsense.Likebox 22:32, 7 November 2007 (UTC)
See the existing statement for a diagonal argument; it's basically using the code number of the program as input to the program. — Arthur Rubin | (talk) 22:34, 7 November 2007 (UTC)
This is exactly the same outdated trick to avoid quining which Turing uses. Please learn what a quine is to see why this method is obsolete, and read my proof which assumes you know how to quine.Likebox 22:48, 7 November 2007 (UTC)
The price you pay for using the input to do the quining is that you don't prove that the halting problem is undecidable for programs which have no input. This requires either "Kleene's fixed point theorem", or the equivalent modern concept of a "quine". Likebox 22:52, 7 November 2007 (UTC)
I'm finding your "quine" "code" incomprehensible. Kleene's fixed point theorem is not required for any of these arguments. I think you're using "quine" to describe the computer-science equivalent of a function applied to its name. — Arthur Rubin | (talk) 22:57, 7 November 2007 (UTC)

(deindent) I know you find it incomprehensible, but other people do not, so stop removing it.

I am using quine to describe a program that prints its own code. For example, this C code:

#include<stdio.h>
int main(){char *s="#include<stdio.h>%cint main(){char *s=%c%s%c;printf(s,10,34,s,34,10);return 0;}%c";printf(s,10,34,s,34,10);return 0;}

I am also using quine to mean a program which includes a subroutine that builds its own code. Like this:

#include<stdio.h>
#include<stdlib.h>
char *quine_me(void){
char *s="#include<stdio.h>%c#include<stdlib.h>%cchar *quine_me(void){%cchar *s=%c%s%c;%cchar *u=malloc(4096);sprintf(u,s,10,10,10,34,s,34,10,10,10,10);return u;%c}%cint main(){fputs(quine_me(),stdout);return 0;}%c";
char *u=malloc(4096);sprintf(u,s,10,10,10,34,s,34,10,10,10,10);return u;
}
int main(){fputs(quine_me(),stdout);return 0;}

(this version actually compiles; quine_me() hands back a freshly allocated string holding the program's full source, which main then prints)

So if you have the program PREDICT(char * R), which ostensibly returns 0 if the C program contained in the string R halts and returns 1 if the C program contained in the string R doesn't halt, you write the program SPITE, in C:

#include<stdio.h>
int PREDICT(char *R){ /* HYPOTHETICAL PREDICT CODE */ }
char *quine_me(void){ /* QUINING ROUTINE */ }
int main(){char *R=quine_me(); if(PREDICT(R)==0) while(1); else return 0;}

then run it. This is a contradiction, because SPITE calculates PREDICT on the output of quine_me, which is the same as calculating PREDICT on SPITE itself. No matter what PREDICT outputs, SPITE does the opposite.

This proof is obvious to any CS person. It is also a proof that you can't decide whether a program with no input halts.Likebox 00:45, 8 November 2007 (UTC)

Don't condescend. The code of the quine itself is quite incomprehensible to anyone without careful examination, like most quines are, and it adds needless complexity since it's not necessary to exhibit a quine to make the argument as long as the reader is willing to believe that one exists (just link the article). I also think the others are correct that the quine is just a sort of informal "practical programmer" way of presenting a theoretical fixed-point function - which has some value, but more in an exposition of the theoretical method than as a separate method. Dcoetzee 01:32, 8 November 2007 (UTC)
I'm sorry for seeming to condescend, but this is the second page where people have attacked quine-based proofs as "unrigorous", and I don't think that these people have ever seen a quine. I didn't put anything like this on the page itself; this is just an explanation for the mathematicians who don't program. This quine is not actually that incomprehensible, though, if you examine it closely. Likebox 03:52, 8 November 2007 (UTC)

--

This is incorrect:

The price you pay for using the input to do the quining is that you don't prove that the halting problem is undecidable for programs which have no input. This requires either "Kleene's fixed point theorem", or the equivalent modern concept of a "quine". Likebox 22:52, 7 November 2007 (UTC)

Once the halting problem for the case of programs with input has been proved undecidable, the case of programs with no input is treated easily with neither quines nor fixed-point theorem. The following is adapted from Minsky's old book (Computation, "The blank-tape halting problem", pp. 150-151) ... For each pair (P,x) of program P with input x, merely construct from (P,x) the no-input program Q_{P,x} ::= return P(x). If (A) the halting of any arbitrary no-input program were decidable, then (B) the halting of any arbitrary Q_{P,x} would be decidable, and hence (C) the halting of arbitrary (P,x) would be decidable. Since (C) is undecidable, so is (B), and therefore so is (A). The presentation is much cleaner and more self-contained without introducing quines, etc. --r.e.s. 07:55, 8 November 2007 (UTC)

R.e.s. is correct, I looked it up.
RE "quining": I'm sure all involved know Turing's original proof (The Undecidable, pp. 132-133), but it's worth repeating here because it shows why "quining" is yet one more notion no one needs in order to understand this article or G's first incompleteness theorem. BTW: Those of us who choose not to speak C should still be able to follow this argument (the argument itself is an algorithm that proceeds from one line to the next unless a "goto" occurs):
This proof asserts the existence of an algorithm named computation machine H (Hypothetical). H contains two machines D (Decider) and U (Universal -- it can run numbers that D deems are wffs). H starts with a blank tape.
• 1 H clears a "successful-number"-counter R that holds the tally of successful-numbers r; 0 => R.
• 2 H clears a counter N that will hold the current number-under-test n; 0 => N.
• 3 H increments the contents of N: [N]+1 => N.
• 4 H prints the number n on its tape and then turns control over to D for a determination as to whether n represents a "circular" or a "non-circular" algorithm; a circular algorithm ends locked ad infinitum in an inescapable loop (e.g. 1 GOTO 1; 2 Halt).
• 5 D first determines if n represents a "well formed" algorithm (a "wff") (these (Turing's) wffs are computation machines (algorithms) that print out decimal digits ad infinitum).
• 6 IF "n" is a wff THEN D waves its magic wand over n and determines if n is "circular" or not. Given the outcomes of the wff question and the circularity test, D prints either "u" (unsatisfactory, is circular) or "s" (satisfactory, is not circular) and passes control back to H.
• 7 H looks at the tape where determination "u" or "s" is. IF "u" THEN H goes to step 3, ELSE,
• 8 IF "s" THEN H increments r, i.e. [R]+1 => R.
• 9 H uses sub-machine U (Universal) to "run" n, (now known to be a non-circular wff).
• 10 U under the watchful eye of H "runs" n and together they produce a sequence of decimal digits out to the rth digit ( [R] = Σ(successful numbers) = r ).
• 11 H prints the rth digit on the tape.
• 12 H goes to step 3.
Thus H is not circular. Now, what happens when H comes to its own number K? D returns "s". H uses U to "run" its own number K out to r digits. This means that K-aka-H starts over with [N]=1, then tests 2, then tests 3, .... exactly what H did. As this constitutes a circle contrary both to hypothesis and to D's decision about K, the outcome is a contradiction. Q.E.D.
I'm pretty sure a similar argument can be used to design a proof of G's first incompleteness theorem, but I'm not quite there yet. Bill Wvbailey 17:51, 8 November 2007 (UTC)
The fact that you looked it up has no bearing on whether he is correct. People say stupid shit in the literature all the time. But he is nonetheless correct, as I noted because I thought about it for about 10 seconds.
What he is missing is that this is the proof of the Kleene fixed point theorem, and it requires an extra paragraph of exposition. The existence of a quine is equivalent to the fixed point theorem, and the fixed point theorem is equivalent to the "incorporation lemma" that a computer can simulate a code that is stored in a variable. The point is that the proof I gave automatically proves the no-input halting lemma without using any second paragraph, and in the same breath as it proves the with-input halting lemma. And it does so in about eight words. You can't improve on it, and since none of you know how to program, you should just leave it alone, and go muck around in your own sections where you pretend to understand things.Likebox 18:35, 8 November 2007 (UTC)
Please avoid personal comments about other editors ("can't program", "pretend to understand"). It appeared to me that Wvbailey was only saying he had verified R.e.s.'s claim that this is due to Minsky.
The proof given by R.e.s. above uses the fixed point theorem in no way, and doesn't prove it either. R.e.s.'s proof uses the s-m-n theorem to give a 1-reduction from the set { (x,i) | x halts on input i } to { y | y halts on no input}. The fixed point theorem (Kleene's recursion theorem) is not at all the same as the s-m-n theorem. They are equivalent only in the sense they are both true. — Carl (CBM · talk) 18:50, 8 November 2007 (UTC)
Yes, I was just fact-checking. Wvbailey 19:13, 8 November 2007 (UTC)
You're right--- I shouldn't make personal comments. The truth is, most of the people who modify these pages understand very well that all my discussions are correct, but they are deathly afraid that code will become a common method of proof in mathematics.Likebox 20:35, 8 November 2007 (UTC)
I was hoping that the proof I sketched above would appeal to you (Likebox) as a programmer, although it doesn't involve or prove anything about quines or fixed point theorems; rather, imo it does give some "programmer's insight" into the situation — something at which I think Minsky was unexcelled. In particular, for a given computational model, it focusses on what a programmer could do to construct the no-input program Q_{P,x} ::= return P(x); namely, Q_{P,x} can be seen as effectively just the "source code" for P, very slightly modified to use the constant x "hardwired" in place of variable input. (Minsky's discussion is about Turing machines, so there the modification is to get the no-input TM Q_{P,x} from a TM P that takes input x: the first few 5-tuples for Q_{P,x} simply write the constant x onto the blank tape and reproduce the initial conditions of P with x as input, then the remaining 5-tuples of Q_{P,x} are just a literal copy of the relevant 5-tuples of P.) Q_{P,x} does not output or reference itself, so no quine or fixed point theorem is involved. --r.e.s. 20:38, 8 November 2007 (UTC)
You're right--- your proof is fine. But it already appears in the recursion theorem page. I don't know why you think I'm a programmer. I'm not. I'm a physicist.Likebox 20:58, 8 November 2007 (UTC)
Of course it doesn't appear there — if you really believe otherwise, please cite the exact place on that page where you think this proof occurs. And, as already explained, this proof does not involve any fixed point theorem — if you really believe otherwise, please quote some specific portion of the proof that you think involves Kleene's f.p. theorem. --r.e.s. 21:41, 8 November 2007 (UTC)

What I was saying was that your method of using the input inside the code to run a simulation is exactly the same thing as the proof of the Kleene fixed point theorem. From that page (where it's called the second recursion theorem):

Let f be any computable function and let h be the computable function such that ${\displaystyle \phi _{h(x)}\simeq \phi _{\phi _{x}(x)}}$. That is, on input x the function h first computes ${\displaystyle \phi _{x}(x)}$ and, if that computation gives an output, uses that output as the index of a computable function to be simulated. Take e to be an index of the composition ${\displaystyle f\circ h}$. Then ${\displaystyle \phi _{\phi _{e}(e)}\simeq \phi _{h(e)}}$ by the definition of h.
But because e is an index of ${\displaystyle f\circ h}$, ${\displaystyle \phi _{e}(e)=(f\circ h)(e)}$, and thus ${\displaystyle \phi _{\phi _{e}(e)}\simeq \phi _{f(h(e))}}$. Thus ${\displaystyle \phi _{n}\simeq \phi _{f(n)}}$ for ${\displaystyle n=h(e)}$.

The construction of the code in this proof runs ${\displaystyle \phi _{\phi _{x}(x)}}$ which is the same exact thing you are doing by simulating X(Y) inside your function. It's not exactly the same, because you simulate X(Y), and Kleene is simulating X(X). But, give me a break. All these proofs are so trivial that it's hard to say whether A is independent of B in any objective way. Maybe there is some subtle difference between your presentation and Kleene's, but it just doesn't seem any different to me. I have no objection to this proof, but simulating a code with input inside another code is the exact method by which you prove the recursion theorem.

My only real complaints on this page are about the obfuscation inherent in avoiding writing code in C or FORTRAN or pseudocode. If you write code using recursive functions, you make the proofs impossible to follow, even for mathematicians.Likebox 23:43, 8 November 2007 (UTC)

The proof I posted certainly does not appear in what you quoted from the other page, and what you call "the same exact thing" is not that at all. The proof I posted does not rely on Kleene's fixed-point theorem, it involves no functions like ${\displaystyle \phi _{x}(x)}$ that take their own index as argument, and there is no "quine" function such as ${\displaystyle \phi _{p}(.)=p}$. However, although this proof does not appear on that page, a piece of it might be extracted from the Corollary stated there: the construction of my "Q_{P,x} ::= return P(x)" can be obtained by taking their "Q(x,y)" to be ${\displaystyle \phi _{P}(x)}$, leading to ${\displaystyle \phi _{x}(.)=\phi _{P}(x)}$. Since this ${\displaystyle \phi _{x}(.)}$ just ignores its argument, it can be identified with my Q_{P,x}. I think this is a roundabout way of getting to the s-m-n theorem, which, as CBM mentioned, is really what's involved in the general case of this proof. --r.e.s. 02:41, 9 November 2007 (UTC)
Ok Ok. So I screwed up. Sorry.Likebox 05:54, 9 November 2007 (UTC)
I want to also point out that the reason I put the new proofs is because I get so confused about these s-m-n theorems and fixed point theorems, because they mix up the code and the index in ways that are completely non-obvious to me. That's why I don't like them--- I have a hard time keeping them in my head. I always have to translate to C and then translate back, and it's time-consuming, and it's frustrating. And I can prove all the theorems that they are used for just by using something obvious like quining instead, so I think all this mental anguish is unnecessary. Likebox 06:09, 9 November 2007 (UTC)

## What Is A Rigorous Proof?

You people keep saying that code, like the quine I wrote above, is not "rigorous". But if you look inside any of the recursion theorem proofs, or the s-m-n proofs, you will see that the algorithms in the code are not written in any precise way at all. They are just vaguely described in English.

So I assert that all this recursion theory exposition is imprecise, and does not constitute a proof. Your books are giving vague, handwaving arguments. The proof that I give is the only fully rigorous proof, because it's the only one where you see the exact code explicitly. Likebox 18:34, 9 November 2007 (UTC)

It was common to completely describe the code for functions that are claimed computable for a long time; typically this was done by demonstrating the recursion equations that define the function. When Rogers' book in recursion theory came out in the 1960s, it employed the revolutionary technique of describing functions by explaining how those functions could be constructed instead of actually giving the recursion equations that define them. This idea, somewhat radical at the time, was received enthusiastically, and became the standard method for proving the existence of various computable functions. — Carl (CBM · talk) 19:08, 9 November 2007 (UTC)
The proof currently in the article uses only a few facts about computable functions: they are closed under composition, there is a function that halts on input '0' and not on input '1', and each computable function has an index. — Carl (CBM · talk) 19:18, 9 November 2007 (UTC)
The fact that mathematicians were enthusiastic to make their proofs less precise does not change the fact that they were made less precise. It just shows that they are paying a heavy price for rejecting programming languages.Likebox 20:03, 12 November 2007 (UTC)

## The Halting Problem as a Set

The article read "this computable function simulates all programs on all inputs in parallel, in a manner similar to a multithreaded computer program, and notices whenever one of the programs being simulated halts." This is not accurate as you cannot start all programs at the same time. You have to do diagonalization where the countable set of programs (or program input pairs) is listed on one axis and the possibly countably long runs on the other. I added a somewhat awkward sentence to this effect. —Preceding unsigned comment added by 129.27.202.101 (talk) 10:46, 26 November 2007 (UTC)

I've removed it, (although I apologize for marking it minor), as any implementation of parallel/multithreaded processing on a single processor machine is non-simultaneous. Some diagonalization needs to be done to ensure that all steps of all processes are considered, but "simultaneous" is not the problem. — Arthur Rubin | (talk) 15:05, 26 November 2007 (UTC)

## Proof is invalid.

I added a section that explains why this proof is invalid. Not sure why this page exists at all since it's so obviously wrong. 1936? Wow! It's been wrong for a long time, eh? - Vorlath Nov 29, 2007 8PM

From the article:

Unfortunately, this proof is actually invalid for the following reasons. i is defined as calling f. This means that if f cannot exist, then neither can i because it would have nothing to call and the program cannot execute and its definition invalid. What the proof actually says is that f cannot exist for programs that do not exist. That's a rather meaningless conclusion for a proof. In fact, this proof cannot succeed under any circumstance. In order to succeed, the contradiction must exist for i and only i. But the program in question f must remain valid at all times and this proof does not allow that. —Preceding unsigned comment added by 99.251.246.204 (talkcontribs) 2007-11-29T23:54:33

The proof is valid. The outline is: if f exists, then g exists. Moreover, the existence of g is impossible, because it leads to a contradiction. Therefore, f does not exist. — Carl (CBM · talk) 00:06, 30 November 2007 (UTC)
The proof is invalid because g need not exist AS DEFINED if f does not exist. It does not mean there is no g. It just means there's no g that takes into account a non-existent f. Hence no contradiction.
As stated in the opening paragraph of the section, the proof is by reductio ad absurdum, a common technique in mathematics. One assumes the existence of f. That assumption is in effect until a contradiction is reached, at which time we realize that our assumption was false all along. --Ramsey2006 12:52, 1 December 2007 (UTC)

Evidently, many readers have trouble understanding the reductio ad absurdum form of proof, so I'm wondering if it would be worth having a constructive proof (for TMs, say) in the article. (?) --r.e.s. 23:59, 1 December 2007 (UTC)

I'm not sure what you mean by a direct proof - even if you use a concrete model of computation, via some fixed point construction, you would still start by assuming that there is a procedure that does solve the halting problem. — Carl (CBM · talk) 00:27, 2 December 2007 (UTC)
I'm referring to constructive proofs — proofs that do not use reductio ad absurdum; i.e., they do not start by hypothesising the existence of a procedure that solves the halting problem.--r.e.s. 00:56, 2 December 2007 (UTC)
If the proposition is that an algorithm doesn't exist, it should be pretty obvious that one can't construct it, so there can't be any constructive proof. Any proof of a non-existence statement has to be a reductio ad absurdum proof. --Lumidek (talk) 21:05, 3 November 2008 (UTC)
Um, this is a constructive proof, as the term is ordinarily understood. Now, there's no agreed precise definition of "constructive proof", so there's a tiny window for r.e.s's definition I suppose, but it's certainly not the ordinary one. In constructive or intuitionistic mathematics, reductio ad absurdum is generally not considered a valid method of proving positive existential statements, but it is a valid method (in some sense, the only method) of proving negative statements. --Trovatore (talk) 21:12, 3 November 2008 (UTC)
More pointedly: in intuitionistic systems, the formula ${\displaystyle \lnot \phi }$ is often an abbreviation for ${\displaystyle \phi \to \bot }$; thus to prove ${\displaystyle \lnot \phi }$ you assume φ and prove a contradiction. Also, many intutionistic systems include the deduction scheme ${\displaystyle \bot \to \psi }$ for every formula ${\displaystyle \psi }$, so one has to be careful when saying that intuitionistic systems don't include proof by contradiction.
Also, the proof in the article at the moment is not a proof by reductio ad absurdum. It does not assume that any computable function does correctly decide the halting problem. Rather, it shows that for every computable function there is at least one value of e for which f(e,e) differs from h(e,e). Moreover, because the construction is explicit, it is possible to uniformly compute e from an index of f. The proof that r.e.s. links to below is actually the same proof, I believe, just written in a completely cryptic manner. — Carl (CBM · talk) 22:19, 3 November 2008 (UTC)

I'd like to see one (if there is one; I've never seen one). Turing framed his original as reductio, although it is quite constructive in all but the assertion at the beginning: He asserts a circle-tester machine that tests all counting numbers, ad infinitum. His machine starts at 1 and tests each number: looks to see if the number is a well-formed number (one that can compute an infinite sequence of decimal digits), tests the well-formed number for "circular" or "not circular", keeps a tally "r" of the successful numbers, uses its universal sub-machine U to run each non-circular number as it is found, and prints out its rth decimal digit. But it finally chokes when it comes to its own number: it comes to a decision ("not circular") and increments the tally counter r --> r+1, but because it has to "run" its own number out to the (r+1)th digit, it never progresses beyond an infinite regress (i.e. circle) of reproducing itself to start from 1 and test each number in sequence, etc. It never gets to print the (r+1)th digit, and thus it is locked in a circle, contrary to the assumption.
I think this is a great proof. One could frame it as: "We built this really cool gizmo to look for "loops" in programs and now look: it fails by falling into a loop! Moreover, no matter how it is built and no matter who builds it (even the almighty, even Bill Gates himself), it will always fail, now and forevermore until the end of time...." What makes any reductio proof suspect is: We the reader must believe that the proof works for every conceivable asserted instance (i.e. every circle-testing machine, no matter how cleverly designed by any number of talented designers (man, machine or metaphysical being)). This presumption is definitely a stretch, and offends the intuitionists among us. Bill Wvbailey 01:00, 2 December 2007 (UTC)

For the type of constructive proof I have in mind, see, for example, this Aug 2004 posting to sci.logic by peter_douglass: http://groups.google.com/group/sci.logic/msg/724773f01a731a74 The cited proof could be shortened for the present article, and (although it would run counter to the author's purpose of having a more-abstract proof) it could also be specialised to TMs. --r.e.s. 01:22, 2 December 2007 (UTC)

## Constructive proofs of the halting theorem

Here's an attempt at a shortened version (for TMs) of the proof posted by peter_douglass cited above:

Let T1, T2, T3, ... be an enumeration of all Turing machines (TMs), so we can refer to any Tm simply by its index m; e.g., m(...) denotes Tm started with (...) on its tape, where (...) is an encoding of a tuple of natural numbers. WLOG, we assume that every Turing machine has 0 and 1 among the symbols of its tape alphabet.
Halting Theorem:
No TM computes the total function h: N x N -> N,
h(m,n) ::= if m(n) eventually halts then 1 else 0.
Proof:
The following TMs have straightforward constructions (omitted):
(1) Dup, which, started with (n) on its tape, halts with (n, n) on the tape.
(2) Write_1, which simply writes 1 in the current cell and halts.
(3) Loop_forever, which never halts.
(4) Composition, (m1 ; m2): given any two TMs m1, m2, (m1; m2) denotes the TM composed of m1 and m2, in which every halting state in m1 is replaced by the initial state of m2. Thus (m1; m2) first runs m1, and transitions to the running of m2 when and if m1 would have halted.
(5) Conditional, (m1 | m2): given any two TMs m1, m2, (m1 | m2) denotes the TM that just runs m1 if the current cell contains 0, and otherwise just runs m2.
Given any TM, say m, the following TM is readily constructed using (1)-(5):
m' = (Dup; m ; (Write_1 | Loop_forever)).
Now note that, for all x in N, if m'(x) does not halt, then either
(a) (Dup; m)(x) does not halt, OR
(b) (Dup; m)(x) halts with a value other than 0 in the current cell.
Therefore, for every m, exactly one of the following cases must hold:
(1) m'(m') halts; or
(2) m'(m') does not halt (two cases):
(2a) (Dup; m)(m') does not halt;
(2b) (Dup; m)(m') halts with a value other than 0 in the current cell.
In case (1), m(m',m') halts with 0 in the current cell, which is not the value of h(m',m').
In case (2a), m(m',m') does not halt, and so does not compute the value of h(m',m').
In case (2b), m(m',m') halts with a value other than 0 in the current cell, which is not the value of h(m',m').
Thus, for every m, there is a value of h that Tm does not compute; consequently, no TM computes h. Q.E.D.
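For readers who find the machine notation heavy going, the same construction can be sketched in ordinary code. In the sketch below (illustrative only; Python functions stand in for Turing machines, a return value stands in for the symbol left in the current cell, and all names are mine, not standard), any candidate machine m is shown to be wrong about m' applied to itself:

```python
# Sketch of the diagonal construction m' = (Dup; m; (Write_1 | Loop_forever)).
# Python functions stand in for Turing machines; a function's return value
# stands in for the symbol left in the current cell. Names are illustrative.

def make_m_prime(m):
    """Build m' from an arbitrary two-argument machine m."""
    def m_prime(x):
        result = m(x, x)      # Dup duplicates the input; then m runs on (x, x)
        if result == 0:
            return 1          # Write_1 branch: halt with 1 in the cell
        while True:           # Loop_forever branch: never halt
            pass
    return m_prime

# Example: a candidate "halting decider" that always answers 0 ("does not
# halt"). Then m' halts on every input, so the candidate is wrong about
# m' run on its own description.
def always_zero(j, k):
    return 0

m_prime = make_m_prime(always_zero)
print(m_prime(42))            # halts with 1, contradicting always_zero's answer
```

The demo machine is deliberately trivial; the proof's point is that the same mismatch arises no matter which total m is plugged in.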

It's still a bit lengthy — maybe someone has suggestions for further shortening it? --r.e.s. 01:38, 2 December 2007 (UTC) EDIT: Corrected a mismatch in the reading & writing of 0/1 (I had their boolean roles reversed). --r.e.s. (talk) 21:48, 5 December 2007 (UTC)

If all you want is "Thus, for every m, there is a value of h that Tm does not compute; consequently, no TM computes h.", you could easily change the proof in the article to achieve that. Instead of claiming f is supposed to compute the halting function, just let f be an arbitrary program, and the proof in the article shows that either f does not halt on the index of the function g, or else f does halt but gives the wrong value for the halting function for g. The proof itself is not made more or less constructive by whether f is taken to be an arbitrary machine (which is shown not to compute the halting function) or a machine which computes the halting function (which leads to a contradiction). — Carl (CBM · talk) 02:04, 2 December 2007 (UTC)
Since the change is so simple, I went ahead and implemented it in the article. I don't believe it's an overall benefit to use the algebraic Turing machine notation above. — Carl (CBM · talk) 02:15, 2 December 2007 (UTC)
A proof without reductio ad absurdum is a suggestion offered for consideration, but it's not necessarily "what I want". You seem to think a proof that relies on reductio ad absurdum is just as constructive as one that does not. In any case, if twiddling the proof already in the article accomplishes the same goal, I don't see any problem with that, if you prefer that route. --r.e.s. 02:26, 2 December 2007 (UTC)
Sorry about the "you" - it should be "one", since it was meant to refer to a generic person. In general, proofs that proceed by contradiction can be less constructive than those that do not, but they don't have to be. Any direct proof can be trivially converted to a proof by contradiction by first assuming the negation of what is to be proved, then proving it, and then pointing out the contradiction. I wouldn't say the resulting proof is less constructive.— Carl (CBM · talk) 03:48, 2 December 2007 (UTC)
The point of my suggestion was to consider whether a proof without reductio ad absurdum (whatever we call that kind of proof) might be less confusing to many readers. I don't think the tinkering that has so far been done in the article is quite satisfactory, and some loose ends need to be fixed, if this tinkering is the preferred approach. (Some of the loose ends seem to be now-out-of-place remnants of the former r.a.a. proof.) For example, it should be made clear throughout that the "arbitrary computable function f" is arbitrary only among those that are total and also have range {0,1}. But the proof-sketch also shows a general confusion between computable functions, the programs that compute such functions, and the indices of either of these. When I made a little attempt at rewriting it the way I would prefer, I found myself producing pretty much a non-r.a.a. form of what I regard as the less-muddled proof-sketch that used to be in the article (a year ago?) — the style that leans more towards expressing as much as possible in terms of programs rather than the equivalent results in terms of computable functions. (Comment revised)--r.e.s. 07:59, 2 December 2007 (UTC)
I did copyedit it a little to further emphasize that f is total, and to clarify what to do if f returns something other than 0 or 1. I don't see the confusion between functions and programs. The proof constructs a computable function g informally, then uses the fact that the programming language was assumed to be Turing-complete to conclude there is an index for that function. After that, the proof never mentions the function g again. This is perfectly in line with contemporary methods of proof in recursion theory. It would be clearer to people familiar with the area to excise the word 'program' entirely, but apparently it's clearer to newcomers if they can reinterpret the proof in terms of computer programs, while people with more background interpret it in terms of indices for computable functions. — Carl (CBM · talk) 12:50, 2 December 2007 (UTC)
There's a category error involved — phrases like "returns a value", "uses (or is used as) a subprogram", "halts", etc., apply to a computer program (an object having syntax & semantics, etc.), but they do not apply to a computable function (a kind of mathematical relation). Although there's not much confusion introduced by the convention of referring to a program by its program-number (or doing the same with a computable function and its index), IMO it definitely is a mistake to confuse computable functions with programs or "function procedures". --r.e.s. 15:04, 2 December 2007 (UTC)
I completely agree that it would be technically more precise to excise the word 'program' and work solely in terms of computable functions and their indices, as would be done in the literature. But I think the terminology in terms of programs is more suggestive, at the cost of being imprecise in a way that is unlikely to cause genuine confusion. I don't think the proofs that work directly with a particular model of computation, such as Turing machines, are better - they obscure what's going on by focusing excessively on the details of the model of computation. — Carl (CBM · talk) 15:23, 2 December 2007 (UTC)
I think that some confusions are less harmful than others. Confusing the distinction between indices and programs is totally harmless, but confusing programs with computable functions is not. Computable functions correspond to equivalence classes of indices, and what could make this particular confusion harmful is the fact that the equivalence relation is itself nonrecursive. Programs and indices are defined intensionally, but computable functions are defined extensionally. Indeed, the whole point of the uncomputability of the Halting Problem is that extensional behavior cannot be predicted by an inspection of the program or index. --Ramsey2006 15:48, 2 December 2007 (UTC)
I'm skeptical of Carl's assertion that programs "obscure what's going on" and that technical precision would necessarily be gained by excising them from the proof. In fact, programs — or their equivalents in some particular mathematical model of computation, e.g. if not TMs then some other Turing-complete model such as the mu-recursive functions — *cannot* be excised from the proof except by disguising their fundamental role through the use of terms like "computable function". It's fine to state the halting theorem as "There is no computable function such that ...", but the meaning of that statement fundamentally reduces to something equivalent to "In every model of computation in a certain class, there is no input-to-output function such that ...". IOW, although the computable functions may have a sort of "platonic existence" as "pure" mathematical relations, their very definition is an *operational* one involving abstract finitary processes. A prime example is Rogers' acceptable enumerations of the mu-recursive functions (which really seems to be what Carl is wanting to take as starting point). But the mu-recursive functions are just one of the equivalent Turing-complete computational models. (Last week I edited the acceptable programming system page to be in terms of Turing-computable functions, rather than just mu-recursive functions, partly for these reasons.) There's an excellent article by R. I. Soare, "Computability and Recursion", that elucidates much of this. --r.e.s. 18:24, 2 December 2007 (UTC)
(Continued below) — Carl (CBM · talk) 19:13, 2 December 2007 (UTC)
Regarding Wvbailey's edit summary, "fix a ..." is common math jargon that means stick one with a pin so it won't go anywhere for the rest of the proof. — Carl (CBM · talk) 03:48, 2 December 2007 (UTC)
(edit conflict) I agree. It is clear that the index of g is computable from the index of f (and this fact may be worth mentioning), so that the failure of any Turing machine to compute the halting problem is about as constructive as one can imagine. --Ramsey2006 02:19, 2 December 2007 (UTC)

I think Carl's edit is on the right track. But as a non-mathematician I am eminently qualified to fail to understand it at first and second readings (it was too terse and ambiguous in the first part). So I expanded it where I had troubles (in failure there is often opportunity, so if I failed to get the following right, there will be opportunities... :-)

The proof shows there is no total computable function that decides whether an arbitrary program (represented by its code number j) halts on arbitrary input number k, by returning 1 if the program does halt and 0 otherwise. Here program refers to a program in a fixed Turing-complete model of computation. Turing completeness means that every computable function is calculated by some program in the language. In the following, f and g are Turing-machine programs, and f will be a subroutine of g.

Fix [begin with] an arbitrary Turing-computable function f of two variables j and k, i.e. f(j,k), where j and k are to be filled in with positive integers. The key idea is that because f is computable, it can, given any j and k, be used as a subroutine for another program g; by the definition of "total function", f always returns a number to g.

Supervisory function g creates number j and tests it to see if it is a "well-formed" code number of a legitimate Turing machine. If so, g creates parameter k (starting from 1) for decider-function f to test together with code-number j; in turn g receives the decision from f. (g uses clever zig-zag tricks through the (j,k) matrix of numbers so that, for a given code number j, g need not run parameters k out to infinity.)

The number j represents the code number (the "index") of the jth Turing machine in an enumeration of all possible Turing machines. Supervisor program g begins the enumeration of j at "1", then goes to "2", then continues to create j ad infinitum with the zig-zag trick alluded to above. g has no trouble doing this incrementing and simple housekeeping, and it can decide, in all cases, whether or not the code-number j under test is well-formed enough to proceed with the test, i.e. whether or not j represents a Turing machine program. (Proof of this is not provided here). If not, g goes on to create the next number j etc.

In the remainder of this proof, f will be a "halting function" (more precisely, one instance of a possible infinitude of them). That is, f decides whether program j, given input k, will halt. f returns 1 if program j halts on input k, otherwise it returns 0.

In the particular cases when j = k = i, supervisory function g takes the output from f(j,k) = f(i,i) and does either of the following:

• (2a) If f returns 1, g immediately goes into an infinite loop.
• (2b) If f returns 0, g immediately halts.

Thus, regardless of what function f is, the behavior of function g is well defined.

Now g is a computable function, so it too has some Turing machine code-number e. The next step is to show that function f will not give the correct answer about whether program e (corresponding to function g) halts when run with input e, i.e. f(e,e) will produce a wrong answer. What happens when e is run with input e? First, it will ask f whether program e is supposed to halt on input e. If f says not to halt, the program does halt. If f says to halt, the program does not halt. In either case, f was not correct about whether program e halts when run with input e. Since f was an arbitrary total computable function, the conclusion is that no total computable function will give the correct answer for every program.
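The step from f to g in this sketch is easy to mock up in code. Below is a toy Python illustration (my own naming, not the article's formal construction; the self-application step is only narrated in comments, since actual code numbers e are elided here):

```python
# Toy illustration: build g from an arbitrary total f(j, k) -> {0, 1}.

def make_g(f):
    def g(i):
        if f(i, i) == 1:      # f claims "program i halts on input i"...
            while True:       # ...so g deliberately loops forever,
                pass
        return 0              # otherwise (f said 0), g immediately halts.
    return g

# If e were the code number of g itself, f(e, e) could not be correct:
#   f(e, e) == 1  ->  g(e) loops forever, so program e does NOT halt on e;
#   f(e, e) == 0  ->  g(e) halts,         so program e DOES halt on e.
# Either way f is wrong about (e, e).

# Demo with a (trivially wrong) f that always answers 0:
g = make_g(lambda j, k: 0)
print(g(7))                   # halts and prints 0
```

The demo only shows that g is well defined for any total f; the contradiction in the comments is the part that needs real program numbering to formalize.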

I hope the above is correct. Bill Wvbailey 04:37, 2 December 2007 (UTC)

g does not create code numbers or do any sort of zig-zagging. It takes a number i as input, computes f(i,i), and acts accordingly. The goal of the slight changes today is that we no longer assume f decides the halting problem. IT's just some specific, fixed function, from which g is defined. — Carl (CBM · talk) 04:50, 2 December 2007 (UTC)

I've taken a crack at your emended version. Now the only thing that f can return that is acceptable is 0.

The proof shows there is no total computable function that decides whether an arbitrary program [represented by code-number ??] i halts on input x, by returning 1 if the program [i] does halt and 0 otherwise. Here program refers to a program [code number ?? Is some of this in the instruction table and some on tape to be run by a U?] in a fixed Turing-complete model of computation. Turing completeness means that every computable function is calculated by some program in the language.

Fix [start with] an arbitrary computable function f [Where does f come from? where is it located? Is it on the tape as code to be run by a universal machine in g? Or is it in the instruction-table?]. The key idea is that because f is computable, it can be used as a subroutine for [or is it a coroutine of ??] another program. There is thus a computable function g taking one input [i] that performs the following actions: given input i, g first computes f(i,i) [It seems that g passes i to f, i.e. puts i in a place where f can find it, and then g "calls" f; f in return passes back {0, NOT_0} in a place where g can find it. It would seem that g either runs f with its on-board universal U, or f is in the instruction table as a coroutine more than a subroutine, i.e. f takes control until it returns control back to g]. If f were a function that computes the halting problem, this would tell whether program i is supposed to halt on input i. If f returns 0, g immediately halts. If f does not return 0, g immediately goes into an infinite loop. Regardless of what total function f is, the function g is well defined. [This is much better]

Now g is a computable function, so it is assigned some program number e [thus e contains the code numbers of g and f]. The next step is to show that function f will not give the correct answer about whether program e (corresponding to function g) halts when run with input e [this definitely demands that somewhere there's a U that runs e]. What happens when e is run with input e? First, it will ask f whether program e is supposed to halt on input e. [But why does e necessarily represent the number of a halting program? g just got 0 from f, and this is not necessarily a halting indication.] If f says not to halt, the program does halt. If f says to halt, the program does not halt. In either case, f was not correct about whether program e halts when run with input e. Since f was an arbitrary total computable function, the conclusion is that no total computable function will give the correct answer for every program.

Clearly I'm still confused. It's almost as if the proof sketch is too terse. While I may be bringing too much baggage to the proof, I guarantee these are questions that others will ask, because they'll be bringing their baggage too. For example: Somewhere a U (universal machine) exists that is running i with input i, and e with input e. Anyway at least now there are only two alternatives { 0, NOT_0 }, although 0 does not necessarily mean that f is a halting function. In fact f could ignore i altogether and be as simple as: "PRINT 0 in the place where the return parameter should go; return to g". Bill Wvbailey 18:24, 2 December 2007 (UTC)

Is the proof by Soare below more clear to you? Perhaps we could replace the proof in the article with a slightly expanded version of that. — Carl (CBM · talk) 19:16, 2 December 2007 (UTC)

No. Sorry. Too terse. Too abstract. Is K a set? What would it mean for a set to be "recursive"? Is this in any way similar to the proof that appears in Kleene 1952:272-273 where he demonstrates that if we enumerate all the number-theoretic functions of one variable, i.e. φ0(a), φ1(a), φ2(a),.... then φa(a)+1 is not in the enumeration and therefore not primitive recursive, and by extension neither is φa(a,a)+1, etc.? Because if it is the same, this one I understand because I can see it in terms of diagonalization. How that relates to a Turing machine halting is not obvious. But it is nifty because it is so easy to see. Kleene's example goes on to state that the predicate "1∸φa(a,a) takes only 0 and 1 as values, and cannot occur in the above enumeration, so it is the representing function of a predicate not primitive recursive. (Skolem 1944)". The table below used the predicate "=IF(1-diag>0,1,0)" to create the 1's and 0's (the matrix was gotten from "=INT(5*rand())"):

In general, f13(13) = g(13) should evaluate to the same number, as they are identically the same function. The only conclusion one can arrive at is that this evaluation does not exist. (The numbers in the look-up table and the evaluations of the functions are fictitious examples meant only for illustration. One can expect (most of) the actual numbers to be enormous. Wvbailey (talk) 23:15, 5 December 2007 (UTC)
U is a universal Turing machine that can "run" well-formed code-number #f with input number x. The resulting f =def U(#f, #f) yields the same output as f(#f). Function g has a U in it. When presented with an input number #f, g puts this on the tape as both code-number #f and the parameter x=#f. Then U runs code-number #f with parameter #f. This sub-machine f =def U(#f, #f) may produce an output 0 or not, i.e. ~0. If f (i.e. its code-number #f) represents a well-formed and total function it will eventually produce an output { 0, ~0 }. This output goes to a characteristic function (i.e. a predicate) that produces { 0, 1 } by use of "proper subtraction", i.e. (1 ∸ { 0, ~0 }). Proper subtraction sends any ~0 output to 0, and inverts an output of 0 to 1. Thus, from f(#f) = { 0, ~0 }, the characteristic function produces { 1, 0 }, respectively. Because g has changed every diagonal value (#f, #f), g's output, i.e. the function g = g(f(#f)), cannot be contained in any enumeration (i.e. row of values of f(x) for all numbers x). So when the numeral for g, i.e. #g, is input as if it were just another choice for f, i.e. f = g, the output of this function g(U(#g,#g)) cannot be computed -- neither 1 nor 0 is possible, the { 1, 0 } decision is impossible. g is said to have "diagonalized out" of the enumeration (Enderton 2001:186). Wvbailey 20:51, 4 December 2007 (UTC)

|        | i=1 | i=2 | i=3 | i=4 | i=5 | i=6 |
|--------|-----|-----|-----|-----|-----|-----|
| j=1    | 3   | 4   | 1   | 3   | 2   | 1   |
| j=2    | 2   | 2   | 1   | 3   | 0   | 1   |
| j=3    | 0   | 0   | 0   | 2   | 1   | 1   |
| j=4    | 1   | 2   | 1   | 2   | 1   | 3   |
| j=5    | 4   | 1   | 1   | 1   | 0   | 4   |
| j=6    | 2   | 1   | 1   | 0   | 0   | 0   |
| diag   | 3   | 2   | 0   | 2   | 0   | 0   |
| 1∸diag | 0   | 0   | 1   | 0   | 1   | 1   |
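The last two rows of the table can be checked mechanically; here is a small script (using the fictitious matrix values above) that recomputes the diagonal and its proper subtraction:

```python
# The 6x6 look-up table above: row j, column i (0-indexed here).
table = [
    [3, 4, 1, 3, 2, 1],
    [2, 2, 1, 3, 0, 1],
    [0, 0, 0, 2, 1, 1],
    [1, 2, 1, 2, 1, 3],
    [4, 1, 1, 1, 0, 4],
    [2, 1, 1, 0, 0, 0],
]

diag = [table[n][n] for n in range(6)]      # values on the (j, i) diagonal
monus = [max(1 - d, 0) for d in diag]       # proper subtraction: 1 ∸ diag

print(diag)   # [3, 2, 0, 2, 0, 0]
print(monus)  # [0, 0, 1, 0, 1, 1]
```

By construction the `monus` row differs from row n at column n for every n, which is exactly the "diagonalizing out" being illustrated.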

The link between "halting" and such a diagonalization table would have to be made clear. I guess I don't understand why avoiding enumeration and diagonalization would be a preferred approach. Us visual-types can see this (rather) clearly, especially with an example to look at. Bill Wvbailey 22:55, 2 December 2007 (UTC)

The proof by Soare is very similar to that. Suppose that ψn is a listing of all the partial computable functions. Define a function g as follows: on input e, g returns ψe(e) + 1, if this is defined, and g(e) is undefined if ψe(e) is undefined. This function g is a partial computable function, and its existence is not contradictory, because of the following argument: suppose that e is an index for g, that is, the function g is the function ψe. Then on input e, if g(e) exists it equals ψe(e) + 1, which is g(e) + 1, which is impossible. Thus we can conclude that g(e) is undefined, but that isn't a contradiction. It just means there are some blank spaces on the diagonal, corresponding to function/input pairs that don't halt.
Suppose that we had some way to decide whether ψe(e) is defined, as a function of e. Then we could redefine g as follows. If ψe(e) is defined, g(e) is ψe(e)+ 1, and otherwise g(e) is 0. Now g is guaranteed to be defined for every e, and thus taking e2 to be the index of the new g, ψe2(e2) + 1 is defined, and so we reach a contradiction. The erroneous assumption is that there is a function that correctly determines whether ψe(e) is defined as a function of e. The fact that no such function exists is exactly the unsolvability of the halting problem. — Carl (CBM · talk) 02:10, 3 December 2007 (UTC)
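The total-function half of this argument is easy to demonstrate concretely. Below is a finite stand-in for the enumeration ψ (obviously just the first few functions, and purely illustrative): the diagonal function g(e) = ψe(e) + 1 provably differs from every listed function at its own index.

```python
# A finite stand-in for an enumeration psi_0, psi_1, ... of TOTAL functions.
psis = [
    lambda n: 0,
    lambda n: n,
    lambda n: n * n,
    lambda n: n + 7,
]

def g(e):
    """Diagonal function: g(e) = psi_e(e) + 1, so g differs from psi_e at e."""
    return psis[e](e) + 1

# g disagrees with every listed function on the diagonal, so g cannot
# appear anywhere in the list: it has "diagonalized out".
for e in range(len(psis)):
    assert g(e) != psis[e](e)
print("g differs from each psi_e at its own index")
```

With genuine partial functions this construction is blocked exactly where ψe(e) is undefined, which is why deciding definedness (i.e. halting) would restore the contradiction.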

Question: Kleene shows an example of reductio ad absurdum requiring the "discharge of double negation ~~A => A" as intuitionistically unacceptable (p. 99). Is reductio always intuitionistically unacceptable? Is there a way to frame the proof so we assume P and then derive both Q and ~Q i.e. a contradiction by following two paths through the proof? E.g. I did that sort of thing in geometry (way back when Kennedy was president), on two sides of a vertical line... work down from premises on both sides of the page until (you hope) you come to an equivalence. Here we want a contradiction. Bill Wvbailey 23:12, 2 December 2007 (UTC)

Things that are equivalent classically are not always equivalent intuitionistically. In intuitionistic logic as it is usually studied in proof theory (which I believe includes Kleene's system) it is true that from A and ~A you can prove any formula B. But it is not true in this system that ~~A proves A for every formula A. These are equivalent classically, and people often conflate them. But they are distinct intuitionistically. Think about the proof that ~~A implies A - the natural classical proof immediately uses the principle of the excluded middle. — Carl (CBM · talk) 02:10, 3 December 2007 (UTC)

Question: Are the drawing and the caption correct? The odd thing here is "halting" never comes into play. I could make another drawing with an output looper-box (instead of the predicate box) that produces output g = "1 = f(#f) halted" and then loops g forever, or if f does not halt, then outputs "0 = loop_forever" and halts g. Minsky 1967:148-9 shows 6 little drawings to exactly this effect. But this is what Enderton is calling the "self-reference approach" (p. 184). As Enderton notes: "...if this 'self-reference' construction seems too much like a magic trick, then there is a second way... the diagonalization approach ... which does not use an obvious self reference" (p. 186) [I'm not sure I'd agree with him on this last point]. Even if the caption's argument were correct, the argument may not be convincing because it again relies on the self-denying event that occurs when g is input its own number. We know that the predicate 1∸g(#g) = 1∸U(#g,#g) inverts the result of g(#g) so that we still have to observe a contradiction, correct? Bill Wvbailey 20:51, 4 December 2007 (UTC)

It isn't necessary to look at universal machines here, or the actual way in which functions are computed. The key point is that if it is possible to decide whether ${\displaystyle \phi _{n}(n)}$ exists, as a function of n, then the following function is computable:
${\displaystyle g(n)={\begin{cases}\phi _{n}(n)+1&{\mbox{if }}\phi _{n}(n){\mbox{ is defined}}\\0&{\mbox{if }}\phi _{n}(n){\mbox{ is not defined}}\end{cases}}}$
Since this function is computable, it must equal to ${\displaystyle \phi _{e}}$ for some e. This, if you check, leads to a contradiction. It isn't necessary to worry about how g is computed, only that it is, in fact, computable. — Carl (CBM · talk) 21:18, 4 December 2007 (UTC)
But the how (the intensional aspect), of course, is the whole crux of the issue concerning what an article on the "halting problem" ought to discuss. ISTM that to see the problem as relating to halting, it's necessary to refer to the behavior of some finitary process used for function-definition (TMs, mu-recursion, whatever). Aside from the how is the more-arid what of the mathematics, which the proof I posted below deliberately carried to an extreme. --r.e.s. (talk) 23:15, 4 December 2007 (UTC)
The article should discuss the relation to programs and algorithms, I agree. But it shouldn't take a viewpoint of the halting problem from 1936, ignoring all progress in computability theory in the meantime. — Carl (CBM · talk) 23:39, 4 December 2007 (UTC)
That's pretty funny! I sure wouldn't want to settle for the antiquated insight that Turing had about this problem. ;) --r.e.s. (talk) 01:07, 5 December 2007 (UTC)

#### drawing / figure

Do either of you believe that a drawing should accompany the article? Carl, as you can see from the 2nd drawing, I've come to accept the Soare proof and believe I understand it. I like it. But I sense, if that is the right word, that R.e.s. objects because it seems to dismiss the mechanical aspect w.r.t. "halting". Indeed I now see that the Soare proof is more abstract (more generalized, more broad in context) because no particular "process" does the enumerating; the enumerating just exists as a table (inside or outside g -- g just has to have access to it). So it can be created in any manner -- by recursion, Turing machine equivalent, lambda-calculus etc. The way I'm reading what's happened here is that the article's proof, as it now stands, has adopted a Turing-equivalent process to do the enumeration and then compressed what I am calling Soare's PART A (the diagonalization demonstration of the non-existence of g(index_g)) and PART B (that this causes a contradiction when one assumes that a halting function exists). In this compression something may have been lost.
Frankly, I think (and comments over the past two years since I came to this article, my first wiki article to work on, bear this out) that more than one explanation is required. I'd suggest that the second Soare proof be added. (I believe that Turing actually framed his first proof in the Soare way, dismissed it as too abstract and unconvincing, and then went on to frame it with an actual construction presented as a reductio ad absurdum).
I see no harm in putting in a bit too much for some folks -- e.g. an example of diagonalization table f(n)+1 -- if it helps the reader. Ditto for a drawing, if we can agree more or less on what it should contain. Comments? Thoughts? Bill Wvbailey (talk) 16:25, 6 December 2007 (UTC)
Although more than one explanation might be good for a class, I don't think it's appropriate for a wiki article. The more common question is whether we should include a proof at all. For this article I think the answer is clearly "yes". But that doesn't mean that we should include every possible proof (this is a recurring issue at the article on the incompleteness theorems, which needs pruning again as we speak).
I do think that the table illustrating the diagonalization method, as well as some text explaining how the proof can be viewed as a diagonalization, is worthwhile. I copied the table from the talk page, changed it to match the proof written by r.e.s., and added back the diagonalization text. Thoughts? — Carl (CBM · talk) 17:13, 6 December 2007 (UTC)
Carl, I'm happy that you re-inserted the diagonalisation paragraph (and the picture's nice — probably will help some folks). When I did the revision, I'd noticed that the diag. para. was still referring to a "contradiction" (a remnant of the previous proof), and didn't quite have the energy to rewrite it; I figured you would, and am glad to have guessed correctly ;).--r.e.s. (talk) 23:27, 6 December 2007 (UTC)
Bill, maybe you misread me a little bit, in the sense that some of what I wrote was in reaction to a perceived tendency to de-emphasise the essential role of "programs" and "halting", or their equivalents. I think two things a proof-sketch should emphasise are (1) that "computable" is understood as meaning "computable by a program (or its equivalent) in some Turing-complete model" — e.g. formulas in the mu-recursion model are also "programs" — and (2) that key steps in the proof (including those like Soare's) involve an appeal to how a function is computed. The present version does both of these things (in either incarnation as a direct or by-contradiction proof). Soare's could also be elaborated slightly to the same end, although I don't think the result would necessarily be an improvement. I hope the article itself does not get cluttered with multiple proofs that would overwhelm a reader. --r.e.s. (talk) 23:27, 6 December 2007 (UTC)
w.r.t. your first point: if a second proof were to be added I'd suggest a sub-article that moves all (i.e. three-four at most) good proofs there, and goes into deeper detail, perhaps proceeding in a historical progression. It could give (briefly) Turing's original version, then e.g. Davis's original halting proof ca 1956 (which as I remember is really brief), maybe the one that is currently in the article, and then the best of the best, which could well be Soare's.
w.r.t. incompleteness theorems: I agree. The whole "package" of incompleteness-theorems articles seems sprawling, perhaps repetitive. I would suggest the main article be a general overview -- perhaps followed by a proof sketch or sketches -- that outlines the proofs' main notions and assumptions : e.g. an axiomatic system that contains enough arithmetic to concoct "add" and "multiply" and to express "This proposition is not provable" (liar's paradox) within its own constructs, "truth ≠ provable", the fact that G's proof has to do with symbol strings (aka proofs leading to theorems) that talk about their own content, Goedelization, where the diagonalization etc is hidden, etc, etc. Plus some history, examples of abuses, philosophic import etc. And then a more thorough treatment in the subarticles. or perhaps move all the proof-stuff to sub-articles.
w.r.t. the table: I think it looks good there. That diagonalization is at the heart of this is key ... with a caveat. In the Turing-mechanical sense, what exactly does happen when f calculates its own index? (Turing's version explains why it never makes it to its own diagonal number (because it has to recalculate all the previous numbers from scratch); what this means to the proof I now understand after coming to grips with the Soare proof and your elucidations of it.) "What really happens during a self-referential diagonal calculation/computation?" has got to be a question that comes up again and again. Bill Wvbailey (talk) 19:24, 6 December 2007 (UTC)
Are there more than two articles on the incompleteness theorems? I thought there was just the main article and the 'proof' article.
The question of what happens during a "self-referential" computation does come up, and I know I have seen something in print but I don't know where. The short answer is that some sort of infinite loop must occur. Exactly how this loop occurs can be determined in a particular instance by examining in detail exactly what is going on in the computation, especially if it proceeds by a universal Turing machine.
In the proof in the article, it's important to note that there is no assumption that the procedure that computes f tries to do so with a universal Turing machine. It could be that f does some other calculation (for example, it just checks whether the input is even or odd). No matter how f is computed, the mere fact that it is a total computable function is enough to conclude f is not the same as h. — Carl (CBM · talk) 19:39, 6 December 2007 (UTC)
There's a rather brief article under On Formally Undecidable Propositions of Principia Mathematica and Related Systems, the main article, and just one detailed article of the 1st incompleteness theorem. I thought there was a separate one for the 2nd theorem, but I was mistaken. That should be enough, don't you think? It's more the issue of the need for a frame around the content. Maybe a "ramifications" article could spin out of the main article, i.e. history, examples of abuse, philosophic import etc as its own article.
I agree about the universal machine. Correct me if I'm wrong: Really, the whole enterprise f could be in a TM's TABLE. The only reason to use a universal machine would be if f had to "simulate" the number-as-f's-index for some reason (as it did in Turing's proof). My conclusion a year or so ago would be that the "self-referential" calculation would indeed result in a loop of some sort. I even drew this up and posted Minsky-like drawings showing this (they're in the archives of this page but I'm not sure now they are correct). I would suppose one could actually simulate this behavior on a home-brew counter machine or Post-Turing machine with an "oracle":
Case 1:
MACHINE: "Will this computation halt?"
ORACLE: YES. The computation then goes into a tight loop.
Case 2:
MACHINE: "Will this computation never halt?"
ORACLE: NO. Computation halts.
Fundamental assumption: ORACLE never ever lies, cannot lie, is god. So is the machine lying? We can see with our own eyes what it has done. And once in those states it cannot come out of them. Therefore the ORACLE is a liar, contradicting some assumption, somewhere. Perhaps spread over time the outcome is a different story: at time t1 ORACLE is correct (machine will indeed halt) but at time t1+ the machine becomes willful and deceitful and at t2 tricks ORACLE by going into a loop. What a mess.
I've never seen any discussion of this anywhere. Bill Wvbailey (talk) 22:03, 6 December 2007 (UTC)
The assumption in the proof is that the oracle is computable. Otherwise, the function that invokes the oracle won't be computable simpliciter but will only be computable relative to the oracle. — Carl (CBM · talk) 22:10, 6 December 2007 (UTC)

### continuing discussion with r.e.s.

Yes, using an enumeration of the computable functions would be my preferred way of making the proof precise, if that was necessary, but I don't think it is necessary. For comparison, here is the proof of the unsolvability of the halting problem from Soare (Recursively enumerable sets and degrees, 1987):

4.4. Proposition. K is not recursive.
Proof. If K had a recursive characteristic function χK then the following function would be recursive,

${\displaystyle f(x)={\begin{cases}\phi _{x}(x)+1&{\mbox{if }}x\in K,\\0\quad &{\mbox{if }}x\not \in K.\end{cases}}}$

However, f cannot be recursive since ${\displaystyle f\not =\phi _{x}}$ for any x.

While this would need slightly more elucidation in Wikipedia, it can't be said that Soare is not proving the theorem because he fails to refer to a model of computation. — Carl (CBM · talk) 19:13, 2 December 2007 (UTC)
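For readers working through Soare's proposition, the construction of f can be sketched in code. The decider `in_K` and interpreter `run` are hypothetical parameters; the proposition says no computable `in_K` can exist, so this only exhibits the shape of the diagonalization:

```python
def f(x, in_K, run):
    """Soare's diagonal function: phi_x(x) + 1 if x is in K, else 0.

    in_K: hypothetical total decider for K = {x : phi_x(x) is defined}.
    run:  interpreter with run(x, y) = phi_x(y).
    Both are assumptions for the sake of contradiction."""
    if in_K(x):
        # x in K means phi_x(x) converges, so run(x, x) is defined.
        return run(x, x) + 1
    return 0

# If in_K were computable, f would be total computable, yet f differs
# from phi_x at the argument x for every x: if phi_x(x) is defined,
# f(x) = phi_x(x) + 1; if undefined, f(x) = 0 is still defined.
# So f equals no phi_x, contradicting that every total computable
# function occurs in the enumeration phi_0, phi_1, ...
```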

That's a great example — although there *is* a particular model referenced there (according to the Soare article I mentioned, "recursive" in Soare 1987 *means* "definable by a TM"), your point, as I take it, is that only the extensional properties of the model are needed for the proof, and not its intensional properties. Apparently, this extensional vs. intensional issue is at the heart of disagreements that have been aired, as to who is the intended audience of the article. --r.e.s. 21:00, 2 December 2007 (UTC)
In answer to a question by Bill, I thought I would post this to help explain Soare's proof (I'm sure Carl will correct me if I get anything wrong). Here the set K is defined as
${\displaystyle K\ {\overset {\underset {\mathrm {def} }{}}{=}}\ \{x\in \mathbb {N} :\phi _{x}(x){\text{ is defined}}\}}$,
and the intensional version of the problem, i.e. in terms of the implicit model of computation (Turing machines), is the set
${\displaystyle K=\{x\in \mathbb {N} :T_{x}(x){\text{ halts}}\}}$;
that is, the corresponding decision problem is to decide, given an arbitrary number x in N, whether the xth Turing machine eventually halts when started with its own index x on the tape, where
${\displaystyle T_{1},T_{2},T_{3},\dots }$ is an enumeration of all TMs, and
${\displaystyle \forall {n}\in \mathbb {N} ,\phi _{n}{\text{ is the function computed by }}T_{n}.}$
--r.e.s. 23:28, 2 December 2007 (UTC)
Yes, that's right about K. The contrast of extensional vs. intensional seems like a reasonable way to understand the issue. Some formal model of computability (or something equivalent to one) is used to define the class of computable functions and verify some basic closure properties, but after that it isn't necessary to focus on the particular model of computation employed. Our article on computable function covers this. The current wording is a sort of compromise between the intensional and extensional versions, which is a sort of compromise I can accept. — Carl (CBM · talk) 02:37, 3 December 2007 (UTC)
Thanks for both your explanations. I will go away and think awhile about them. (And also learn about "connotation" versus "denotation".) Bill Wvbailey 03:55, 3 December 2007 (UTC)

Q about a small proof-detail: Soare simply states "If K had a recursive characteristic function χK then the following function would be recursive", but that statement requires proof — which would be easy enough if the intensional properties of his computational model (Turing machines) were used to simply construct a TM that computes f as required. But if this is supposed to be free of reliance upon intensional properties, how would that little detail be proved? (E.g., has Soare 1987 already stated, as background to the theorem, that his φi's constitute an acceptable programming system, or the like?) --r.e.s. 08:18, 3 December 2007 (UTC)
I don't doubt Soare said that the model there is Turing machines, because it's equivalent and he does cover them, but his actual model is in terms of μ recursive functions (def 2.4). The presentation then goes through the normal form theorem using Kleene's T predicate, after which Soare says "From now on we shall describe an algorithm for a recursive function in ordinary mathematical terms and leave it to the reader to convince himself that this algorithm could be translated into one of the formalisms above." In Wikipedia, it would be very reasonable to explain why the function defined in the proof above is recursive, perhaps by sketching a method in which it can be computed. — Carl (CBM · talk) 19:53, 3 December 2007 (UTC)

If one likes the idea of extracting from the set of computable functions just the properties that lead to a "halting problem", why not go a little further and expunge all references to "computable" or "partial recursive" functions as well...

Suppose S = (φ0, φ1, φ2, ... ) is a sequence of partial functions φn: N → N (n in N).

In the following, "function" means "partial function", and to save space in writing,
[Q: f | g] denotes the function (for given functions Q, f, g) such that for all x,
[Q: f | g](x) = f(x) if Q(x) is defined
= g(x) if Q(x) is not defined

Condition D:
if [Q: 1 | 0] is in S, then [Q: D | 0] is in S,
for some function D (in S) that differs from Q everywhere Q is defined.

E.g., D might be Q+1, or D might be UNDEF (a function with empty domain), etc.

Halting Theorem:
Under Condition D, the following function is not in S:
h ::= [λx.φx(x): 1 | 0].

Assume h is in S. Then, by Condition D, a function of the following form is in S:

f ::= [λx.φx(x): D | 0],

where D(x) differs from φx(x) for all x at which the latter is defined.

Then f = φe for some e, and exactly one of two cases must hold:
either
(1) f(e) = φe(e) is defined, in which case
f(e) ::= [λx.φx(x): D | 0](e) = D(e) (a contradiction),
or
(2) f(e) = φe(e) is undefined, in which case
f(e) ::= [λx.φx(x): D | 0](e) = 0 (a contradiction).

Therefore f is not in S, hence h is not in S. Q.E.D.


An interesting aspect of this formulation is that it seems to establish a variety of "halting problems" when the sequence is composed of certain subclasses of the computable functions (so the problems are decidable in the larger class, but not the smaller one). For example, if the single function UNDEF were added to the basis functions in the development of the primitive recursive functions, it seems the result would be a class of functions larger than the primitive recursive functions but smaller than the mu-recursive functions, with a halting problem that's unsolvable in the former, but not in the latter. --r.e.s. 19:30, 4 December 2007 (UTC)
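The abstract argument above can be illustrated with a finite toy model, representing partial functions as Python dicts (an absent key meaning "undefined"). The tiny stand-in enumeration is hypothetical; it only shows how [Q: D | 0] diagonalizes out of the list:

```python
# Toy model of the [Q: f | g] construction over a finite stand-in
# "enumeration" of partial functions, each given as a dict
# (absent key = undefined at that argument).

def bracket(Q, f, g):
    """[Q: f | g](x) = f(x) if Q(x) is defined, else g(x)."""
    def h(x):
        return f(x) if x in Q else g(x)
    return h

# Hypothetical stand-ins for phi_0, phi_1, phi_2.
phi = [{0: 3}, {}, {1: 1, 2: 4}]

# The diagonal partial function Q(x) = phi_x(x).
Q = {x: phi[x][x] for x in range(len(phi)) if x in phi[x]}

# Choose D = Q + 1, which differs from Q wherever Q is defined.
f = bracket(Q, lambda x: Q[x] + 1, lambda x: 0)

# f differs from every phi_e at the argument e:
for e in range(len(phi)):
    if e in phi[e]:                  # phi_e(e) defined
        assert f(e) == phi[e][e] + 1
    else:                            # phi_e(e) undefined, f(e) = 0 defined
        assert f(e) == 0
```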

The general argument is interesting, and it does show how similar results can be obtained for generalized recursion theories like hyperarithmetical theory. But I think it goes a little too far afield for this article, which should focus primarily on the halting problem for computable functions. The general argument also strays away from the presentations common in the literature. — Carl (CBM · talk) 21:04, 4 December 2007 (UTC)
It's probably more accurate to say the proof above is a summary of what's commonly in the literature, with assumptions minimised to an extreme. If the words "sequence of partial functions" in the opening sentence were changed to "enumeration of computable partial functions", then, with Q = λx.φx(x), the proof is virtually identical to Soare's with D = Q + 1, and to the even-more-common form (e.g. in Cutland) with D = UNDEF. Of course "Condition D" is the crucial thing that seemingly all authors rely on — the computability of one function implied by the computability of another because of certain capabilities of the programs that compute them. --r.e.s. (talk) 23:43, 6 December 2007 (UTC)
OK, that's fair. I would replace the last phrase with because of closure properties of the set of computable functions, as you might suspect. — Carl (CBM · talk) 02:18, 7 December 2007 (UTC)

## Likebox edits

Likebox is, once again, indulging his apparent fetish for the word 'Quine' and inserting all sorts of nonsense into articles related to computability theory. I've reverted him twice; I would ask other editors to please look at the edits and weigh in, as I don't want to get into (another) edit war.

Zero sharp (talk) 21:05, 25 March 2008 (UTC)

You can't possibly be serious in thinking that this is nonsense.Likebox (talk) 21:10, 25 March 2008 (UTC)
I retract 'nonsense' as inaccurate and, most importantly, uncivil. Sorry. But honestly, I am getting _really_ tired of your apparent crusade. It is distracting, unproductive and frustrating. If you don't like the standard, classical, accepted presentation of concepts related to computability/recursion theory then please, by all means, write a book about it. But stop trying to force your view through repeated tendentious edits to articles such as this and (I seriously hesitate even to mention it) the Incompleteness Theorems. Zero sharp (talk) 21:17, 25 March 2008 (UTC)
I wouldn't do it if I wasn't 100% certain that it was both mathematically OK, and also clearer. I would never be able to get a book published on this--- I have negative authority and zero standing.Likebox (talk) 22:18, 25 March 2008 (UTC)
It's not clearer, and it's only (marginally) mathematically OK if you accept Church's Thesis. — Arthur Rubin (talk) 22:24, 25 March 2008 (UTC)
Ok--- I can see where this is headed. As for the mathematical quibbling: Church's thesis is not necessary, you only need Church's thesis to argue that any algorithm can be turned into a computer program, and this is a very specific algorithm--- print your own code, feed it to HALT, do the opposite. HALT is already assumed to be a program.Likebox (talk) 22:45, 25 March 2008 (UTC)
The section "formal statement" is not the right place for a proof sketch. On the other hand, there is already a proof sketch in the appropriate section, and the long discussion still visible above shows no agreement for the idea to phrase that in terms of quines. The issue is that there is no strong reason to invoke the recursion theorem to prove the unsolvability of the halting problem, even for 0-ary functions. — Carl (CBM · talk) 21:14, 25 March 2008 (UTC)
Except I didn't give a proof sketch, I gave an inkling of an outline, and a redirect. It's designed to bypass all this ridiculousness, and let a person make up their own mind.Likebox (talk) 22:14, 25 March 2008 (UTC)
This is exactly my problem; irrespective of whether your presentation(s) are correct or clear; "all this ridiculousness" is howlingly POV. Zero sharp (talk) 22:26, 25 March 2008 (UTC)
What's the POV? That you can quine? You are right about consensus, I was just testing the waters, to see if you've all changed your minds yet. Alas, the answer is no.Likebox (talk) 22:45, 25 March 2008 (UTC)
No, the POV (as I suspect you know) is referring to the presentation as given as 'all this ridiculousness' -- as to your second comment about your re-inserting of the contested text despite a call for discussion first, I would refer you to WP:POINT.Zero sharp (talk) 23:03, 25 March 2008 (UTC)
I wasn't referring to the discussion in the article as "all this ridiculousness", I was referring to these types of discussions here as all this ridiculousness.Likebox (talk) 23:21, 25 March 2008 (UTC)

Yesterday's version had no proof sketch, outline, redirect, or other glimmer of proof in the "formal statement" section, and I don't see why one should be added to that section. On the other hand, we discussed the proof sketch section above, and there was a lack of agreement with the idea that using quines actually simplifies the proof. It's an extremely roundabout way to arrive at a proof that is just a simple diagonalization. — Carl (CBM · talk) 00:08, 26 March 2008 (UTC)

### Third opinion

Does a reliable source discuss quines in relation to the halting problem? If not, then we should probably not put forth those thoughts here. If so, there should be no reason the article cannot include one or two sentences about the matter, relating what the reputable references states. Vassyana (talk) 00:10, 26 March 2008 (UTC)

No, the standard textbook proof of the halting problem that can be found in numerous textbooks doesn't go through quines. That isn't to say that I can certify that no book whatsoever mentions them - but if any do, it's exceedingly uncommon, and not particularly due weight to mention them prominently here.
Actually, the term "quine" itself is uncommon in computability theory; they are mostly considered a novelty. The result that Likebox is citing is usually known as Kleene's recursion theorem; the program that Likebox is constructing is not actually a Quine (computing) at all (quines always halt). — Carl (CBM · talk) 00:18, 26 March 2008 (UTC)
I didn't mean to restart these endless debates--- they are not productive, and they lead to bad feelings. I hope this will be addressed in the future with a little blurb, but until then, peace.Likebox (talk) 20:39, 26 March 2008 (UTC)

## Halting Problem and machines with finite memory

Can someone explain why the halting problem is decidable for machines with finite memory in light of the fact that as long as the memory is large enough to hold the program e (from the proof of undecidability) then all the arguments from the proof still hold?

Let's say you feed the halting function the program e: h(e,e). Well then it would recurse until it ran out of memory. Because of this (in the finite state scenario) it would return 0. But then h(e,e)=0 and g(e)=0 and that is still a contradiction even with finite memory. Where am I going wrong with my logic? --116.76.162.2 (talk) 18:37, 7 April 2008 (UTC)

What it is saying is that if you know ahead of time the machine will never use more than X units of binary memory, you can decide whether it is going to halt. Suppose it has S states as well. Then there are only ${\displaystyle S\cdot 2^{X}}$ possible configurations of the machine (if the memory is binary). So if it runs for ${\displaystyle 1+S\cdot 2^{X}}$ units of time without halting, it must have been in the same configuration at least twice, by the pigeonhole principle, and so it is in an infinite loop. Thus ${\displaystyle S\cdot 2^{X}}$ is an upper bound on how long it could take to halt, which means you can decide whether it will halt.
What goes wrong in the proof of undecidability is that the function h is not assumed to run with a bounded amount of memory. — Carl (CBM · talk) 18:54, 7 April 2008 (UTC)
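The pigeonhole argument above translates directly into a decision procedure. A sketch, assuming the machine is modeled as a deterministic step function over a finite set of configurations (that modeling is my assumption, not from the discussion):

```python
def halts_finite(step, start, halt_states):
    """Decide halting for a machine with finitely many configurations.

    step: maps a configuration to its successor configuration.
    If a configuration ever repeats, determinism implies the machine
    is in an infinite loop and will never halt (pigeonhole principle).
    """
    seen = set()
    config = start
    while config not in seen:
        if config in halt_states:
            return True
        seen.add(config)
        config = step(config)
    return False  # repeated configuration: infinite loop

# Toy machine with configurations 0 -> 1 -> 2 -> 1 -> ...
looper = {0: 1, 1: 2, 2: 1}
assert halts_finite(looper.get, 0, halt_states={9}) is False
# Same transitions, but configuration 2 is a halting state.
assert halts_finite(looper.get, 0, halt_states={2}) is True
```

Tracking visited configurations in a set is equivalent to the run-for-`1 + S·2^X`-steps bound in the comment above, but it stops as soon as a repeat occurs.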

Hmm... okay. So to show that h is not computable wouldn't it be enough to show that h(e,e) is endlessly recursive (which is easier than the proof laid out in the article)? --218.18.203.94 (talk) 15:35, 8 April 2008 (UTC)

You can't easily prove h(e,e) is endlessly recursive because that assumes you already know how the machine e works. Naively, it is possible that there is some nonrecursive method of calculating h. The actual diagonalization proof doesn't make any assumptions about exactly how h is computed by e, it only assumes that e computes h somehow. — Carl (CBM · talk) 15:54, 8 April 2008 (UTC)

Do you think it's possible that there is a nonrecursive method for calculating h? Is it possible to prove either way? --218.18.71.10 (talk) 11:28, 9 April 2008 (UTC)

The proof (see the article here) shows that h cannot be computed by any program, regardless of how that program works. The proof doesn't assume anything about the program e except that it really is a program that computes h. — Carl (CBM · talk) 14:05, 9 April 2008 (UTC)
Besides the fact that the proof doesn't go through, it's straightforward to determine whether any Turing machine with a fixed, finite amount of memory halts: it has a finite (although exponential) number of possible machine configurations, and so it can only fail to halt by entering a loop; a universal machine simulating this machine could simply keep a list of all visited configurations to detect a loop, or simply run it for M + 1 steps, where M is the total number of configurations, then declare that it does not halt if it has not yet halted (since it must be visiting a configuration it has visited before at this point, by the pigeonhole principle). Dcoetzee 19:43, 25 June 2008 (UTC)

## "executable proof"

Today another code "implementation of the proof" was added to the article. In general, I don't think that these help the intended reader. One reason is that there are too many details of the programming language that make the program hard to read; we don't want to assume that everyone who comes here is fluent in C++. In this case, the code even did nefarious things like casting a function pointer. A second reason is that we already have a fully explained proof; if that isn't clear enough, let's improve it, but there is no need to duplicate it. — Carl (CBM · talk) 00:30, 25 April 2008 (UTC)

## pseudo-code proof

I visited this article about a year or so ago and it had a very simple proof by contradiction written in a pseudo-language. It has been replaced by a far less intuitive, almost brute-force approach. I think at the very least, the simple version should be provided alongside this drawn-out 'constructive' proof. —Preceding unsigned comment added by 131.107.0.73 (talkcontribs)

IIRC, it wasn't "valid" pseudo-code (he created his own language with quining as a primitive operation), and the pseudo-proof wasn't correct. You're welcome to add a valid proof-by-computation. — Arthur Rubin (talk) 17:52, 25 June 2008 (UTC)
I think Arthur is being far too generous; and I echo Carl's sentiments above. These so-called proofs do nothing to help the understanding of the average reader, and more often than not just serve to confound and obscure the issue even on the rare occasion when they're valid. But I'm willing to have my mind changed. Zero sharp (talk) 19:09, 25 June 2008 (UTC)
I think 131.107.0.73 means the version here: [1], with the "trouble" program. That proof was in terms of "algorithms" and "programs"; the only difference in the current article is that it is in terms of functions, as it would be in a contemporary published text. The function "trouble" in the linked version corresponds exactly to the function g in the current version. Another advantage of the current version is that it has a more professional tone, avoiding the large amount of connotation that comes with the use of the word "trouble" for a function. Nevertheless, it would be simple to add pseudocode for g, so I'll do so. — Carl (CBM · talk) 00:24, 26 June 2008 (UTC)
However, g is a mathematical function, not a program; so calling the program that computes it "function g" is sure to confuse the innocent. A better name would be something like "procedure compute_g", so I'll change it.
r.e.s. (talk) 03:59, 26 June 2008 (UTC)
That's a good idea. The motivation for reworking that section was precisely to handle the difference between computable functions and the programs that compute them. — Carl (CBM · talk) 11:14, 26 June 2008 (UTC)
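For reference, the procedure `compute_g` under discussion can be sketched as follows; `halts_on_self` is a hypothetical decider for "program i halts on input i", whose nonexistence is exactly what the proof establishes:

```python
def compute_g(i, halts_on_self):
    """Procedure computing g, given a hypothetical total decider
    halts_on_self(i) for 'program i halts on input i'.

    No such decider exists; this sketch only shows the construction
    used in the proof by contradiction."""
    if halts_on_self(i):
        # "Program i halts on input i": make g(i) undefined by looping.
        while True:
            pass
    else:
        # "Program i runs forever on input i": halt and return 0.
        return 0
```

If e were an index for a program computing `compute_g` itself, either answer from `halts_on_self(e)` is refuted by the program's own behavior on input e, which is the contradiction.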

## Section "Can humans solve the halting problem?" removed from article

I removed the section "Can humans solve the halting problem?" from the article for these reasons:

• The section cites no sources.
• The section seems to be original research.
• The section claims to prove that the halting problem cannot be solved by humans, and then as far as I can tell fails to prove it even informally.

The idea behind this section seems important, and has many implications: can humans continue to make progress on the halting problem indefinitely, if we are given enough (perhaps infinite) time? I support writing a new section on the topic of humans and the halting problem, hopefully including references to existing research. James Lednik (talk) 12:31, 22 August 2008 (UTC)

This is one of the major problems in the philosophy of mind. S.B. Cooper mentions it in his 2004 book (on computability). Of course nobody knows the answer, so besides stating the problem any discussion is bound to be purely speculative. VG 22:49, 21 September 2008 (UTC)
I support adding a small note in another section, or another small section, mentioning the question of whether humans can "solve" (whatever that means) the halting problem or related concepts. James Lednik (talk) 09:35, 3 October 2008 (UTC)
I also think this section is important, because the halting problem is frequently cited by proponents of weak AI as an example of how humans can solve problems that machines can't. Dcoetzee 16:11, 5 February 2009 (UTC)

Can any possible human solve the halting problem? The proof can be written the same way as for software:

• listener(talker) is a person who listens to the talker and thinks about what he heard, but the listener may finish any time the listener chooses or never finish.
• MrKnowItAll(listener, talker) is "a person who is (or knows) a solution to the halting problem" because, in finite time, he always says true if listener(talker) would finish, and always says false if listener(talker) would not finish.
• MrRebel(MrNormal) is a person who refuses to think like anyone else and he knows that MrKnowItAll is always right. If you ask MrRebel about MrNormal, MrRebel would ask his friend MrKnowItAll if MrNormal would ever stop thinking about MrNormal. MrRebel always chooses to think about MrNormal for an opposite amount of time than MrNormal would think about MrNormal. If MrNormal would think about MrNormal forever, MrRebel always stops thinking about MrNormal. If MrNormal would stop thinking about MrNormal, MrRebel always thinks about MrNormal forever.
• You may think these people are insane and nobody can possibly think like that, and you would be right, but let's save that until the end of this proof.
• For "every possible person that is (or knows) a solution to the halting problem", we will call them MrKnowItAll and we will find a person who likes to disprove other people and cause him to think the same as MrRebel and believe that MrKnowItAll is always right.
• Ask MrRebel about MrRebel. If he would finish, MrKnowItAll(MrRebel,MrRebel) would say he would not finish.

If he would not finish, MrKnowItAll(MrRebel,MrRebel) would say he would finish.

• MrKnowItAll always answers incorrectly about at least 1 person (There is no shortage of rebels wanting to disprove him), therefore MrKnowItAll is not "a person who is (or knows) a solution to the halting problem".
• "every possible person that is (or knows) a solution to the halting problem" is not "a person who is (or knows) a solution to the halting problem".

BenRayfield (talk) 08:19, 7 February 2009 (UTC)

• I should have said listener(talker) may choose to listen to the talker or may only think about what the talker would say if asked to talk, and the listener is a skilled neuroscientist who can cut open the talker's brain (and any other parts of his mind you may think exist) and see every detail about it.

BenRayfield (talk) 08:36, 7 February 2009 (UTC)

## Why K?

Why is the halting set so often represented with K? Is it named after Godel's K={n:Rn(n) is not provable} (where Rn is the nth formula with one free variable in the language of arithmetic)? If so, does anyone have a reference for this so it could be included in the article? skeptical scientist (talk) 04:00, 20 October 2008 (UTC)

---

This isn't the answer you're after -- you're after a source or "first usage" -- but maybe it will spur someone else's recall. I looked at Turing's original paper, and he uses "K" as the "description number" of the testing machine "H". While executing "K", the tester machine H locks up in an infinite loop. The other place Turing uses bold-face K is to represent the "functional calculus" of Hilbert and Ackermann 1931: "I propose, therefore, to show that there can be no general process for determining whether a given formula U of the functional calculus K is provable ..." (p. 145 in The Undecidable). I went to Kleene 1952 p. 382, believed to be the first known statement of "the halting problem" and where Martin Davis says that he got the notion [see Archive #2 of this page for background on this]. "As another example, there is no algorithm for deciding whether any given machine, when started from any given initial situation, eventually stops" (Kleene 1952: 382). But "K" is nowhere to be found. Nor is it to be found anywhere else in Kleene 1952 that I can see. Nor in Minsky's 1967 treatment of the "halting problem" etc (pp. 146-155). I don't have a cc of Davis's 1958 text where he first presented the "halting" proof by that name; I've seen it, but I didn't cc it and can't remember whether it had a K in it. Bill Wvbailey (talk) 15:35, 20 October 2008 (UTC)

## proof fails

The proof fails because the composite algorithm does not retain normal function if run on itself... In this case an infinite regress occurs and it is undefined regardless. The proof seems to try to conceal this fact by referring to g as "program e" when what is going on is that we are presented with the function g(h(g(h(g(h(... , g(h(g(h(g(... ) which is undefined and does not require that h(g(..),g(..)) equal 0. —Preceding unsigned comment added by 96.32.188.25 (talkcontribs) 2008-11-12T14:10:44

The function g is not run "on itself", since g takes a natural number as input. What is shown is that for the particular natural number that encodes a program for g, the function g gives the wrong answer if it halts at all. There is no infinite regress of the form g(h(g(h(g(h(... , g(h(g(h(g(... ) in the proof; there is no requirement that g must run the program e at all. — Carl (CBM · talk) 16:56, 12 November 2008 (UTC)
There are different forms of the undecidability proof. The anonymous correspondent may be confusing it with another version. For an in-depth description of a proof that does get locked into an infinite regression -- Turing's original first proof -- see Turing's proof. The machine, executing an hypothesized decision-function that always yields a YES-NO outcome, attempts to compute its own diagonal number and gets locked into a loop, never succeeding to yield the YES-NO decision, contrary to hypothesis. Davis offers up a similar proof, as does Minsky. Bill Wvbailey (talk) 19:05, 23 November 2008 (UTC)

Incorrect. These algorithms are designed to take numerical encodings of other algorithms, and therefore the fact that g takes natural numbers as input is irrelevant. That natural number can be an algorithm encoding that it knows exactly what to do with.

The fact that g involves an algorithm encoding taking only itself as input logically dictates that g involves copying the input, which sets up an infinite recursion that disables h's function, but only in this contrived setup. The recursion occurs if H makes an attempt to trace the computations of the algorithm it received as an input encoding.

If H did not make such an attempt, it would not need the input along with a machine encoding to run it on.

## historical importance

Article says:

The historical importance of the halting problem lies in the fact that it was one of the first problems to be proved undecidable. (Turing's proof went to press in May 1936, whereas Church's proof of the undecidability of a problem in the lambda calculus had already been published in April 1936.)

Is there some reason this paragraph doesn't mention Hilbert's second problem (the consistency of arithmetic) as undecidable per Gödel 1931? 67.122.210.149 (talk) 21:00, 25 November 2008 (UTC)

That's a completely different kind of "undecidable". The consistency of arithmetic is definitely decidable in the sense relevant here. Write two programs. One of them prints out "Peano arithmetic is consistent"; the other one says "Peano arithmetic is inconsistent". One of the two programs is correct; therefore the question is decidable (as is any question that requires only a finite number of answers).
I don't want to leave the impression that we don't actually know which program is right. We do, for any reasonable value of "know". We just can't prove it in Peano arithmetic itself. --Trovatore (talk) 21:09, 25 November 2008 (UTC)
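Trovatore's two-program trick can be made concrete with a deliberately trivial sketch (the program names here are invented for illustration):

```python
# Two constant programs; exactly one of them correctly answers the
# single yes/no question "is Peano arithmetic consistent?".
def program_a():
    return "Peano arithmetic is consistent"

def program_b():
    return "Peano arithmetic is inconsistent"

# One of these two is a correct decision procedure, so the question is
# decidable in the technical sense -- even if we could not tell which
# of the two programs is the right one.
print(program_a())
```

The same move works for any question with only finitely many possible answers: some member of a finite family of constant programs is guaranteed to be correct.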

## each proof has to be developed specifically for the algorithm at hand

This quote gives an irrelevant reason for halting being known for specific algorithms. The task of choosing a method of proof that works for a specific algorithm is done by the "computer scientist" and is subject to the same limits as the halting problem. The set of specific algorithms that the best possible "computer scientist" can never develop a proof for is the same set of algorithms that the halting problem prevents from being proven.

"has to be developed" should be changed to "is usually developed".

Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But each proof has to be developed specifically for the algorithm at hand;

## This proof that the halting problem is unsolvable is simpler. Should it go in the article?

wouldHalt(p, i) = { Solution to the halting problem in finite time. Returns true if p(i) halts, else returns false }

unpredictableProgram(ignoresAllInput) = { if wouldHalt(unpredictableProgram,ignoreThis) then loopForever else halt }

For every possible program that solves the halting problem, name it wouldHalt and create the above unpredictableProgram. wouldHalt(unpredictableProgram) always gives the wrong answer, proving that the number of programs described by "every possible program that solves the halting problem" is 0. —Preceding unsigned comment added by BenRayfield (talkcontribs) 10:32, 5 February 2009 (UTC)

Actually, this is the same construction already in the article. Except you are making it more complicated by applying the recursion theorem. It isn't necessary for the unpredictableProgram to know its own index, it's only necessary that it has an index. — Carl (CBM · talk) 15:48, 5 February 2009 (UTC)

I see why that is more complicated and have rewritten it. This proof is easier to understand. Should it go in the article?

• wouldHalt(program, input) = { Solution to the halting problem in finite time. Returns true if program(input) halts, else returns false }
• rebelProgram(normProgram) = { if wouldHalt(normProgram,normProgram) then loopForever else halt }
• For "every possible program that solves the halting problem", name it wouldHalt and create the above rebelProgram. It is a rebel because it does the opposite of its input.
• If rebelProgram(rebelProgram) would halt, wouldHalt(rebelProgram,rebelProgram) always says it would not halt.
• If rebelProgram(rebelProgram) would not halt, wouldHalt(rebelProgram,rebelProgram) always says it would halt.
• wouldHalt always answers incorrectly for at least 1 program, therefore wouldHalt does not solve the halting problem, and as defined above, wouldHalt is a "solution to the halting problem".
• "every possible program that solves the halting problem" does not solve the halting problem.

BenRayfield (talk) 07:12, 7 February 2009 (UTC)
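A minimal runnable sketch of the bulleted construction (names invented; would_halt below is a deliberately naive stand-in candidate, not an actual solution, since none can exist):

```python
# Sketch of the diagonal construction.  would_halt is a *candidate*
# halting decider -- here a naive stand-in that claims everything halts.
def would_halt(program, argument):
    return True  # naive candidate: always answers "halts"

def rebel_program(program):
    # Does the opposite of what would_halt predicts for program(program).
    if would_halt(program, program):
        while True:
            pass  # loop forever
    else:
        return  # halt

# would_halt claims rebel_program(rebel_program) halts, but by
# construction it would then loop forever, so the candidate is wrong
# on at least this one input.
print(would_halt(rebel_program, rebel_program))  # → True (a wrong answer)
```

Any other candidate falls into the same trap: whichever answer it gives for rebel_program run on itself is wrong by construction.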

Now you have literally the same proof in the article. The program wouldHalt is the program that computes the function f, and rebelProgram computes the function g. The benefits of the way things are written in the article are:
• The proof in the article goes into slightly more detail, which is omitted by the bullets above.
• The proof in the article is careful to distinguish between computable functions and the programs that compute them.
• The names of the functions are simply f and g, which gives a more professional tone than "rebelProgram".
— Carl (CBM · talk) 13:32, 8 February 2009 (UTC)
Ok, it's the same proof, but I still think the article should be more like my version.
• If 1 more person will understand my version than the article, that justifies a less professional tone. Words that give people intuition about the proof (like rebelProgram) should be used if the proof remains correct.
• The "more detail" could stay in the article, and only change the parts my version includes.
BenRayfield (talk) 01:29, 9 February 2009 (UTC)
Yes, of course. ... said: Rursus (bork²) 06:45, 9 February 2009 (UTC)
I'm not particularly fond of the unprofessional tone. The goal here is to be a reference work. The "1 more person" argument could be used equally well to remove the informal bits that are in the article now, so that people who are more familiar with computability theory would find the article easier to understand.
The text that is in the proof section at the moment is neither excessively formal nor particularly hard to understand. It does require actually reading it, but that's always going to be the case. Renaming the functions (which is what your proposal amounts to) would not make the diagonalization significantly simpler to understand. — Carl (CBM · talk) 13:00, 9 February 2009 (UTC)

## Clarification needed

OK, so my marking clarification needed here is deleted immediately, because there is no explanation on the talk page. Now comes the motivation:

• this article is far too large; the halting problem, its (lack of) solution, and its proof could be formulated very shortly and precisely, perhaps in a single paragraph, except nobody is going to immediately understand the answer. As much as I can see by this source (and my personal pattern matching), the halting problem is related to Gödel's incompleteness theorems because of using an "own-tail-biting" technique like the one used for the incompleteness theorems.
• as a reader, I immediately searched for the exact formulation of the problem, its outcome and its proof. I found the proof but not any formulation, nor any outcome. The exact formulation of the problem might sound like this:
Halting Question: could we design a method/algorithm/program that can always decide whether any arbitrary program stops or not given that we know exactly what input the program is going to be served?
Halting Answer: NO, or more precisely we cannot design a method/algorithm/program that can always decide whether an arbitrary program stops or not given that we know exactly what input the program is going to be served.
Answer Reason: because the program P that we are wishing to examine might actually call the program Q that decides whether it is stoppable, and if our program P calls Q and then decides to run an infinite loop if it gets the answer "YES: IT WILL STOP", then Q is bound to fail because P is designed so that Q shall always fail.
1. to author a comprehensive article, according to my opinionated opinion, one should always strive to say what one wants to say as shortly and as precisely as possible, without regard to the audience. This statement is the embryo, and it should be worked on until it is as comprehensible as possible without compromising the intent of it. Some sentences just cannot be reduced very much without losing their meaning, especially as regards logic. Therefore many post-formulation clarifications are needed; according to the tradition of mathematics and physics, a few introductory statements, such as "in the area of function theory" and "let T be temperature", are perfectly acceptable, and almost all humans are able to push some sentences into their short-term memory before overloading (~ "stack overflowing"), but not half an article.

So, ... (dramaturgical pause) ... needed: 1. Halting Question, 2. Halting Answer, 3. Answer Reason (that is: proof), 4. the three previous should have a prominent position: they should be in the intro, and they should be elaborated in the article, where Sketch of proof is a starter for the elaboration of the proof; the section formal statement is a very good place holder for a formal statement, but I cannot read a formal statement in the text, which seems to get the halting problem very wrong (again read this source!). Thank you for your patience, I'll be back! ... said: Rursus (bork²) 09:06, 9 February 2009 (UTC)

And actually, this sentence in the Formal statement section:
M: The halting problem is famous because it was one of the first problems proved undecidable — S: there is no algorithm which always completes and always answers the question correctly.
M for main clause and S for sub-clause: the subclause S has, as far as I can see, no connection to the halting problem, because that is not what the halting problem is about. The subclause hangs in the air. ... said: Rursus (bork²) 11:39, 9 February 2009 (UTC)
Sorry: my Answer Reason is incorrect. I'll reformulate it to be nearer truth. ... said: Rursus (bork²) 12:54, 9 February 2009 (UTC)
The halting problem is famous because it was one of the first problems proven algorithmically undecidable. This means there is no algorithm which can be applied to any arbitrary program and input to decide whether the program stops when run with that input.
— Carl (CBM · talk) 12:56, 9 February 2009 (UTC)
By the way, the very first sentence of the article also has a precise formulation.
In computability theory, the halting problem is a decision problem which can be stated as follows: given a description of a program and a finite input, decide whether the program finishes running or will run forever, given that input.
And this is repeated at the top of the "formal statement" section:
The problem is to determine, given a program and an input to the program, whether the program will eventually halt when run with that input.
So if you cannot find a formal statement in the article, something is odd. The formulation at the bottom of the "formal statement" section is only included as an attempt to define what it means for the problem to be unsolvable; it is not included in an attempt to define the halting problem a second time. — Carl (CBM · talk) 13:06, 9 February 2009 (UTC)
Carl said:
"The halting problem is famous because it was one of the first problems proven algorithmically undecidable. This means there is no algorithm which can be applied to any arbitrary program and input to decide whether the program stops when run with that input."
That sounds like the halting problem, as far as I can tell. Sounds OK.
"By the way, the very first sentence of the article also has a precise formulation.".
Hmm, hmm, it is concise, but I can't grasp "a description". Now, when you pinpoint it, I realize it is pretty near my wanted problem formulation, maybe if I've read a little more carefully I would have seen it, but it might need some more scrutinizing...
"So if you cannot find a formal statement in the article, something is odd."
OK, something is "odd" here, and that's me and my method of scanning mathematically oriented articles. Foremost, my "oddness" is that I skim the text for the central formulation by discarding the easiest-to-understand elaborations as "noise" and try to immediately identify the toughest-to-understand sentences as the central point of the article, soak them up in my brain and let my brain fill in the missing pieces by itself. (It actually works pretty well, although I have to be critical towards my conclusions). I cannot decide on this, the article is maybe intelligible to me too if I read it a couple of times, but when skimming, the text is for me still a gray mass that is far too elaborate. I would wish the central points as I mentioned to be compacted and highlighted in front of the elaborate explanations, while I admit that the intro actually contains a version of the problem formulation. ... said: Rursus (bork²) 14:03, 9 February 2009 (UTC)

## Limiting autoanalysis

As far as I have seen, the only argument that the halting problem is undecidable is the fact that the question asks "is the halting problem decidable over ALL Turing machines". It would certainly be hard to analyze a test program, T, that can run at the same time as the Halting program, H, with no security on the flags H has. If T could read H's flags that determine whether or not T would terminate, T could easily act opposite to H's prediction and throw off program H. But if H and its flags were tools that only a compiler or IDE could access, then T couldn't be programmed to falsify H's output. This machine wouldn't necessarily break the conclusion drawn by Turing, since it works ONLY FOR THAT MACHINE and others that are based on that flag hiding. It is possible that one could make a machine that didn't hide these flags, and the Halting problem would be undecidable for that machine. So long as H cannot call itself as input nor T read H's flags, I don't see how the arguments for undecidability can stand up for THAT SPECIFIC MACHINE. If anyone has a way to show a flaw in that argument please do so, but assume that you can't simply find a backdoor in the machine itself, because I could simply respond that I would redesign it unless you could prove that there was no way that I could safeguard program H from being called by a malicious program (and for the sake of argument let's say that we are not including physically messing with the machine) MicroControl (talk) 00:32, 7 March 2009 (UTC)

It's not an argument. The diagonal argument doesn't have programs "T" and "H" running on the same machine, we have (more-or-less) "H" emulating "T". If there isn't a "universal" program (a program on the machine which can emulate, step-by-step, any program on the machine), then the precise diagonal argument doesn't apply. Most people in the field consider such a machine "uninteresting". — Arthur Rubin (talk) 00:46, 7 March 2009 (UTC)
In practical terms, the problem is the sentence, "if H and its flags were tools that only a compiler or IDE could access". Because H is a program, it isn't necessary to access H in order to simulate H – you can just copy all of H's source code and all the source code for a virtual machine, and never tell the IDE that you are using that code to simulate H. This is why it isn't possible to safeguard any program P from every malicious program, because the malicious program could have all of P's source code copied inside of it. — Carl (CBM · talk) 00:58, 7 March 2009 (UTC)
The diagonal argument that a computer virus identification program cannot be absolutely reliable might be notable in that article, in that, given a virus detection program X, a "virus" V could be written which does virus-like things only if X fails to detect V. — Arthur Rubin (talk) 22:19, 7 March 2009 (UTC)

## Article Halting proof fails

The proof is easily defeated by pointing out that we are assuming that a halting function couldn't just piecewise assign "halts" to itself instead of actually running on itself, which would preclude a function such as g() from being created.

Also, consider the following proof.

Consider a set of Universal Turing Machines that at some point simulate their input machine. Call the set S.

Now we construct another set of machines "G" that takes the encoding of an element of S and copies it, then simulates it on itself. G is a subset of S.

Therefore G can run on itself.

G(G) infinitely loops as defined, because it copies its own input and simulates, then the simulation copies, then simulates, and that simulation copies then simulates, etc. forever.

Thus any function G of this sort cannot be claimed to be able to do anything BUT loop when run on itself.

Thus g() in the article proof is nonsense. This fact alone cannot be used to rule out the possibility of a Halting Algorithm, because the halting algorithm might just piecewise assign "halts" to itself. —Preceding unsigned comment added by 96.32.188.25 (talk) 14:09, 23 March 2009 (UTC)

In summary, g() cannot be created from h() because h returns "halts" when run on itself regardless of input. —Preceding unsigned comment added by 96.32.188.25 (talk) 14:46, 23 March 2009 (UTC)

The halting algorithm would have to make special cases not only for itself but for every other program that happens to compute the same function it does. It turns out, as a consequence of the Halting problem (see Rice's theorem) that this determination cannot be made effectively. — Carl (CBM · talk) 15:48, 23 March 2009 (UTC)
Also, because the proof in the article makes no assumptions about how f runs, you cannot find a flaw in the proof by hypothesizing some strange behavior of f. No matter how f is written, the proof here still applies to it (that's the point). To show that a proof is incorrect you must actually demonstrate that a particular step in the proof is incorrect. — Carl (CBM · talk) 15:52, 23 March 2009 (UTC)

It is important to recognize that implicit assumptions are still assumptions. F takes both an algorithm and the input to the algorithm as F's input. Because it needs the input to do its work, it at least begins simulation to the degree necessary for this counter proof BY DEFINITION. The trick here is that the unique infinite loop problem is robust and doesn't depend on a very specific type of simulation to cause the problem. The fact that F needs the input of the simulated machine is enough - the second it tries to do something with that input when it is in the setup Turing describes, it ceases its normal function and goes into an infinite loop. Furthermore, the only time in practice the halting problem cannot be solved is when the only thing we can do to determine if an algorithm will halt or not IS TRACE (as in the case of an algorithm that halts depending on the distance between primes of greater and greater size)

The proof makes no assumptions at all about how f operates. The proof starts by allowing you to choose any computable function f that takes two arguments, and produces an example of an instance of the halting problem that f gets wrong. It makes no difference whatsoever how f tries to perform this computation. — Carl (CBM · talk) 13:59, 30 March 2009 (UTC)
In reality, it does. If f traces, then the one and only reason for the contradiction arrived at is that you have a convoluted setup that necessarily runs forever - not because it is a halting machine - but because it copies and traces, and during the trace it copies and then traces, and during that trace it .... ad infinitum.

## Has it been noted?

Formally a Turing machine M is a tuple (Σ, Γ, Q, δ) where Σ, Γ, Q are finite nonempty sets with Σ ⊆ Γ and b ∈ Γ − Σ. The state set Q contains three special states q0, qaccept, qreject. The transition function δ satisfies

δ : (Q − {qaccept, qreject}) × Γ → Q × Γ × {−1, 1}

If δ(q, s) = (q′, s′, h), the interpretation is that if M is in state q scanning the symbol s, then q′ is the new state, s′ is the symbol printed, and the tape head moves left or right one square depending on whether h is −1 or 1. We assume that the sets Q and Γ are disjoint. A configuration of M is a string xqy with x, y ∈ Γ∗, y not the empty string, and q ∈ Q. The interpretation of the configuration xqy is that M is in state q with xy on its tape, with its head scanning the left-most symbol of y. If C and C′ are configurations, then C →M C′ if C = xqsy and δ(q, s) = (q′, s′, h) and one of the following holds:

• C′ = xs′q′y and h = 1 and y is nonempty.
• C′ = xs′q′b and h = 1 and y is empty.
• C′ = x′q′as′y and h = −1 and x = x′a for some a ∈ Γ.
• C′ = q′bs′y and h = −1 and x is empty.

A configuration xqy is halting if q ∈ {qaccept, qreject}. Note that for each non-halting configuration C there is a unique configuration C′ such that C →M C′. The computation of M on input w ∈ Σ∗ is the unique sequence C0, C1, ... of configurations such that C0 = q0w (or C0 = q0b if w is empty) and Ci →M Ci+1 for each i with Ci+1 in the computation, and either the sequence is infinite or it ends in a halting configuration. If the computation is finite, then the number of steps is one less than the number of configurations; otherwise the number of steps is infinite. We say that M accepts w iff the computation is finite and the final configuration contains the state qaccept.

--Musatov
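The one-step relation defined above can be rendered as executable code (a sketch; the tuple layout of configurations and the dictionary encoding of δ are choices made here for illustration):

```python
# Executable rendering of the one-step relation C →M C′ defined above.
# A configuration is (x, q, y) with y nonempty; delta maps (q, s) to
# (q2, s2, h) with h in {-1, +1}.  Tape symbols are single-character
# strings and blank plays the role of b.

def step(config, delta, blank="b"):
    x, q, y = config
    s, rest = y[0], y[1:]          # s is the scanned symbol
    q2, s2, h = delta[(q, s)]
    if h == 1:
        # head moves right; pad with a blank if the right part runs out
        return (x + s2, q2, rest if rest else blank)
    # head moves left; pad with a blank if the left part is empty
    if x:
        return (x[:-1], q2, x[-1] + s2 + rest)
    return ("", q2, blank + s2 + rest)

# Example: a one-state machine that overwrites a '1' with '0' and accepts.
delta = {("q0", "1"): ("qaccept", "0", 1)}
print(step(("", "q0", "1"), delta))  # → ('0', 'qaccept', 'b')
```

Each branch corresponds to one of the four bulleted cases in the definition; iterating step from (ε, q0, w) until the state is qaccept or qreject traces out the computation of M on w.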

## Problematic sentences

Hi,

I find this language a bit problematic:

Given a specific algorithm, one can often show that it must halt for any input, and in fact computer scientists often do just that as part of a correctness proof. But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt. However, there are some heuristics that can be used in an automated fashion to attempt to construct a proof, which succeed frequently on typical programs. This field of research is known as automated termination analysis.

Namely, there is a mechanical way to prove termination for exactly those algorithms whose termination can be proven by such a (finite-length) mathematical proof: generate all such proofs up to a given length (the limit can be set arbitrarily high, higher than any human could hope to reach) and check them for correctness. If a proof is found, the algorithm does indeed terminate; if not, no such proof exists and it could not have been found by a human either. Hence a human is not "stronger" in this sense than a purely mechanical Turing machine.
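The exhaustive search just described can be sketched as follows (check_proof is a hypothetical verifier for some fixed formal system, and the binary alphabet is a stand-in; real proof enumeration would range over that system's actual syntax):

```python
# Sketch of exhaustive proof search: enumerate all candidate proof
# strings up to max_length and hand each to a verifier.
from itertools import product

ALPHABET = "01"  # stand-in alphabet for encoded proofs

def search_termination_proof(algorithm_code, max_length, check_proof):
    """Return the first candidate (up to max_length) that check_proof
    accepts as a termination proof for algorithm_code, else None."""
    for n in range(1, max_length + 1):
        for chars in product(ALPHABET, repeat=n):
            candidate = "".join(chars)
            if check_proof(candidate, algorithm_code):
                return candidate
    return None

# Toy verifier that accepts only the candidate "11", for demonstration.
found = search_termination_proof("prog", 2, lambda p, a: p == "11")
print(found)  # → '11'
```

Note this search halts only up to the chosen length bound; as Carl says below, nothing here mechanically separates halting from non-halting programs, since a non-halting program may have no termination proof at any length.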

Also WTF does the "See also: Watchdog timer" do here :D --SLi (talk) 21:46, 30 March 2009 (UTC)

The quote simply says, "there is no mechanical, general way to determine whether algorithms on a Turing machine halt". It is true that if the computation does halt you can tell this; but if it does not halt there is no general way to determine so by mechanical means (it may not be provable from a given set of axioms that the program does not halt). The quote above does not say there is no mechanical way to search for a proof that a program halts. — Carl (CBM · talk) 22:04, 30 March 2009 (UTC)
Somehow it seems like it says that there may be a "specific" way of proving some algorithm halts, while my argument is that if there is such a proof that a given Turing machine always halts, it can be found by an entirely general algorithm, not "specific" to the "algorithm at hand" (and certainly not only by a computer scientist). That's what disturbs me in this language. --SLi (talk) 22:27, 30 March 2009 (UTC)
Your argument shows how to find the proof, but the proof that your argument finds will still be specific to the algorithm in question (although this "specificity" may seem trivial). However, when computer scientists actually prove that some algorithm always terminates, they don't use an exhaustive search for a proof, they use some ad hoc argument, maybe involving loop invariants or something like that. Maybe the "has to be" could be changed to "is". — Carl (CBM · talk) 22:35, 30 March 2009 (UTC)
Yes, the proof is specific to the algorithm, unless you consider it a proof that the Turing machine that computes and checks those proofs terminates when given the algorithm. However if interpreted that way, the second part of "But each proof has to be developed specifically for the algorithm at hand; there is no mechanical, general way to determine whether algorithms on a Turing machine halt" seems, while true, misleading when combined with the first part, since theoretically the developing of the proof can be done entirely mechanically. --SLi (talk) 22:47, 30 March 2009 (UTC)
It's not completely clear to me that an exhaustive search for a proof counts as "developing" the proof. In any case, when people in computer science prove that algorithms terminate, they do not do so by making an exhaustive search for a proof (people don't prove mathematical theorems that way, either). The usual CS way of proving algorithms terminate is to do an ad hoc analysis of the algorithm at hand and write a custom proof for that situation. So if you want to change "has to" to "is", that seems OK.
But I don't believe the current language is actually inaccurate, since it could be argued that the exhaustive search does count as "developing" a proof, and nobody does exhaustive search anyway. And the text does not claim that it is impossible to search for a proof, only that there is no mechanical way to separate halting programs from non-halting programs. — Carl (CBM · talk) 01:39, 31 March 2009 (UTC)
Of course exhaustive search is not how it's done in reality, but then people don't write their algorithms in Turing machines either. Fast or slow is so far removed from the topic of this article, it's mostly just about whether it can be done in finite time by a Turing machine or not (especially since we are talking about the halting problem). The Turing machine model is so far from actual reality that I'm afraid bringing in people proving that an algorithm always halts in such a context only serves to confuse the reader into thinking it's something that a person can do but a Turing machine, as a computational model (not as a concrete machine) cannot.
I'll try to think if I could come up with some less confusing language. --SLi (talk) 01:49, 31 March 2009 (UTC)