
Talk:Gödel's incompleteness theorems



This is the talk page for discussing improvements to the Gödel's incompleteness theorems article. Please place discussions on the underlying mathematical issues on the Arguments page. Non-editorial comments on this talk page may be removed by other editors.

Archives: Index, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11

Recursion sucx?

By computer science jargon, the theorem says: Recursion sucx!. But we knew that! Said: Rursus 10:59, 4 August 2008 (UTC)[reply]

A criticism of this article (from Usenet)

The following criticism of this article was posted on Usenet. The last edit made by Aatu was apparently on 15 July 2005.

 From: Chris Menzel <...@...>
 Newsgroups: sci.logic,sci.math
 Subject: Re: the problem with Cantor
 Date: Mon, 11 Aug 2008 17:49:38 +0000 (UTC)
 On Mon, 11 Aug 2008 09:05:20 -0700 (PDT), MoeBlee <...@...>
 said:
 > ...
 > My point stands: Wikipedia is LOUSY as a BASIS for study of this
 > subject....
 There are some good articles, of course, but a big problem is that the
 entries are a moving target because of Wikipedia's more or less open
 revisability policy.  Case in point, a few years ago Aatu did a very
 nice job of cleaning up the first few sections of the terrible entry on
 Gödel's incompleteness theorems.  But a number of the problems in the
 article that Aatu had fixed seem to have been reintroduced.  The
 current statement of the first theorem is now both wordy and inaccurate
 (notably, the statement of the theorem in the first sentence doesn't
 rule out the possibility of a complete but unsound theory).  The article
 also includes both incompleteness theorems as stated in Gödel's original
 paper which, taken out of context, are utterly opaque to the ordinary
 reader.  Aatu's informative section on Gentzen's theorem has been elided
 in favor of a link to an article written, apparently, by someone else.
 And, bizarrely, there is now a paragraph mentioning an extension of the
 first theorem to "a particular paraconsistent logic" -- as if that is
 essential information.
 It appears to me that someone has simply taken on the mantle of Owner
 and Guardian of the article and is vigorously monitoring and
 "correcting" its content.  This could work if the person in question
 were a genuine expert, but the O&G in this case, unfortunately, appears
 to be only marginally competent.  This sort of thing really does make
 Wikipedia a terribly unreliable and unsystematic source of information.

207.172.220.155 (talk) 07:22, 12 August 2008 (UTC)[reply]


If Aatu or Menzel would like to help write the article, they would be very welcome. In the meantime, their criticism will be considered when the article is edited.
I tend to agree that Goedel's original statement shouldn't be included. The remainder of the complaint seems to be more a matter of personal preference - to include a longer discussion of Gentzen's result in this article, and not to include the one sentence on Hewitt's result.
It's hard to see what Menzel means by "The current statement of the first theorem is now both wordy and inaccurate (notably, the statement of the theorem in the first sentence doesn't rule out the possibility of a complete but unsound theory)" - the statement explicitly says the theory must be consistent. Moreover, it's hard to see how to make the statement much shorter. — Carl (CBM · talk) 11:31, 12 August 2008 (UTC)[reply]
I think the point about the "complete but unsound" part is that the current statement doesn't exclude the following situation: There is a true statement that the theory doesn't prove, because the theory proves, rather, the (false) negation of the statement. This is a fair point; the theorem does exclude that situation, and we don't currently say that it does. I don't think it's extremely important to the foundational impact of the theorem (who wants to found mathematics on a theory that proves falsehoods?) but it is strictly speaking part of the theorem.
Most of the discussion around this point has been objections to calling the sentence true, which stem mostly from misunderstandings (you can certainly take the view that the sentence doesn't have a truth value, but it's hard to see then how you would assign a truth value to the hypotheses). This is an entirely different point and I'm not entirely sure how to respond to it. It would be nice to keep the point that the sentence is true, precisely as a counter to the muddled presentations by the popularizers. --Trovatore (talk) 17:36, 12 August 2008 (UTC)[reply]
What do you make of the sentence "That is, any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete."? It may be that the "that is" is not quite the right choice of words, since the quoted sentence is strictly speaking stronger than the one before it. But the quoted sentence does seem to exclude the possibility that the theory is complete... Maybe we can come up with a better overall phrasing of the theorem; that would be a positive outcome to this discussion. — Carl (CBM · talk) 01:54, 13 August 2008 (UTC)[reply]
Hmm, maybe. Maybe just drop "that is" and append the sentence? --Trovatore (talk) 02:17, 14 August 2008 (UTC)[reply]
I'd refer Mr. Menzel (assuming he deigns to read Wikipedia at all) to WP:SOFIXIT.Zero sharp (talk) 15:00, 12 August 2008 (UTC)[reply]
Please remember folks: this article is not for academic mathematicians -- you folks already know this stuff -- it's for the rest of humanity. (Carl: you and I have been around the original-statement issue once before, I believe. Forgive me while I repeat myself ...). I inserted Goedel's original statements of his theorems only because when I came to this article as a novice in search of some illumination, I found, as I compared his paper (in The Undecidable), that "the first and second" incompleteness theorems are anything but. I had to actually (sort of) read the paper as best as I could to even figure out which theorems are the supposed "first and second theorems". Maybe common academic parlance calls them this, but I want the article to clarify the fact that for someone confronting the papers, they will need to understand that the paper consists of a sequence of theorems after a rather clear but extremely terse presentation of axioms and then a lengthy and scary sequence of definitions with frightening names. (But not so scary and frightening once they've been explained ....) The theorems' opaqueness is illuminating; their original wording and their subtlety do serve as a caution to the newbie/novice that there's a lot going on here. Perhaps as stated by Goedel the theorems could appear in footnotes -- footnotes can contain expansion material, not just references. I'll reiterate that it continues to drive me (a non-academic) and others nuts (see Talk:Busy beaver ) that the really important mathematics Wikipedia articles seem to be written by academics and grad students writing mostly for themselves, at their level of understanding, and they are writing for no one else. As far as I'm concerned most of the value in this article as it is currently written is to be found in the references and my clarification that if you go hunting in the original for a "first" and a "second" incompleteness theorem you're going to be disappointed. The whole article could be about 40 words: "This stuff is hard. You will need some background knowledge. For a good introduction read Nagel and Newman's book. Everybody else: read the original as presented best in van Heijenoort. The meanings of the strange (to English) abbreviations can be found in Meltzer and Braithwaite" Bill Wvbailey (talk) 17:27, 12 August 2008 (UTC)[reply]
Will, are you seriously arguing that the most approachable presentation for non-academics is one written in Gödel's original notation??? --Trovatore (talk) 02:19, 14 August 2008 (UTC)[reply]
No, of course not . . . an example of Goedel's (awesomely-difficult) original notation is simply a sampler of the true "taste" of the paper (not unlike the taste of a very difficult Scotch whiskey (e.g. Lagavulin) . . . wow! whew!). For those who don't have a cc of the original paper, the original notation represents a (cautionary, challenging, fun) taste/flavor of what the original contains. And that is the point! That's part of the "educational experience"! With regard to a superb explication in Wikipedia I propose that really talented editor/s (you, Carl, etc) can craft a presentation that a kid in US high-school senior math could grab. That's a hell of a challenge, I realize ... but I think after a careful presentation of the notion of "Peano axioms" and "[primitive] recursion" and "Goedel numbering" and "diagonalization" such an explication is possible. Bill Wvbailey (talk) 02:56, 14 August 2008 (UTC)[reply]
A qualification to the above: My experience with difficult subjects is that, more often than not, the author's original explication/presentation is the best, but it needs to be supplemented with a modern, simplifying 'trot' -- an "Illustrated Classics" and/or a "Cliff's Notes". But this idea is complicated by the fact that Goedel evolved a simpler presentation. His IAS (Princeton 1934) lectures modified, and simplified, his proofs for his English-speaking audience. (For example he reduces his 46 definitions to 17, removes the scary German abbreviations, and actually defines what the 17 mean, he changes his Goedelization method, etc) There are still a lot of spooky symbols and the typography in The Undecidable is off-putting, but here Goedel quickly and clearly defines a "formal system", the notion of "completeness", expands the relationship of his proof to the Epimenides (Liar) paradox, etc. Eventually (in his June 3, 1964 "Postscriptum" to the IAS lectures) he embraced the "Turing machine": "Turing's work gives an analysis of the concept of "mechanical procedure" (alias "algorithm" or "computational procedure" or "finite combinatorial procedure"). This concept is shown to be equivalent with that of a "Turing machine". A formal system can simply be defined to be any mechanical procedure for producing formulas, called provable formulas. . . . Moreover, the two incompleteness results mentioned . . . can now be proved in the definitive form . . . 'There is no algorithm for deciding relations in which both + and x occur'" (p. 72). It would be very interesting to see how, in 2008, Goedel would present, and explain, his proofs. Bill Wvbailey (talk) 11:46, 14 August 2008 (UTC)[reply]


I rearranged the statement of the first theorem, which removed the "that is" and made it more clear which statement is more general. I'm planning to work on the article in a little more detail over the next couple days. — Carl (CBM · talk) 12:55, 18 August 2008 (UTC)[reply]

A very simple paraconsistent proof

There is a very simple paraconsistent proof in:

http://hewitt-seminars.blogspot.com/2008/04/common-sense-for-concurrency-and-strong.html

--67.180.248.94 (talk) 06:05, 14 August 2008 (UTC)[reply]

I removed the Hewitt paraconsistency stuff from the article on the grounds that it is cited to a non-peer-reviewed source and that Carl Hewitt is banned from inserting references to his own work on Wikipedia. It was probably inserted by Hewitt. See further discussion at:

76.197.56.242 (talk) 02:44, 17 August 2008 (UTC)[reply]

Poking around the Web, it turns out that the results were published in a refereed journal:

  • Hewitt, Carl (2008). "Large-scale Organizational Computing requires Unstratified Reflection and Strong Paraconsistency". In Jaime Sichman, Pablo Noriega, Julian Padget and Sascha Ossowski (eds.), Coordination, Organizations, Institutions, and Norms in Agent Systems III. Springer-Verlag.

--67.169.145.52 (talk) 03:30, 17 August 2008 (UTC)[reply]

  1. That looks like a conference proceedings, not a refereed journal. See the comments at the "Direct logic" AfD linked above for discussion of this point as applied to this topic.
Unfortunately, you are incorrect, it was refereed.--98.97.104.6 (talk) 18:51, 17 August 2008 (UTC)[reply]
  2. Even if it's in a journal, the linked article appears to have no relevance to Gödel's incompleteness theorems, which are theorems of classical logic.
Unfortunately, you are incorrect. Incompleteness theorems also apply to non-classical logics.--98.97.104.6 (talk) 18:51, 17 August 2008 (UTC)[reply]
  3. It also appears to have very limited notability in that Google Scholar finds no citations to it of any sort. If some appear in the future, we can consider it for Wikipedia after that happens.
Unfortunately, Google Scholar tends to be both spotty and tardy in its indexing. Consequently, it doesn't indicate much of anything in academic work. On the other hand, the regular Google index has lots of references to the work.--98.97.104.6 (talk) 18:51, 17 August 2008 (UTC)[reply]

So, based on both verifiability and WP:Undue weight, I don't think the reference should be restored. 76.197.56.242 (talk) 07:32, 17 August 2008 (UTC)[reply]

The paper has become quite famous. The results have been presented in lectures at AAMAS'07, Edinburgh, Stanford (on at least 3 occasions), AAAI (twice), and no doubt others.--98.97.104.6 (talk) 18:51, 17 August 2008 (UTC)[reply]

My general opinion is that the proof warrants a very short exposition here - shorter than what was removed. Since the goal is only to give a pointer to the paper, not explain it at all, it's OK if we leave the text here somewhat vague. — Carl (CBM · talk) 12:53, 18 August 2008 (UTC)[reply]

Yes, shorter is better. What was removed is the following:

Recent work by Carl Hewitt[1] extends Gödel's argument to show that, in a particular paraconsistent logic (that is outside the restrictions on self-reference of Gödel's framework[2]), sufficiently strong paraconsistent theories are able to prove their own Gödel sentences, and are thus inconsistent about the provability of the Gödel sentence (as well as being incomplete). Here incomplete means that it is paraconsistently provable that neither the Gödel sentence nor its negation can be proved. In this context, questions about the truth of the Gödel sentence do not make any sense.

The key points seem to be the following:

  1. A very short intuitive paraconsistent formal proof of both incompleteness and inconsistency for this kind of reflective logic.
  2. The thesis that the whole notion of "truth" is out the window for reflective logic of the kind envisioned by Feferman.

Would you like to try your hand at drafting something shorter?

--12.155.21.10 (talk) 14:44, 18 August 2008 (UTC)[reply]

Yes, I'll do that later, along with the section on Boolos' proof. — Carl (CBM · talk) 14:58, 18 August 2008 (UTC)[reply]
I added a very short summary. I'm sure the wording can be improved, but I don't think more detail is necessary. — Carl (CBM · talk) 16:46, 20 August 2008 (UTC)[reply]

This is a good contribution. However, this business about the Godel sentence being "true" is still a bit mysterious. The article states:

for every theory T satisfying the hypotheses, it is a theorem of Peano Arithmetic that Con(T) → G_T
where Con(T) is the natural formalization of the claim "T is consistent", and G_T is the Gödel sentence for T.

Consider the theory PA+Con that is Peano Arithmetic + the axiom Con(PA+Con) that PA+Con is consistent. Then, from the above,

it is a theorem of Peano Arithmetic that Con(PA+Con) → G_(PA+Con)

therefore since PA+Con is an extension of PA:

it is a theorem of PA+Con that Con(PA+Con) → G_(PA+Con)

Now, since Con(PA+Con) is an axiom of PA+Con, it kind of looks like PA+Con can prove its own Gödel sentence.--216.1.176.121 (talk) 21:56, 20 August 2008 (UTC)[reply]

Writing T for your PA+Con, you seem to want to make Con(T) an axiom of T. How are you going to do that, exactly? In the usual way of treating these things, you don't know what Con(T) is until you've already fixed T, so you can't really make it an axiom of T. --Trovatore (talk) 02:23, 21 August 2008 (UTC)[reply]
Con(PA+Con) says that an inconsistency cannot be derived in PA+Con even using the proposition that PA+Con is consistent. So PA+Con is fixed.--12.155.21.10 (talk) 15:49, 21 August 2008 (UTC)[reply]
Your use of "Con" is confusing me. Starting with PA, we may form G_PA. Let's say R is the theory PA + G_PA. This theory does not include the axiom G_R. If you did somehow construct a theory which did include its own Gödel sentence, that would mean you had constructed an inconsistent theory. But that won't be the particular theory R.
It may be easier to see why PA proves Con(T) → G_T if you look at it in contrapositive form: if G_T fails, that means Bew(G_T) holds, so we want to prove Bew(G_T) implies ~Con(T). This is exactly what is obtained by formalizing the proof of the incompleteness theorem in PA, replacing "provable" with Bew throughout. — Carl (CBM · talk) 16:34, 21 August 2008 (UTC)[reply]
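For readers who want that contrapositive spelled out, here is a rough sketch in the usual notation (this assumes T extends PA, or at least satisfies the standard derivability conditions; Bew_T is T's provability predicate, written Bew just above):

 \mathrm{PA} \vdash \lnot G_T \to \mathrm{Bew}_T(\ulcorner G_T \urcorner)    % the diagonal equivalence for G_T
 \mathrm{PA} \vdash \mathrm{Bew}_T(\ulcorner G_T \urcorner) \to \mathrm{Bew}_T(\ulcorner \mathrm{Bew}_T(\ulcorner G_T \urcorner) \urcorner)    % provable \Sigma^0_1-completeness
 \mathrm{PA} \vdash \mathrm{Bew}_T(\ulcorner \mathrm{Bew}_T(\ulcorner G_T \urcorner) \urcorner) \to \mathrm{Bew}_T(\ulcorner \lnot G_T \urcorner)    % the diagonal equivalence, formalized inside T
 \mathrm{PA} \vdash \mathrm{Bew}_T(\ulcorner G_T \urcorner) \land \mathrm{Bew}_T(\ulcorner \lnot G_T \urcorner) \to \lnot \mathrm{Con}(T)

Chaining these gives PA ⊢ ¬G_T → ¬Con(T), which is the contrapositive of Con(T) → G_T described above.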
Since "it is a theorem of PA+Con that Con(PA+Con)→GPA+Con" and "Con(PA+Con) is an axiom of PA+Con", PA+Con proves its own Gödel sentence and is indeed inconsistent.--98.210.236.229 (talk) 23:32, 22 August 2008 (UTC)[reply]
As Trovatore and I have pointed out, it's not clear at all what the theory you call "PA + Con" is supposed to be. It certainly isn't PA + G_PA. What is it, exactly? — Carl (CBM · talk) 23:38, 22 August 2008 (UTC)[reply]
PA+Con is defined by a fixed point construction in much the same way as the Gödelian sentence. It basically is PA plus the axiom "I am consistent" analogous to G_PA (I am not provable in PA). Thus you are correct that PA+Con is not the same as PA + G_PA --67.169.145.32 (talk) 06:01, 23 August 2008 (UTC)[reply]
So it is true that the process of adjoining Con(T) to an r.e. theory T has a fixed point. As an example, just let every sentence in the language of arithmetic be an axiom of T. This is an r.e. theory, and you can formulate Con(T) in some standard way based on the (simple!) algorithm that generates its axioms, and you will find out, in the end, that Con(T) was in fact an axiom of T all along.
But of course this T is inconsistent, and so is every fixed point of the process.
By the way, you can't get to such a fixed point via iteration starting with PA. PA is consistent, and so is PA+Con(PA), and so is PA+Con(PA)+Con(PA+Con(PA)), and so on, through not only all finite iterations, but into the transfinite, as far as you can actually make sense of it. Therefore none of these theories contains its own consistency as an axiom. --Trovatore (talk) 07:34, 23 August 2008 (UTC)[reply]
So this is quite interesting:
If a theory T has an axiom that it is consistent, then T is inconsistent!
--67.169.9.72 (talk) 22:52, 23 August 2008 (UTC)[reply]
Yes. This follows easily from the second incompleteness theorem. --Trovatore (talk) 03:15, 24 August 2008 (UTC)[reply]
Con(PA+Con) is intuitively true. But adding it as an axiom classically causes the wheels to come off. So we might as well throw in the towel (i.e. admit the inherent inconsistency) and go over to paraconsistent logic.--67.169.9.72 (talk) 13:05, 24 August 2008 (UTC)[reply]
That's one way of looking at it, but certainly not the "standard" one. Apart from Hilbert's program, what's the reason to expect that a formal theory of arithmetic will prove its own consistency? Experience has shown there's no real difficulty for mathematicians if the theories they use to study arithmetic, set theory, etc. don't prove their own consistency, since the consistency can be studied by other means, and the questions studied within the theories are typically mathematical rather than metamathematical. — Carl (CBM · talk) 16:58, 24 August 2008 (UTC)[reply]
Look, you can't "add Con(PA+Con)", because there's no such thing as PA+Con. You haven't told us, in any useful way, what it's supposed to be in the first place. There are indeed fixed points of the process of adding a theory's consistency to the theory; I gave an example, but it's not in any reasonable way describable as starting with PA and adding the axiom "I am consistent". --Trovatore (talk) 19:14, 24 August 2008 (UTC)[reply]

I reworded the footnote and added references throughout. Please feel free to keep rewording it if I've missed any subtle points. — Carl (CBM · talk) 20:12, 21 August 2008 (UTC)[reply]

I think that's nicely done—it's clearer and more direct than my original wording, which was more aimed at being a bulletproof response to the quibblers than at explaining what was really going on. Referencing Torkel is also a nice touch. --Trovatore (talk) 20:17, 21 August 2008 (UTC)[reply]

Moved 'implications' section to the end

I moved the section 'Discussion and implications' to the end, after the sections that discuss the proofs, etc. I think that attempts to apply G.I.T. outside of mathematics, mathematical logic and philosophy of mathematics are usually borderline nonsense, but that is my opinion; irrespective of this, they are relevant and notable w.r.t. the discussion of the theorems and _do_ belong in this article. But I think they belong at the end, after the more germane (again, this is my opinion (not _just_ mine though)) topics have been covered. Zero sharp (talk) 03:16, 18 August 2008 (UTC)[reply]

While we're at it, it would be nice to tighten up the referencing in that section. There are several half-done footnotes that should be replaced by complete references. Does anyone have an interest in that area? — Carl (CBM · talk) 12:52, 18 August 2008 (UTC)[reply]

Boolos' short proof

Later today, once I obtain the paper from the library again, I am going to severely trim the section on "Boolos' short proof". When I looked at his paper before, I found Boolos' proof somewhat cute because it uses Berry's paradox instead of the liar paradox, and I think it's worth a mention here.

On the other hand, I don't think that it warrants a long section here - just one paragraph would be fine. The current text is also confused about the proof, possibly because Boolos wrote it up in a short sketch suitable for experts. The current text claims that the proof doesn't mention recursive functions, but does mention algorithms (!); that the proof uses Goedel numbers but assumes no number theory (!); and has other similar problems. A complete exposition of Boolos' proof, which carefully proved all the facts in such a way to make it clear which theories it applies to, would have about the same level of technical detail as any other modern proof of the first incompleteness theorem. — Carl (CBM · talk) 12:43, 18 August 2008 (UTC)[reply]

section on meaning of the first incompleteness theorem

I moved this section earlier in the article. It is written in a very accessible, friendly way, unlike our section on the second theorem. I think that the "meaning" section complements the first theorem better, and moving it up will encourage readers not to give up when they reach the sec. on the second theorem (assuming they read an initial segment of the article).

The "meaning" section does need copyediting for tone - it has a lot of "we" and "you". The section on the second incompleteness theorem needs editing for structure - perhaps some subsections. — Carl (CBM · talk) 14:08, 18 August 2008 (UTC)[reply]

I also think the section on limitations of Goedel's theorems could be merged into the "meaning" section. — Carl (CBM · talk) 14:12, 18 August 2008 (UTC)[reply]

Good work, CBM et al. I think the article is beginning to anticipate and address the difficulties that laymen like me have in trying to understand this work (like Brian Greene manages to do so well in his popular books on physics). --Vibritannia (talk) 16:17, 22 August 2008 (UTC)[reply]

Testing the waters

Just checking to see if you folks are Ok with a computational proof now.Likebox (talk) 20:56, 24 September 2008 (UTC)[reply]

NO! —Preceding unsigned comment added by 71.139.24.118 (talk) 02:04, 25 September 2008 (UTC)[reply]
What has changed since the last time this was discussed? The (long) discussion starts at Talk:Gödel's_incompleteness_theorems/Archive03#New_Proof and continues for the rest of Archive03. There was even an RFC where the "computational proof" was rejected. — Carl (CBM · talk) 21:08, 24 September 2008 (UTC)[reply]
That is my recollection as well; Zero sharp (talk) 22:07, 24 September 2008 (UTC)[reply]

Error in new proof

The new proof also has problems with correctness, such as:

Write the computer program ROSSER to print its own code into a variable R and look for either 1.R prints to the screen or 2.R does not print to the screen. If ROSSER finds 2, it prints "hello world" to the screen and halts, while if it finds 1, it halts without printing anything.

S can prove neither ROSSER prints to the screen nor the negation ROSSER does not print to the screen. So S is necessarily incomplete.

This does not prove the incompleteness theorem without the assumption that when S proves that "R prints to the screen", this actually means there is a computation in which R prints to the screen. The statement "R prints to the screen" is a Σ^0_1 statement, and so it is still possible for S to be consistent but not ω-consistent, in which case S may well prove Σ^0_1 statements such as this without there being any actual natural number that satisfies them. That is, S may prove there is a computation history which ends in R printing to the screen, but it may be that the only such histories are encoded by nonstandard natural numbers. Thus it will be necessary to assume S is ω-consistent in order to actually obtain a contradiction. — Carl (CBM · talk) 21:17, 24 September 2008 (UTC)[reply]

As I recall, I thought Ron's proof through here, and it did turn out to be correct, though it required a subtle observation that he didn't bother to mention. I don't remember the details at the moment and don't have time to think about it right now. --Trovatore (talk) 21:22, 24 September 2008 (UTC)[reply]
If you're thinking of the argument I think you are, there's not nearly enough explanation there to follow it. That is, you may be thinking that if S proves that R prints something, it means that S proves that (S proves R does not print anything). Also, S will prove (S proves R does print something). Thus S will prove (S proves a contradiction). That leaves an argument to be made that S actually does prove a contradiction, especially since we all know that there are consistent theories that prove Pvbl(0=1). — Carl (CBM · talk) 21:41, 24 September 2008 (UTC)[reply]
I think the point was something like, if S proves that R prints, or if it proves that it does not print, then R must in fact halt (because it would find the proof), and S must prove that R halts (because S is strong enough to find all proofs of halting). Therefore S must prove both that R prints and that it doesn't print. Or something like that. --Trovatore (talk) 21:49, 24 September 2008 (UTC)[reply]
I'm sorry for being so brusque. My overall concern isn't whether the argument is "fatally flawed" - I'll assume, by way of good faith, that it would be possible to patch together a convincing proof using essentially the method provided, even if the current wording isn't there yet. My first concern is that the text provided is not even clear to people who do have a good understanding of what's going on - so it will be incomprehensible to people who don't already have a good understanding. My second concern is that it seems that the only way to actually draw a contradiction from this type of argument is to be more precise about the arithmetization, and this undercuts the general argument that the "computational proof" is clearer because it avoids arithmetization. My third concern is that, since this argument is not presented in texts, we have nowhere to send people who ask "where can I read a full explanation of this argument"? — Carl (CBM · talk) 22:38, 24 September 2008 (UTC)[reply]
I'm certainly not saying that Likebox's proof should be included as is (or even, necessarily, at all). It's true that it really ought to be sourced, and I also agree that Ron is not paying enough attention to the need to connect the result with arithmetic (which after all is where the question comes from).
I do think though that it's kind of a nice way to think about it, once you make your way through the land mines. Here's the completion to the argument. Suppose S proves or refutes the claim that R prints. Then R must in fact halt. When it halts, it is either on a proof from S that R prints or a proof from S that R doesn't print, but then it does the opposite. S is then strong enough to prove that R follows that finite sequence of steps and does that opposite thing. Thus in either case, S proves both that R prints and that it doesn't print. --Trovatore (talk) 00:03, 25 September 2008 (UTC)[reply]
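For readers who think in code, the construction just walked through can be sketched in Python roughly as follows; the theorem enumerator and the self-reference step are hypothetical stubs, not a real implementation:

 # A minimal sketch of the R / ROSSER program described above.
 def theorems_of_S():
     """Hypothetical enumerator of the theorems of the theory S (stubbed out)."""
     return iter(())

 def rosser(own_source):
     # `own_source` stands for the code of rosser itself, which a real version
     # would obtain by a quine trick / the recursion theorem.
     prints_claim = f"the program {own_source!r} prints to the screen"
     no_prints_claim = f"the program {own_source!r} does not print to the screen"
     for theorem in theorems_of_S():          # search S's theorems about ourselves
         if theorem == no_prints_claim:
             print("hello world")             # do the opposite of what S proved
             return
         if theorem == prints_claim:
             return                           # halt silently: again the opposite

If S proved either claim about rosser, the search would find that proof and the program would then falsify it; since S can verify the finite run, S would end up proving both claims, which is the inconsistency described above.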
Thanks, I see. For some reason when I was reading between the lines earlier I was reading a different proof into it. By the way, I thought we ought to have an article on Rosser's trick. — Carl (CBM · talk) 00:28, 25 September 2008 (UTC)[reply]

Look, the reason I put it in is because I am still sure that this is the clearest proof for a young person who knows how to program, but doesn't care about the useless gobbledegook that logicians traditionally surround Godel's theorem with. The reason your complaint is not a mistake is explained by Trovatore, and it is something that must be understood to understand Godel's theorem. An axiom system to which Godel's theorem applies proves what the finite time behavior of a computer program is. It will figure out what the final state is for any algorithm that halts. The reason I am checking again is that I figured that once people figured out I was right, they would change their minds.Likebox (talk) 02:46, 25 September 2008 (UTC)[reply]

What you put in wasn't a proof at all - it was a claim that a certain construction could be turned into a proof. Please see my comment of 22:38. — Carl (CBM · talk) 02:51, 25 September 2008 (UTC)[reply]
That depends on the reader. It is so straightforward to turn the construction into a proof that I left it up to the reader. The reason I keep harping on this is that we need to really put simple proofs for the simple stuff so that the more complicated stuff gets clear. I don't want to wait a hundred years when we can do it today. If you don't want to help, don't help, but please don't stand in the way.Likebox (talk) 03:11, 25 September 2008 (UTC)[reply]
It's not at all "so straightforward" to turn it into a proof, particularly when it's written in a way that obfuscates what's going on (e.g. the switch from "halts" to "prints" only adds complexity, as does the use of the recursion theorem). As has been said before, during the RFC: if you want to change the entire field of recursion theory, you should start by publishing a book, rather than trying to push your views here. Wikipedia follows the lead of professional publications, rather than breaking new ground. — Carl (CBM · talk) 03:36, 25 September 2008 (UTC)[reply]
In particular, I would specifically like to see a source provided for any mathematical logic text that proves the incompleteness theorem using the recursion theorem (that is, using programs that begin by using their own source code as the input to some process). — Carl (CBM · talk) 03:56, 25 September 2008 (UTC)[reply]
Look, you know very well that you're not going to find a published version which presents this proof verbatim. But you also know very well that it is correct and that it is not original research. It's one of those countless things in that grey area where the results and methods are well known, but nobody has the patience to write up and publish (certainly not me). I am asking you to use your judgement to decide whether the article is improved.Likebox (talk) 04:31, 25 September 2008 (UTC)[reply]
My judgment is that the article is not improved; that is also what the RFC said. But I'm not asking you to rely on my judgment. If this isn't original research, it has been published at least "in spirit" somewhere else. Where? I don't know of any published text that thought that this method of proof was the right way to present the incompleteness theorems - and they are a standard topic in mathematical logic texts. — Carl (CBM · talk) 04:35, 25 September 2008 (UTC)[reply]
If you follow the argument, you can see that it is logically equivalent to Godel's original article, except that it talks about computers, which didn't exist then. It's also the exact same proof in recursion theory textbooks replacing "Kleene's fixed point theorem" with "print your own code" and replacing "phi of phi of x" with a description of what the code does. Look, I simplified it by removing the nonsense notation and by folding in the proof of the undecidability of halting, but that's it. If you can't accept these minor adjustments, I think that the edits you make to this page are going to be counterproductive.Likebox (talk) 04:42, 25 September 2008 (UTC)[reply]

>>These are FAR from "minor adjustments" and I take extreme exception to you characterizing them as such, and characterizing other editors' (particularly CBM's) repeated attempts to _EXPLAIN_ this to you as 'standing in the way' -- it's arrogance bordering on incivility. Zero sharp (talk) 13:54, 25 September 2008 (UTC)[reply]

I am trying to be civil. But I am also trying to convince you folks of something that I understand 100% clearly, and that I don't understand why you don't understand. This is frustrating at times. I am sorry if I was rude.Likebox (talk)
Except that texts in recursion theory don't use the recursion theorem at all in proofs of the incompleteness theorem. They use the fact that the set of provable statements and the set of unprovable statements form a recursively inseparable pair of recursively enumerable sets. Can you point out a recursion theory text that does use the recursion theorem in the discussion of Goedel's theorem? — Carl (CBM · talk) 04:45, 25 September 2008 (UTC)[reply]
That's what I mean by "folding in". If you prove theorem A and then prove theorem B using theorem A, then you can fold in theorem A by not using it, and instead using the same method of proof. I am never going to cite any recursion theory text, because they are all sucky.Likebox (talk) 04:49, 25 September 2008 (UTC)[reply]
We've been through this before, at the RFC. Wikipedia isn't the place for you to pursue an agenda to change the field of recursion theory... What has changed since the RFC? — Carl (CBM · talk) 04:52, 25 September 2008 (UTC)[reply]
What's changed? Nothing. Including a) Likebox engaging in tendentious editing and b) Carl's unbounded patience in explaining why - for one - the proposed change does not improve the article, not just in his judgement, but in the judgement of other WP editors who participated in the RFC, and - for another - absent any publication, book, webpage, lecture note, cocktail napkin *showing* this use of the recursion theorem, it is Original Research. Period. Likebox, please, there are an uncountable infinity of other venues on the internet to present your work; I think it's been amply demonstrated TIME and TIME again that this article is NOT the place. Let it go. Zero sharp (talk) 13:50, 25 September 2008 (UTC)[reply]
The most important thing that's changed is that almost everyone now understands that the proof is correct. If it fails now, maybe next time you will understand that it is equivalent to Godel's original article.Likebox (talk) 19:27, 25 September 2008 (UTC)[reply]

>> " almost everyone now understands that the proof is correct." -- O_RLY? Can you point to one _shred_ of evidence to back up that claim? Zero sharp (talk) 00:07, 26 September 2008 (UTC)[reply]

I meant that both Trovatore and CBM now understand that it is correct, as you can see by reading the previous comments. CBM was confused about omega consistency for a little bit (to tell the truth, so was I at first, but then again, so was Godel). They might not agree that it should be included, but that's a different story.Likebox (talk) 05:06, 26 September 2008 (UTC)[reply]
This proof is so not original research that if you wrote it up and tried to publish it you would be laughed at! Everyone who understands Godel's theorem has some version of this construction in the back of their heads. I will return to this subject periodically (once a year, so as not to be intrusive) until it is accepted or someone comes up with a better way of writing this article.Likebox (talk) 19:33, 25 September 2008 (UTC)[reply]

>> Then why are you unable (or unwilling) to produce ONE SINGLE SOURCE OR REFERENCE for this formulation if it's so pervasive, universal and 'in the back of everyone's head' (sad to say, the back of everyone's head in no way meets criteria for verifiable sources in wikipedia). Zero sharp (talk) 00:07, 26 September 2008 (UTC)[reply]

Dude, there is no source, but this is mathematics. None of the proofs that appear on Wikipedia are properly sourced. If they are wrong, people just fix them.Likebox (talk) 05:06, 26 September 2008 (UTC)[reply]

3rd Opinion

I am requesting a WP:3O, specifically on Likebox's insistence on his changes to the article, despite repeated attempts by multiple editors to explain to them that, irrespective of their correctness, they are out of place in this article. The technical objections have been amply explained, but clearly this editor needs to understand how WP is supposed to work as regards consensus, no tendentious editing, and civility. Zero sharp (talk) 13:59, 25 September 2008 (UTC)[reply]

Actually, I think civility is not an issue here (other than that some effort is needed to stay civil when arguing with Likebox – CBM is spectacularly good at that), and tendentious editing is a bit of a red herring. Let's try to stay focused. --Hans Adler (talk) 14:20, 25 September 2008 (UTC)[reply]
I am viewing the problem here as a non-mathematician. This proof is so difficult that I cannot even begin to support use of a confounding alternate proof (or proofs) from e.g. Likebox or anyone else. Since the early 1980's -- when I first became aware of Goedel's proof via The Undecidable -- I've been alternately fascinated, intimidated and challenged by it. It's taken me 20 years to understand its gist, but I still can't reproduce its details. Nagel and Newman's 1958 book was mandatory (bought at the same time as The Undecidable -- 1984). Due to the way I learn I like to see original papers. Given the difficulty of Goedel's paper, what I really need is a good "trot" -- a side-by-side description of what's going on (e.g. Goedel on the left page and a description of what's going on on the right). Because of Wikipedia's restrictive format this unfortunately is not possible. But recently CBM et al. made a valiant effort to provide something like this in section 7. To repeat, I just want to see Goedel's proof explained -- not someone else's proof (okay to survey the published alternates, but I'd move to the end). [In that regard I do not agree with some of what was written above re omitting the notion of primitive recursion in the explanation of the proofs. Theorem V appears to be the nut of the whole thing, and Goedel only provided a skeleton-proof of it. And re Rosser's trick: to be able to read Rosser's 1936 in the original, the reader needs to understand what's going on in Theorem V, IMHO.] Bill Wvbailey (talk) 16:05, 25 September 2008 (UTC)[reply]
I am trying to say that if you have patience and think things through, you can see that the one sentence constructions that I gave here are exactly this gloss you are asking for. If you keep the construction in mind, that's all you need to follow Godel's original paper with no further additions. The reason Godel's paper is complicated is that he is constructing a very difficult computer program--- the program which performs logical deductions on a given axiomatic system--- in the language of arithmetic using recursive functions as his programming language. If you know modern computer languages, you can see what is going on immediately, and also that this is going to be impossible for anybody else to follow. It requires many arbitrary choices (arithmetization--- Godel's ascii code, choice of subroutines--- his text manipulation functions, etc).
Once he constructs this program (partially-- he never completes the deduction algorithm in his original paper--- but it is clear he could. He promised a second paper which would complete the proof, but it wasn't necessary; people understood that it could be done), he uses it to construct a sentence which is self contradictory. But he uses a trick there to make the sentence self-referential, a trick which is equivalent to printing your own code.
The logical core of the proof is that given a program which deduces the logical consequences of axioms, there is a simple manipulation which produces a Godel sentence, involving printing your own code. The construction I gave just performs the construction explicitly in the clearest way I could manage, and this involves phrasing the Godel sentence as "Godel does not halt", where Godel is described in the article. It follows Godel's paper exactly, and so reproduces the problem Godel had with omega-consistency. The Rosser construction here takes more liberties with Rosser's method, which was more complicated. Rosser used a slightly more nested construction to come up with a contradiction, but it is essentially the same idea.Likebox (talk) 19:20, 25 September 2008 (UTC)[reply]

Proposing a different approach for the computability section

After some reflection last night, I think that a large flaw in the existing computability section is that it is narrowly focused on one proof (the one sourced to Kleene), rather than the broader relationship between computability and incompleteness. So I propose that instead of giving any long proof sketches in that section, we briefly discuss several different ways that computability and incompleteness interact. This version is my first attempt at this. I think that this will be more informative to a reader (who is not assumed to have a background in computability theory) than a more technical discussion would. What do you think? — Carl (CBM · talk) 15:25, 25 September 2008 (UTC)[reply]

This is the same road we travelled before. Nobody finished the proof here, and I don't think anyone ever will. It's just too pointless today--- who's going to precisely specify "Bew" (the deduction algorithm) in recursive functions? Why not include a complete proof? Anyone can follow it (despite the claims here), and it is equivalent to Godel's original approach (not quite to Kleene's--- but the rephrasing in terms of computer programs is Kleene's), in modern language.Likebox (talk) 19:24, 25 September 2008 (UTC)[reply]
Which proof is it that nobody finished? — Carl (CBM · talk) 20:12, 25 September 2008 (UTC)[reply]
Sorry--- I meant "Proof sketch for the first incompleteness theorem".Likebox (talk) 20:39, 25 September 2008 (UTC)[reply]
The thing that arguably isn't finished there is just the construction of "Bew" (to explicitly make it you have to work very hard, and to make it obvious that it is possible to make it you need to talk about universal computation in arithmetic, and that gets you back to the computational stuff).Likebox (talk) 20:42, 25 September 2008 (UTC)[reply]
On the other hand, to make it obvious that Kleene's T predicate is actually represented by an arithmetical formula, as would be necessary to flesh out the computational proofs, you have to work equally hard. Depending on the specific model of computation, it may still be necessary to formalize enough provability to allow the computations to operate on proofs. The arithmetization is always going to be present in some form or another. — Carl (CBM · talk) 21:01, 25 September 2008 (UTC)[reply]
True, but that is so intuitive for a computer person that it requires no explanation, because when you program a computer, most of your education is spent "fleshing out the T-predicate". Not so for logical deduction, which is a specialized industry.Likebox (talk) 21:33, 25 September 2008 (UTC)[reply]
I should elaborate because I think you are right in a deeper sense: for every foundational advance in mathematics there is a law of "conservation of hard work". There is no magic--- a new result in mathematics is important when it increases the complexity of mathematical thought, and there is a certain level of calculation complexity that one must work through to understand it. You can rephrase the proof, but you are never going to reduce the work below a certain level.
But what you can do is construct a general framework of simple results which are well accepted and which turn the initially complicated proof into a triviality. For example, the first proofs of the Jordan curve theorem were difficult, but in a general framework of algebraic topology they become simple (although the hard work is just shuffled into the elementary theorems). In the case of Godel's theorem, it led to the construction of an elaborate framework in the past 70 years, and this is called "computer science". If you have this framework, the difficulties in Godel's proof all vanish, because clarifying these difficulties gave birth to the field.
But at the same time, a parallel framework called "recursion theory" was constructed by academic logicians, and unlike computer science, this framework does not get everyday use. So it is very unfamiliar to the general public, and theorems and proofs phrased in this framework are obscure. This gives Godel's theorem a bad reputation for difficulty, and unlike other parts of mathematics, it doesn't get any easier with time.
Godel was the first to understand universal computation (he conjectured that general recursive functions were the most general computable functions), maybe not as well as we do today, but he knew this was what the main point of his work was, his most lasting contribution. Now that universal computation is sitting on my desk, it's silly not to use it. Using the language and elementary results of universal computation removes all the difficulties in Godel's proof, and makes it accessible to everybody.Likebox (talk) 21:50, 25 September 2008 (UTC)[reply]

Place deleted section here (so others can follow the discussion)

I thought that I found the right wording for the computability section this time (I feel it was much improved compared to last time), so I am placing it here for future reference, and so that others can follow the debate above. I can see that the time is not right for it.

For CBM--- the switch from "halts" to "prints" is unfortunate but necessary, because if you just have one bit (halts or doesn't halt) then you don't get the Rosser contradiction. Rosser managed to get his contradiction by nesting Godel constructions to get two alternatives and then making a sentence which asserts that one of the alternatives is realized. I tried to emphasize the freedom in his construction (he could have used any old property) by making it "prints to the screen". It could have been "sets variable froo to zero" or "calls subroutine Bew twice" (I think that's what Rosser did, I don't remember).Likebox (talk) 20:07, 25 September 2008 (UTC)[reply]

Computational formulation

Stephen Cole Kleene (1943) presented a proof of Gödel's incompleteness theorem using basic results of computability theory. A basic result of computability shows that the halting problem is unsolvable: there is no computer program that can correctly determine, given a program P as input, whether P eventually stops running or runs forever.

Suppose for contradiction that there exists a computer program HALT(P) which takes as input the code of an arbitrary program P and correctly returns halts or doesn't halt. Construct a program that prints its own code into a variable R, computes HALT(R), and if the answer is halts goes into an infinite loop, while if the answer is doesn't halt, it halts.

Kleene showed that the existence of a complete effective theory of arithmetic with certain consistency properties would decide the halting problem, which is a contradiction. If an axiomatic theory of arithmetic can be formulated which proves all true statements about the integers, it would eventually prove all true statements of the form "P does not halt" for all computer programs P. To determine whether P halts, run it; to determine whether it does not halt, deduce theorems from the axiomatic theory, looking for a proof that P does not halt. If the theory were complete, one of the two methods running in parallel would eventually produce the correct answer.
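A Python sketch of that dovetailing may make the argument concrete; the theorem enumerator is a hypothetical stub, and a program is modelled as an iterator whose exhaustion means it halts:

 # If S were complete (and its "P does not halt" theorems true), this procedure
 # would decide the halting problem, contradicting its unsolvability.
 _DONE = object()

 def theorems_of_S():
     """Hypothetical enumerator of the theorems of S (stubbed out)."""
     return iter(())

 def would_decide_halting(program_steps, program_name):
     theorems = theorems_of_S()
     while True:
         # One more step of the program: if it stops, it halts.
         if next(program_steps, _DONE) is _DONE:
             return "halts"
         # One more theorem of S: if S proves non-halting, report that.
         if next(theorems, None) == f"{program_name} does not halt":
             return "does not halt"

 # e.g. would_decide_halting(iter(range(3)), "P") returns "halts"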

Godel's theorem, following Kleene, can be restated: given a consistent computable axiomatic system S, one which can prove theorems about the memory contents of a Turing machine (a computer with potentially infinite memory), there are true theorems which S cannot prove.

Write the computer program GODEL to print its own code into a variable R, then deduce all consequences of S looking for the theorem R does not halt. If GODEL finds this theorem, it halts.
If S is consistent, GODEL definitely does not halt, because the only way GODEL can halt is if S proves that it doesn't. But if GODEL doesn't halt, that means that S does not prove it.
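A compact Python rendering of GODEL, using the same hypothetical theorem-enumerator stub as in the sketch above (with a real, non-terminating enumerator the loop simply never ends when the target theorem is never produced):

 def theorems_of_S():
     """Hypothetical enumerator of the theorems of S (stubbed out)."""
     return iter(())

 def godel(own_source):
     # `own_source` stands for GODEL's own code, obtained by a quine trick.
     target = f"the program {own_source!r} does not halt"
     for theorem in theorems_of_S():      # deduce all consequences of S
         if theorem == target:
             return                       # halt, contradicting the theorem just found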

This is Godel's original method, and it works to show that S has counterintuitive properties. But it does not quite work to show that there is a statement undecidable in S because it is still possible that S could prove GODEL halts without contradiction, even though this statement would necessarily be a lie. If S proves that GODEL halts, the fact that GODEL does not halt is not necessarily a contradiction within S, because S might never prove that GODEL does not halt. An S which proves false theorems of this sort is called sigma-one inconsistent.

Write the computer program ROSSER to print its own code into a variable R and look for either 1.R prints to the screen or 2.R does not print to the screen. If ROSSER finds 2, it prints "hello world" to the screen and halts, while if it finds 1, it halts without printing anything.
S can prove neither ROSSER prints to the screen nor the negation ROSSER does not print to the screen. So S is necessarily incomplete.

This is Rosser's trick, used to get around the limitation in Godel's original proof, and it completed the proof of the usual statement of the first incompleteness theorem. The precise assumptions used in the proof are:

  1. S will eventually prove any theorem of the form "A universal computer running for N steps will take memory state A to memory state B" where N and A are prespecified integers and B is the result of the computation. This is true in any theory which includes the fragment of Peano arithmetic without induction.
  2. S can state theorems about computer programs of the form "Forall N program P running for N steps does not produce a memory state M with the k-th bit zero" and conjunctions and disjunctions of these. These are statements with one quantifier in front.

The second assumption is needed to be able to state theorems of the form "R does not halt" in the language of S, while the first assumption ensures that if a program P halts after N steps leaving memory state M, S will eventually prove this statement.

To prove Godel's second incompleteness theorem, one must be able to formalize the proof of the first incompleteness theorem in S. If this can be done, then S cannot prove itself consistent, since from the assumption that S is consistent, it follows that GODEL does not halt. If S proved itself consistent, it would also prove that GODEL does not halt, which is impossible.
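Hedged in the notation of this section, the second-theorem argument runs roughly as follows, writing Halt(GODEL) for the arithmetized statement that GODEL halts:

 S \vdash \mathrm{Con}(S) \to \lnot \mathrm{Halt}(\mathrm{GODEL})    % the first incompleteness theorem, formalized inside S

So if S also proved Con(S), it would prove that GODEL does not halt; GODEL's search would then find that proof and halt, and by assumption 1 above S would also prove that GODEL halts, making S inconsistent. Hence a consistent S cannot prove Con(S).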

Godel's next major paper: Lengths of proofs

Shortly after Godel proved the incompleteness theorem, he proved another theorem which has a long history. It was published in 1936 with a proof that nobody could really understand, and it was considered of dubious validity for a long time. Credit for the theorem is therefore sometimes given to a much later paper by authors whose names I will not mention, who claimed that they fixed the "gaps" in the proof.

This proof appears in The Undecidable on p. 82. It was translated from "the original article in Ergebnisse eines mathematischen Kolloquiums, Heft 7 (1936) pp. 23-24, with the kind permission of the publishers, Franz Deuticke, Vienna" (ibid). This version appears with an added "remark added in proof [of the original German publication]:
"It may also be shown that a function which is computable in one of the systems Si or even in a system of transfinite type, is already computable in S1. Thus, the concept "computable" is in a certain definite sense "absolute", while practically all other familiar metamathematical concepts (e.g. provable, definable, etc) depend quite essentially on the system with respect to which they are defined" (p. 83).
Bill Wvbailey (talk) 22:07, 26 September 2008 (UTC)[reply]

The paper is extremely short, and it was a little difficult for me to follow because of the usual problems with Godel's work--- he understood computation at a time that nobody else did--- and his notion of computation is entirely tied down to the deduction method in formal logic. The reason I am bringing this up here is to explain how close the proof above is to Godel's original reasoning: if you formulate Godel's argument as above, you can immediately see what Godel was thinking in his next paper.

The theorem has three parts:

1. In any (consistent computer-describing) axiomatic system S, there are theorems of length N characters which have a proof whose length exceeds F(N), where F is any computable function of N.

Given a computable function F, construct NROSSER which prints its own code into the variable R, and looks through all proofs of length less than F(N), where N is the length of R. NROSSER looks for the theorem R does not print, and if it finds it, it prints and halts. If the bounded search finishes without finding it, NROSSER halts without printing.
Since S is consistent, NROSSER halts and does not print, and S will eventually prove it, but necessarily with a proof of length greater than F(N).
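A Python sketch of NROSSER for a given computable bound F; the bounded proof enumerator is a hypothetical stub yielding (proof, conclusion) pairs:

 def proofs_of_S_shorter_than(bound):
     """Hypothetical: all S-proofs of length < bound characters (stubbed out)."""
     return iter(())

 def F(n):
     return n ** 10          # any computable function will do; this one is arbitrary

 def nrosser(own_source):
     target = f"the program {own_source!r} does not print"
     bound = F(len(own_source))
     for proof, conclusion in proofs_of_S_shorter_than(bound):
         if conclusion == target:
             print("found it")        # print and halt, falsifying the short proof
             return
     # bounded search exhausted: halt without printing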

2. If you increase the strength of S by adding "consis S", the proofs of some statements become shorter by an amount which exceeds any computable function, meaning that for any computable function F, there exists a theorem provable in both systems whose old proof is so long that it is bigger than F(new proof length).

It is easy to prove from consis S that NROSSER does not print. This is a fixed length proof, depending only on the size of NROSSER, not on the value of N or on F.
(Rosser's trick came later than this paper. I don't remember the exact method in this paper, and whether it implicitly had a Rosser trick when turned into an algorithm. It is possible that Godel discovered a Rosser trick here, but didn't notice. I don't know.)

Godel also claimed

3. If you have any system S' which is computationally stronger than S (meaning there is a program P which S' proves does not halt which S does not) then there are proofs in S' which are shorter by an arbitrary computable function, in the same way as 2.

I convinced myself it could be done, but I forget how.
I remember now--- it's a little tricky. Here's the basic idea--- write a program TESTP which runs P for N steps, where N is huge, to see whether P halts. If TESTP finds that P halts, it prints "halts" and halts. If P did not halt after N steps, TESTP halts without printing anything.
The theorem "TESTP does not print" is provable in both S and S' (since P does not halt), but the proof in S' is trivial (of fixed size, independent of N), since S' proves that P does not halt. The proof in S, on the other hand, has to follow TESTP until it halts, which means the proof in S is much longer.
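
Under the same conventions, TESTP is barely more than a bounded simulation; run_for_steps(P, M), a simulator that returns True exactly when the program P halts within M steps, is again a hypothetical placeholder (in the real construction P and the huge bound M are hard-coded into TESTP).

 def TESTP(P, M):
     if run_for_steps(P, M):     # hypothetical bounded simulator
         print("halts")          # print and halt
     # otherwise halt without printing anything

S' proves "TESTP does not print" in essentially constant length by citing its theorem that P never halts, while S is stuck verifying all M steps of the simulation.
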
The trickiness comes in establishing the precise amount by which the proof in S is longer, and the corresponding step was the controversial point of Godel's proof. Godel's method was opaque (at least to me).
The way to do this (somewhat) clearly is as follows: you need to establish that if the system S does not prove that P does not halt, then it does not have uniform-length proofs that P does not halt after N steps; the proofs must get longer as N gets larger.
If you had a function SLength(N) which gives a lower bound on the length of the shortest proof in S that P does not halt after N steps, and if SLength(N) goes to infinity as N goes to infinity, then the theorem would (almost) be proved. Given SLength(N), write TESTP to run P for M steps, where M is chosen so that SLength(M) > N.
But to make this work, you need to make sure that the implicit function M(N) which determines M in terms of N as above is computable. Then let TESTP run P for M(N) steps where N is the length of TESTP, as above, and the proof is finished.
Lemma 1--- if S does not prove that P does not halt, then there is no constant C such that, for every N, S proves that P does not halt after N steps with a proof of length less than C characters.
This is not obvious--- I found out it is called "Kreisel's conjecture", and it is open. It seemed obvious to me, because if the proofs have constant length, then as N gets large they can only draw on finitely many ingredients, so one ought to be able to prove the statement with N left generic, without extra assumptions--- which is what formal logic requires to turn it into a (forall N) statement. But it is subtle, because the way to patch the proofs together is not immediate, and there is a paper out there claiming to have examples showing why.
The proof of the lemma gives a computable lower bound for SLength, which gives a computable upper bound M(N), which finishes the proof. The reason I separate out the lemma is that this is the only place where you need to muck around with logic; the rest only requires the computational halting stuff.
I want to say--- this possibly marginally crosses the threshold for what I would call "original research", but it's still just an exegesis on Godel.

That's it. This theorem was controversial for many decades, but Godel was convinced he understood the proof. I think I understand why he found this theorem so intuitive that he gave it a one-page proof--- it's because the deduction programs in his original proof have property 1 and property 2.

This is why I think that this method is the closest in spirit to Godel's original paper--- it reproduces both his mistakes and his future inspiration. Likebox (talk) 21:51, 26 September 2008 (UTC)[reply]

P=NP problem

I put in the example about the P=NP problem because I thought it showed a surprising relationship between undecidability and truth. The idea is to show a few other examples of such relationships:

  • If Goldbach's conjecture is undecidable, then it is true, since if it were false, it would have a testable counterexample and therefore be decidable (a counterexample check is sketched after this list).
  • If the twin prime conjecture is undecidable, it could still be either true or false (there might be a largest twin prime pair (p, p+2) without any proof existing that it is in fact the largest). We would mostly agree that the twin prime conjecture has a truth value which is unknown; its undecidability wouldn't tell us anything one way or the other.
  • If the continuum hypothesis is undecidable (as was conjectured by Gödel and later proved by Cohen), a mathematical platonist like Gödel might believe it still has an unknown truth value, but others might say that the question is simply meaningless.
  • If the P=NP problem is undecidable, then like the twin prime conjecture it still has a truth value which is unknown, but a consequence of undecidability would be a sharp quantitative bound on how far from being true it could be. It becomes "almost known" in the sense that P and NP would be "almost equal". I felt that this quantitative aspect of the consequences of undecidability was interesting.
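
To make the "testable counterexample" in the first bullet concrete, here is a small self-contained Python check (purely illustrative): a failure of Goldbach's conjecture at some even n would be confirmed by a finite computation of exactly this kind.

 def is_prime(k):
     if k < 2:
         return False
     d = 2
     while d * d <= k:
         if k % d == 0:
             return False
         d += 1
     return True

 def is_goldbach_counterexample(n):
     """True if n is an even number greater than 2 that is not the sum of two primes."""
     if n <= 2 or n % 2 != 0:
         return False
     return not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

 # e.g. any(is_goldbach_counterexample(n) for n in range(4, 10000, 2)) evaluates to False

So "Goldbach's conjecture is false" is a statement which, if true, has a finite verification; nothing comparable is available for the twin prime conjecture, which is what separates the first two bullets.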

Does it sound reasonable to put in a paragraph like the above? 67.117.147.133 (talk) 21:13, 26 October 2008 (UTC)[reply]

The thing is that, apart from the continuum hypothesis, none of the others (Goldbach conjecture, twin prime conjecture, P = NP conjecture) is actually known to be independent of anything. On the other hand the continuum hypothesis is undecidable in ZFC. The section in the article is "examples of undecidable statements", and I'm not sold on an "examples of things that might be undecidable" section. This is especially true with P = NP, since there is no reason to think it should be independent of anything, and some heuristic arguments why it shouldn't be [1] [2] [3]. Of course I am not saying these should be in this article either. — Carl (CBM · talk) 22:50, 26 October 2008 (UTC)[reply]
More importantly, there is no reason to think that there are any meaningful undecidable statements at all. Maybe everything is either true and provable (in a strong enough system) or false. Likebox (talk) 23:51, 26 October 2008 (UTC)[reply]
Well, that's a very different question. The article does point to philosophy of mathematics for the issue of "absolutely undecidable" statements. But the examples given in the article are all for results that are (at least in some sense) meaningful, and also known to be independent of well-studied systems of foundational interest. (And of course any formula φ that is consistent with ZFC is provable in the "strong enough" system ZFC + φ.) — Carl (CBM · talk) 00:50, 27 October 2008 (UTC)[reply]
I agree that statements not known to be undecidable shouldn't be in "examples of undecidable statements"; however, I think examples like the above are worth mentioning in the article, presumably in another section. P=NP is probably decidable based on the general belief that NP-hard problems can't be solved in less than exponential time (a stronger statement than P!=NP) plus the theorem I mentioned, which says that if P=NP is undecidable then NP-hard problems can be solved in time just barely above polynomial (way below exponential). Scott Aaronson has also written a long article[4] that I had cited, about the possible independence of P vs NP. Those FOM posts are interesting, and I believe the result I cited is a sharper version of the argument that Tim Chow described. The stuff by Doria and da Costa is just about certainly incorrect, as Ralf Schindler showed. It is very unlikely (maybe proven impossible) that NP=EXP since EXP contains all of PSPACE. 67.117.147.133 (talk) 03:51, 27 October 2008 (UTC)[reply]
My general opinion is that the stuff about P=NP is too specialized for this article, but would be good in Independence (mathematical logic).
I think it drifts too far away from the main topic of this article, which is Goedel's theorem. The section on examples of other undecidable statements is already a side topic, and material about how the hypothetical independence of P=NP would imply a certain bound in computational complexity theory is a different animal entirely.
I do think that the bullets you have above would be nice to cover. It would also be nice to expand Independence (mathematical logic) to a real article, instead of a stub. That article is explicitly about independent statements, so the material in your bullets is less tangential. We should also link to that article from the examples section here (instead of just to the list of statements undecidable in ZFC - that isn't a very satisfying link). — Carl (CBM · talk) 04:08, 27 October 2008 (UTC)[reply]

'There is no algorithm for deciding relations in which both + and x occur'

This quote comes from Dr. Goedel himself (in a 1964 Postscriptum to his simplified treatment of the incompleteness theorems delivered to the IAS in 1934: cf. The Undecidable, p. 72). Can someone explain to me why this is true? Since we can define multiplication as a sequence of additions I find this a bit surprising. Is it because the multiplier has to work as an unbounded μ-operator? But isn't that true of addition as well? (As I write this I'm thinking about the use of a counter machine algorithm wrt the Collatz Conjecture.) Thanks, Bill Wvbailey (talk) 15:28, 10 May 2009 (UTC)[reply]

Although you can define multiplication as repeated addition in the real world, there is no way to express the predicate x · y = z in the language with just 0, 1, addition, and equality, using only first-order logic. This is just a limit of the expressiveness of first-order logic; in general there are difficulties expressing inductive definitions in weak languages. The undefinability of graph connectedness in first-order logic, in just the language of graphs, is a similar problem where there is not enough strength to handle the inductive definition of reachability.
In fact, the theory of the natural numbers with 0, 1, addition, and equality, which is called Presburger arithmetic, is not only effectively axiomatizable, it is decidable, and thus much weaker than the theory of the natural numbers with 0, 1, addition, multiplication, and equality, which is very undecidable. The fundamental difference is that arithmetic in the language with addition and multiplication is able to encode and decode finite sequences of natural numbers in a way that allows them to be quantified, while arithmetic with only addition cannot do that. The ability to quantify over finite sequences then allows all primitive recursive functions to be defined. — Carl (CBM · talk) 22:58, 10 May 2009 (UTC)[reply]
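
The sequence coding Carl mentions is usually done with Gödel's β function, β(a, b, i) = a mod (1 + (i + 1)·b), which is expressible with just addition and multiplication. Here is a small, deliberately naive Python illustration of the coding; the brute-force search is only for demonstration and is not part of the logical point.

 from math import prod

 def beta(a, b, i):
     # Goedel's beta function: built from + and * (mod is definable from them)
     return a % (1 + (i + 1) * b)

 def encode(seq):
     """Brute-force search for a pair (a, b) with beta(a, b, i) == seq[i] for every i."""
     b = 1
     while True:
         bound = prod(1 + (i + 1) * b for i in range(len(seq)))
         for a in range(bound):
             if all(beta(a, b, i) == x for i, x in enumerate(seq)):
                 return a, b
         b += 1

 # e.g. a, b = encode([3, 1, 4]); then [beta(a, b, i) for i in range(3)] == [3, 1, 4]

The point is not the search but the definability: "the i-th element of the sequence coded by (a, b) is x" becomes a first-order formula over 0, 1, +, ·, which is exactly what the addition-only language cannot match.
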
I'm trying to noodle this out. I suspect the nut of the problem is this: whether or not logical AND(x,y) and NOT(x) can be derived from the available functions and formation rules. Without AND and NOT we cannot "do logic" in our system, and therefore we cannot simulate any arbitrary computational machine (e.g. a Turing machine) with our system, and therefore our system cannot “make statements about itself”. Thus x*y gives us AND(x,y) when x, y ∈ {0,1}; we can also get it from NOT(x) = 1 ∸ x and AND(x,y) = x ∸ NOT(y), where ∸ is proper subtraction. The IF x=y THEN ... ELSE ... construction is another source. Bill Wvbailey (talk) 16:38, 12 May 2009 (UTC)[reply]
No, it does not have anything to do with AND and OR; both of these are included in any case. The issue is exactly the problem of quantifying over all finite sequences of natural numbers. If one cannot quantify over sequences, one cannot quantify over execution histories of Turing machines, which are essentially sequences of natural numbers. — Carl (CBM · talk) 02:03, 13 May 2009 (UTC)[reply]
I had a feeling this is how you'd respond. Does any viable "system" always include "first order logic"? If so, then it must be buried (i.e. tacit) in simple recursion, and in e.g. counter machines and Turing machines -- there are no AND and OR instructions; AND and OR have to be arrived at by a sequence of instructions which include IF test THEN ELSE and composition. Maybe a place to start would be to give me a definition (or example) of what you mean by "to quantify"? Generalize? Count? Thanks, BillWvbailey (talk) 03:14, 13 May 2009 (UTC)[reply]
Yes, all the theories that Goedel's theorem is applied to include first-order logic.
By "quantify" I mean quantify as in quantifier. For example, to define multiplication from addition, one might say this:
n · m = p if and only if "there is a sequence of natural numbers of length m such that the first number in the sequence is n, each number after the first is obtained by adding n to the previous, and p is the last number in the sequence"
The difficulty here is that m is a parameter, which can vary, but the formula that expresses the sentence in quotes would have to have a fixed number of universal and existential number quantifiers independent of m. Thus there is a need for quantifiers that range over finite sequences of natural numbers, along with machinery in the first-order language to manipulate sequences (encode, decode, and check their length). — Carl (CBM · talk) 03:27, 13 May 2009 (UTC)[reply]
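
Written out schematically (this is just the quoted sentence in symbols, for m ≥ 1), the definition one would like is

 n \cdot m = p \;\Longleftrightarrow\; \exists s_1 \cdots \exists s_m \left( s_1 = n \;\wedge\; \bigwedge_{i=1}^{m-1} s_{i+1} = s_i + n \;\wedge\; s_m = p \right)

and the number of quantifiers and conjuncts grows with m, so this is a different formula for each value of m rather than a single first-order formula over 0, 1, +, = in which m appears as a variable.
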
This helps. Methinks I've conflated "Goedel systems" with Turing- and counter-machines and recursion (recursion being merely a component of a "Goedel system" . . . correct?). Will have to go away and mull. Thanks, Bill Wvbailey (talk) 03:55, 13 May 2009 (UTC)[reply]

"Illumination provided by Principia Mathematica "

" === Illumination provided by Principia Mathematica ==="

- In the preface to volume 1 of the Principia Mathematica, it is stated how a very large part of the labour of writing the work was expended on the contradictions that had plagued logic and the theory of aggregates.
- After examining a great number of hypotheses for dealing with the contradictions, it gradually became evident to the authors that some form of the doctrine of types had to be adopted if the contradictions were to be avoided.
- They chose a particular form of doctrine of types that seemed to them most probable while noting that there were other forms equally compatible with the truth of their deductions.
- They then said it should be observed that the whole effect of the doctrine of types is negative: it forbids certain inferences which would otherwise be valid, but does not permit any that would otherwise be invalid, which meant the inferences that the doctrine of types permits would remain valid even if the doctrine of types were ever found to be invalid.
-
- We can see now, in hindsight, how a question had been raised.
- Earlier in the preface, the authors had affirmed that the ideas and axioms with which they had started were sufficient to prove "as much as is true in whatever would ordinarily be taken for granted."
- But if the doctrine of types forbade certain inferences which would otherwise be valid, had it been implied that, now, there were truths as-yet-unidentified to which no sequence of inferences would lead?

Could someone explain to me what this section has to do with the incompleteness theorems? — Carl (CBM · talk) 10:37, 12 May 2009 (UTC)[reply]

As written, not much that I can see. It's true that Goedel invoked PM as his core and then adjoined the Peano axioms "merely to simplify the proof and [they are] dispensable in principle" (van H: p. 599, footnote 16). But I don't quite see what PM's restriction about "types" has to do with it, other than to introduce the notion and serve as an unacceptable solution to a knotty problem. It is true that the intro to PM did "illuminate" (itemize, discuss) the paradoxes. What seems to me far more important historically was the challenge thrown down by Hilbert's 2nd problem and his subsequent clarifications in the 1920's, then the attempts on one hand of Hilbert and Bernays to axiomatize mathematics against an increasingly-strong counter-current surrounding impredicativity, and then Finsler's 1926 (which everybody seems to forget), which as van H notes ". . . presents an example of a proposition that, although false, is formally undecidable. Finsler establishes the undecidability by suitably modifying the argument used by Richard in stating his famous paradox . . ." (van H: 438ff). While commentators argue that Finsler's proof failed, he did succeed in presenting a formally undecidable proposition. And it's clear from his method (use of the diagonal argument) that there was plenty of activity in this area that had little to do with PM's known flaws. (Goedel says in letters that he wasn't aware of Finsler's work at the time, and subsequently he thought Finsler was a putz.) Then there was Goedel's 1930 "Proof of the completeness of the axioms of the functional calculus of logic", shortly followed by his 1931. It seems clear to me that this new section should be condensed into a piece of a history section on what led up to the incompleteness theorems. Bill Wvbailey (talk) 15:24, 12 May 2009 (UTC)[reply]
I forgot also the development of "primitive recursion" and its role in Goedel's proofs. Bill Wvbailey (talk) 15:27, 12 May 2009 (UTC) cc'd the expurgated section here for reference purposes. Wvbailey (talk) 03:41, 13 May 2009 (UTC)[reply]

Weasel words / no sources

"Many logicians believe that Gödel's incompleteness theorems struck a fatal blow to David Hilbert's program towards a universal mathematical formalism which was based on Principia Mathematica. The generally agreed-upon stance is that the second theorem is what specifically dealt this blow. However some believe it was the first, and others believe that neither did."

Who are these logicians? Who are the 'some'... etc. Shouldn't there be a banner that more sources should be cited? 84.82.112.181 (talk) 14:14, 13 May 2009 (UTC)[reply]

Actually, all of that is explicitly sourced at Hilbert's second problem; the issue is that the text here did not properly link to that article. I copyedited it some. — Carl (CBM · talk) 19:02, 13 May 2009 (UTC)[reply]

Endless reference and supernatural numbers

Might I recommend adding two new sections: one elucidating the character of Gödel's statement as an endless chain of references rather than a self-referential statement, and another on the possible existence of supernatural numbers as a result of the addition of ~G as an axiom. In the first, we can explain that the chain of reference in the nested theories T, T', T", ... takes the form

G -- states unprovable --> G' -- states unprovable --> G" -- states unprovable --> ...

and that a proof of G would, by a translation theorem resulting from recursive axiomatizability, translate into a proof of G' in the next theory, the very proof that G states does not exist, thereby disproving the internal consistency of arithmetic.

In the second, we can explain how the addition of ~G as an axiom allows for the existence of a supernatural number that does not correspond to any numeral and yet inherits inductive properties in the theory T by playing the role of 'some number' that proves G'.

Such sections could pave the way to a clearer solution of the problem of incompleteness.

Scott7261 (talk) 12:42, 27 May 2009 (UTC)[reply]

Mistake in "Background" Section

Hello, I'm not absolutely sure about this, but I think there is a mistake in this article.

"A formal theory is said to be effectively generated if its set of axioms is a recursively enumerable set. This means that there is a computer program that, in principle, could enumerate all the axioms of the theory without listing any statements that are not axioms. This is equivalent to the ability to enumerate all the theorems of the theory without enumerating any statements that are not theorems."

I don't think this is right. It is possible for the set of axioms of a theory to be recursively enumerable, but for the set of theorems not to be. Indeed, we cannot enumerate the theorems without first having an effective decision procedure for whether or not a given sentence is a theorem. But using only an enumeration of the axioms we cannot achieve this - we cannot even decide for sure whether the given sentence is an axiom - it may not have turned up as one of the first 100 axioms in the enumeration, but it might be the 101st...

I believe that "its set of axioms is a recursively enumerable set" should be changed to "its set of axioms is an effectively decidable set".

Thanks. 118.92.139.152 (talk) 13:04, 11 June 2009 (UTC)[reply]

The text is correct. If the axioms are just enumerable, that is enough to enumerate all formal proofs, which in turn is enough to enumerate the theorems (= the conclusions of those proofs). This can be done perfectly well without knowing a decision procedure. For example one can enumerate the theorems of Peano arithmetic but there is no decision procedure for them.
Moreover, if a first-order theory has an enumerable set of axioms then it has an equivalent, decidable set of axioms. I forget who first proved this, but it is not difficult to reprove. It gives a second way to resolve your objection. — Carl (CBM · talk) 13:51, 11 June 2009 (UTC)[reply]
(edit conflict) You can create algorithms that work as the article describes, based on partial progress: the algorithm starts by enumerating some of the consequences of the first 100 axioms, then goes on to interleave the consequences of the first 200 axioms, and so on. All theorems can be generated in this way, in a desperately infeasible sense of "can". — Charles Stewart (talk) 13:53, 11 June 2009 (UTC)[reply]
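
The interleaving Charles describes fits in a few lines of Python; axioms is assumed to be a (re-invocable) generator of the axiom enumeration, and derivable(axiom_list, k) a hypothetical bounded proof search returning everything provable from the listed axioms by a proof of size at most k.

 import itertools

 def enumerate_theorems(axioms, derivable):
     """Dovetail: at stage k, use the first k axioms and proofs of size at most k."""
     seen = set()
     for k in itertools.count(1):
         first_k = list(itertools.islice(axioms(), k))
         for theorem in derivable(first_k, k):    # hypothetical bounded proof search
             if theorem not in seen:
                 seen.add(theorem)
                 yield theorem

Every theorem has some finite proof from finitely many axioms, so it appears at some stage, and nothing that is not a theorem ever does; "desperately infeasible" is indeed the right description.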

clarify tag

An IP editor added a clarify tag regarding the word "true" in the statement of the theorem. The article presently states,

"The word "true" is used disquotationally here; the Gödel sentence GT states that no natural number with a certain property exists, and to say GT is true simply means that, in fact, no number with that property exists (see Franzén 2005 pp. 28–33). "

Frankly, I do not believe it can be put any more clearly than that. I would welcome suggestions from the IP or anyone else if that sentence is, in fact, unclear. — Carl (CBM · talk) 11:20, 16 July 2009 (UTC)[reply]

I reported "true" together with ref [1] as unclear after re-reading [1] several times, and I imagine most readers will agree. What is the point of writing things that nobody will understand? It needs redoing in a way that people can follow. How does GT, for example, relate to the text on the page? It's not obvious. Ref [1] seems to say that the word "true" is not necessary, so what is it doing in the definition of the theorem on the page? How can anyone follow that twisted logic? It's completely nuts!
Apart from that, Franzén's discussion of the use of the word "true" in this context is quite involved (truth in arithmetic and mathematical statements, etc.) and may not support using "true" in the article. I need to have more of a look at that.
There are various possibilities to rectify this: 1) Remove the word "true" from the definition and/or replace with some other wording more like Godel's original Theorem VI. 2) Remove or rewrite ref [1] so it is clear. 3) Get to the bottom of this using more leading references (preferred option) since I think it's important not to spread a popular misconception if it is one. —Preceding unsigned comment added by 92.39.205.210 (talk) 12:10, 16 July 2009 (UTC)[reply]
The use of the word "true" as in the statement of the theorem here is the same as in the statements given in all mathematical logic textbooks, so removing the word "true" would be inappropriate. The article directly says, "The implied true but unprovable statement is often referred to as “the Gödel sentence” for the theory." so I don't see much possibility for confusion when the footnote says, "the Gödel sentence GT". Could you be more specific about exactly what part of the sentence from the footnote that I quoted above is unclear? I still don't see how to rephrase it in any beneficial way. — Carl (CBM · talk) 12:26, 16 July 2009 (UTC)[reply]
Well, it's mathematically true, but not logically true, since one could hold that, say, PA+~Con(PA) is the right basis for interpreting statements of arithmetic. I think there is a tricky point here: I don't like not adding the appropriate qualifications here, but likewise, they are more likely to confuse matters than clear up the doubts that people like the IP editor have. The second sentence of the footnote, about truth relative to consistency statements over PRA, presents the right way of looking at the incompleteness phenomena, but few people looking up the footnote are going to latch on to this.
The way I've explained it in the past is that if you think the theory in question is true, then you should believe it is consistent. That belief is what commits you to a sufficiently strong interpretation of arithmetic to accept the Gödel sentence as being true. If you can find a way of believing in the truth of some axiomatisation of arithmetic without believing in the theory's consistency... well, this will not be a sticking point for so many readers. — Charles Stewart (talk) 13:28, 16 July 2009 (UTC)[reply]
Interesting comments, but Franzén 2005 p. 3 clears this up for us by the definition given there of the incompleteness theorem, which in this context equates to a rewording as follows:
Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory that proves certain basic arithmetic truths, there is an arithmetical statement in the language of the theory that can be neither proved nor disproved (i.e. is undecidable) in the theory.
So ref [1] can be removed, as well as the contentious word "true". This simplifies and clarifies exactly what is going on with the incompleteness theorem. The truth of the arithmetical statement is not mentioned in Theorem VI, and such truth is an independent topic. Personally I would say arithmetic comes from assumed axioms too, and therefore there is no absolute truth in it either.
Further reading of Franzén 2005 pp. 28–33 did not show justification for using "true", but rather a general commentary on the question. So a reference would be required for its use. In fact Franzén mentions Tarski on p. 32, showing A is true <--> A, so "is true" is superfluous, thereby supporting the new rewording above, since you should avoid saying the same thing twice (pleonasm).