
Talk:Mathematical induction




Untitled

Older discussion is at /Archive

Equivalence to WOP

In the intro I read:

Indeed, the validity of mathematical induction is logically equivalent to the well-ordering principle.

What is it referring to? We have a section "proof or reformulation of mathematical induction" where it is shown that MI can be proven assuming WOP and some other axioms, but this is something different from "logical equivalence".--Pokipsy76 17:32, 28 August 2007 (UTC)[reply]

I've simply removed the sentence.  --Lambiam 13:31, 9 January 2008 (UTC)[reply]


It seems to me that the idea is that an analogous proof would go through for any well-ordered set. That is, that any well-ordered set admits a proof technique analogous to mathematical induction.

For example, on the ordinals, we have transfinite induction; on trees or lists over well-ordered sets we have the appropriate varieties of structural induction, and so forth.

-- Dominus (talk) 15:07, 9 January 2008 (UTC)[reply]

There is in fact a discussion of the issue that almost says this (and certainly implies it) at Structural induction#Well-ordering. In fact, we don't need a (total) well-order; a (partial) well-founded relation suffices. The following two statements are logically equivalent in standard predicate logic: (1) induction is valid for some domain with a given relation where the induction process does not use a "base case" but only an induction step, which consists of concluding to the validity of a property for some element from the inductive hypothesis of the validity of that property for all predecessors of that element in the given relation, and: (2) the given relation is well-founded. I'd be happy to add this to the encyclopedia, if it were not for the requirement of citing published reliable sources, which I don't have for this.
For the naturals specifically, however, the article Well-ordering principle seems an obfuscation to me, what with the naturals being a subset of the integers and the consideration of the embedding of the naturals into the reals as a complete space, and the mention of Modus Tollens.  --Lambiam 00:49, 10 January 2008 (UTC)[reply]
The reason it's stated as being logically equivalent is that not only can you get MI from assuming the WOP and some of Peano's axioms, but you can also get the WOP from assuming MI and some of Peano's axioms. Each one logically leads to the other, hence they're considered logically equivalent. If I feel ambitious enough to look through old math texts for a citation, I will add one at a later time.
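
A rough sketch of the two directions just described, assuming the remaining Peano axioms as background (an informal outline, not a sourced proof):

    WOP ⇒ MI: assume $P(0)$ and $\forall n\,(P(n)\to P(n+1))$, but $\neg\forall n\,P(n)$.
    Then $S=\{\,n:\neg P(n)\,\}$ is nonempty, so by WOP it has a least element $m$.
    Since $P(0)$ holds, $m=k+1$ for some $k\notin S$, i.e. $P(k)$ holds; the induction step then gives $P(k+1)$, that is $P(m)$, a contradiction.

    MI ⇒ WOP: let $S\subseteq\mathbb{N}$ have no least element, and let $Q(n)$ say "$k\notin S$ for all $k\le n$".
    $Q(0)$ holds (otherwise $0$ would be a least element of $S$), and $Q(n)\to Q(n+1)$ (otherwise $n+1$ would be a least element).
    By induction $\forall n\,Q(n)$, so $S$ is empty; contrapositively, every nonempty $S\subseteq\mathbb{N}$ has a least element.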

Is there a proof in predicate logic of mathematical induction?

--Philogo (talk) 00:31, 29 February 2008 (UTC) --Philogo (talk) 21:50, 25 February 2008 (UTC) —Preceding unsigned comment added by Philogo (talkcontribs) 21:46, 25 February 2008 (UTC)[reply]

Yes, I know. Usually induction is given as an axiom or axiom scheme, but I have not seen a proof in predicate logic. Have you seen one? Where can I see a proof in higher-order logic? --Philogo (talk) 13:21, 26 February 2008 (UTC)[reply]
It is not strange that you haven't seen a proof in predicate logic, because the principle cannot even be formulated in predicate logic. Usually mathematicians do not give their proofs in a formal proof system. Proofs of the axiom of induction for various higher-order logics are given in (Hatcher, William S., 1982. The Logical Foundations of Mathematics. Pergamon).  --Lambiam 19:54, 26 February 2008 (UTC)[reply]
Thanks, I'll see if I can lay my hands on it. I am not sure whether it is true to say that the principle cannot be formulated in predicate logic. If, as you say, it can be proven in higher-order logic, and any higher-order logic is a predicate logic, then it must be expressible in that higher-order logic and so, a fortiori, in predicate logic. Mendelson, Introduction to Mathematical Logic, Van Nostrand Reinhold, 1964, page 103 has

(S9) For any wf 𝒜(x) of S, 𝒜(0) ⊃ ((x)(𝒜(x) ⊃ 𝒜(x′)) ⊃ (x)𝒜(x))
... (S9) is an axiom schema providing an infinite number of axioms.
These axioms would be expressible in just first-order predicate logic. So my question is whether, for any wf 𝒜(x) of S, 𝒜(0) ⊃ ((x)(𝒜(x) ⊃ 𝒜(x′)) ⊃ (x)𝒜(x)) is a theorem. --Philogo (talk) 20:55, 26 February 2008 (UTC)[reply]

If the induction formulas have not been added as axioms, then for specific wfs there may be a proof, like when (∀x)ℬ(x) is a theorem, but lacking axioms for induction, such induction formulas cannot be proved in general, since there are nonstandard models of the integers for which induction does not hold. A simple nonstandard model is to adjoin an extra element ω to the standard naturals for which ω' = ω. Then if ℬ(x) is the wf x' ≠ x, the induction formula for this wf fails to hold in the model and should therefore not be provable (unless you're working with an unsound system).  --Lambiam 22:18, 26 February 2008 (UTC)[reply]
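
To make that counterexample concrete, here is a small, purely illustrative Python check of a finite snapshot of the model (the element names and the cutoff at 10 are arbitrary choices of mine; a finite loop only inspects the listed elements, it does not verify the step over the whole infinite model):

    # Finite snapshot of the nonstandard model N ∪ {ω} with ω' = ω described above.
    OMEGA = "ω"
    elements = list(range(10)) + [OMEGA]

    def succ(x):
        # successor: ordinary +1 on the naturals, but ω' = ω
        return OMEGA if x == OMEGA else x + 1

    def B(x):
        # the wf  x' ≠ x
        return succ(x) != x

    base = B(0)                                              # True
    step = all((not B(x)) or B(succ(x)) for x in elements)   # B(x) → B(x') at every listed element
    conclusion = all(B(x) for x in elements)                 # ∀x B(x)
    print(base, step, conclusion)                            # True True False: premises hold, conclusion fails at ω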
The first-order theory S referred to has a single two-place predicate letter A, and we may write t=s for A(t,s); one individual constant a (written as 0); and three function letters f, g and h, and we may write t′ for f(t), t+s for g(t,s), and t∙s for h(t,s). S has eight proper axioms (I will not bother to list them) and the axiom schema (S9) referred to above. ((S1) thru (S9) together correspond - with one proviso - to Peano's Postulates).

S, with the proper axioms, is of interest since it is adequate for the proofs of all the basic results of elementary number theory - according to Mendelson. What I am interested in is whether, for any wf 𝒜(x) of S,
"𝒜(0) ⊃ ((x)(𝒜(x) ⊃ 𝒜(x′)) ⊃ (x)𝒜(x))"
is a theorem; presumably not, otherwise we would not need it as an axiom schema! But for any wf 𝒜(x),
"𝒜(a) ⊃ ((x)(𝒜(x) ⊃ 𝒜(x′)) ⊃ (x)𝒜(x))"
eg
"F(a) ⊃ ((x)(F(x) ⊃ F(x′)) ⊃ (x)F(x))"

intuitively looks like a logical truth. Were it one, or were it included as a (logical) axiom, we would not need it as a proper axiom. Hence my original question, "Is there a proof in predicate logic of mathematical induction?" or, more precisely perhaps, is there a proof in predicate logic (not necessarily first-order) of any wf of the form:
"𝒜(a) ⊃ ((x)(𝒜(x) ⊃ 𝒜(x′)) ⊃ (x)𝒜(x))"

Perhaps the answer is in Hatcher, William S., 1982. The Logical Foundations of Mathematics. --Philogo (talk) 00:24, 29 February 2008 (UTC)[reply]

No, it isn't possible to prove arbitrary induction instances using logic alone. It's easy to create structures in which various induction principles fail - start with the natural numbers and adjoin some extra elements greater than all the natural numbers. In the division between "logical axioms" and "mathematical axioms", the induction ones fall in the latter category. — Carl (CBM · talk) 02:23, 29 February 2008 (UTC)[reply]
It follows, does it not, that there is no proof of, eg

"F(a) ⊃ ((x)(F(x) ⊃ F(x′)) ⊃ (x)F(x))"
no matter what interpretation we have of the ′ function.--Philogo (talk) 13:34, 29 February 2008 (UTC)[reply]

It will depend on what formula F is. If F(x) is "x=x" then that particular induction axiom will of course be provable. Or maybe F is a logically valid sentence with no free variables at all. But there are many formulas F for which the corresponding induction axiom is not provable. — Carl (CBM · talk) 14:55, 29 February 2008 (UTC)[reply]

Thanks for that. On reflection I believe I should have said: it follows that the sentence
(S9a) F(a) ⊃ ((x)(F(x) ⊃ F(x′)) ⊃ (x)F(x))
is not logically true, because there is at least one interpretation where it is false when the domain of the argument to F is not denumerable. I had thought that the function "′" ensured that the domain of x was denumerable. Assuming not, then, were there some wff G(x) which would be true just in case the domain of x was denumerable, then its conjunction with (S9a), i.e. (S9a) & G(x), i.e.
F(a) ⊃ ((x)(F(x) ⊃ F(x′)) ⊃ (x)F(x)) & G(x)
might be a logical truth, and if it were a logical truth but there were no proof of it in any theory T, then any such theory T would not be complete. My apologies if this is all old stuff but I have not found much relevant discussion on this post-Frege.--Philogo (talk) 18:22, 29 February 2010 (UTC) --Philogo (talk) 18:17, 29 February 2008 (UTC)--Philogo (talk) 18:29, 29 February 2008 (UTC)[reply]

I'm not sure what you mean by "logically true". Many theories are incomplete, and you cannot hold that against truths that are not covered by some such theory, but are included in another one.
In the counterexample I gave above to (S9), namely with the domain of non-standard "naturals" N ∪ {ω}, the domain is denumerable, so non-denumerability is not the issue.
The formulas you give for (S9) and variants are hard to interpret because the parentheses are not balanced. I don't have the 1964 edition of Mendelson, but the axiom schema (S9) I see in Mendelson (4th edition, 1997, ISBN 978-0412808302, page 155) is:
ℬ(0)   ⇒   ((∀x)(ℬ(x) ⇒ ℬ(x′))  ⇒  (∀x)ℬ(x)).
Forming the conjunction (S9) & G(x) is meaningless, since the variable x is not bound. There is no predicate of first-order theory expressing that some domain is denumerable. In giving proofs in logic, it is irrelevant what properties a predicate such as G is meant to have or not have in an intended interpretation ("true just in case the domain of x was denumerable"), since in the proof you have no access to the interpretation, so you cannot use this extra-logical knowledge.
The successor function "′" is just an arbitrary symbol in logic; like all predicate and function symbols, it has no other properties than follow from given axioms.
Basically everything CBM wrote above repeats points I already made in my reply from 22:18, 26 February 2008. I am sorry I was not clearer, but let me once more repeat the situation:
  1. If you add induction as an axiom schema, then (of course) you can now prove it for any wf of your logic, since each axiom is a theorem.
  2. If you do not add induction as an axiom schema, you can not prove it for arbitrary wfs. (That is precisely the reason it is added as an axiom schema: that you cannot prove it otherwise.)
  3. It is possible to prove the induction rule for some specific wfs, such as the tautological wf x = x.
 --Lambiam 15:06, 1 March 2008 (UTC)[reply]
Also, the issue of denumerability isn't important. Any first-order structure has an elementarily equivalent denumerable substructure, by the Lowenheim-Skolem theorem, so we could assume all structures are denumerable if we are only interested in whether they satisfy specific sentences. — Carl (CBM · talk) 15:14, 1 March 2008 (UTC)[reply]
Not sure as to the state of indentation here... but Lambiam (and others interested) should check out non-standard arithmetic and the compactness theorem, which is very useful for proving that certain properties are not expressible in first-order logic. For example, if you take the structure 𝒩 with universe ℕ and a 2-ary relation A for which A(x, y) says "x and y are adjacent" - i.e. |x − y| = 1 - then you can prove that there is a structure ℳ which is elementarily equivalent to 𝒩 (i.e. satisfies exactly the same sentences), but is disconnected - in the sense that there are two elements of ℳ which are not connected by a finite sequence of adjacent (according to A) elements. Thus there is no sentence in first-order logic which is true of a structure if and only if it is connected, since if there were, 𝒩 would satisfy it and ℳ would not; but we constructed them to be elementarily equivalent. 69.181.216.178 (talk) 22:27, 8 May 2009 (UTC)[reply]
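
For readers who want the compactness step spelled out, here is one standard way the argument goes, using the notation above (a sketch, not a quotation from any particular textbook):

    Add two new constant symbols $c,d$ and, for each $n$, a sentence $\sigma_n$ saying
    "there is no chain of at most $n$ $A$-adjacent elements leading from $c$ to $d$".
    Every finite subset of $\mathrm{Th}(\mathcal{N})\cup\{\sigma_n : n\in\mathbb{N}\}$ is satisfiable in $\mathcal{N}$ itself
    (interpret $c$ and $d$ far enough apart), so by compactness the whole set has a model $\mathcal{M}$.
    Then $\mathcal{M}$ satisfies the same sentences of the original language as $\mathcal{N}$,
    yet $c^{\mathcal{M}}$ and $d^{\mathcal{M}}$ are not joined by any finite chain of adjacent elements.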

Axiom of Induction

Is not the formula for the Axiom of Induction wrong? It should be a &, but there it is a ^. Reko (talk) 06:05, 10 April 2008 (UTC)[reply]

The wedge is used in formal logic to mean "and" (or more accurately, in the intended interpretation, the wedge is interpreted as "and"). --196.209.178.209 (talk) 20:05, 10 April 2008 (UTC)[reply]
For the notations used, see Logical connective.  --Lambiam 22:13, 10 April 2008 (UTC)[reply]


Equivalent proof by contradiction

Is it worth mentioning in the article that proof by induction can easily be converted to proof by contradiction by use of the logical contrapositive? (or would it just confuse non-mathematicians further?) The advantage of giving the argument as a proof by contradiction is that it avoids the "problem" of an infinite number of steps. Dbfirs 19:55, 4 July 2008 (UTC)[reply]

EVERY proof by mathematical induction can be written as a proof by contradiction by saying that if the proposed theorem is false, then there is a smallest counterexample. Then one shows that the number immediately preceding that counterexample would also be a counterexample, contradicting the "smallest" nature of the smallest counterexample. Probably that should be mentioned somewhere in the article. But not just as a fact concerning only that one proof. Michael Hardy (talk) 21:07, 4 July 2008 (UTC)[reply]
It seems to me that the concepts (induction and contrapositive) are orthogonal. It also seems to me that there is no reason to bring in the logical contrapositive. It would just confuse the article with TMI (too much information). DeaconJohnFairfax (talk) 22:10, 22 July 2008 (UTC)[reply]
The contrapositive (take the smallest false case and show there must be a smaller one) is Fermat's "method of infinite descent". Maybe it could be spun off into a separate article (if it hasn't been already) if we don't want to discuss it here. CharlesTheBold (talk) 18:42, 7 December 2009 (UTC)[reply]
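
For concreteness, the conversion described above can be sketched as follows (an outline only, in the spirit of Michael Hardy's comment):

    Suppose $P(0)$ and $\forall n\,(P(n)\to P(n+1))$ have been established, and suppose $P$ failed for some number.
    By the well-ordering principle the set of counterexamples would have a least element $m$.
    The base case rules out $m=0$, so $m=k+1$ with $P(k)$ true; the induction step then gives $P(k+1)$, i.e. $P(m)$,
    contradicting the choice of $m$. So $P(n)$ holds for every $n$, with the worry about "infinitely many steps"
    replaced by a single appeal to a least counterexample.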

Redundant sections on "Transfinite Induction" - recommend delete one.

The subsection called "Generalization" is redundant with the section called "Transfinite Induction." I recommend that one of them be deleted. It seems to me that it would be better to delete the one called "Generalization" because it contains less information and is more difficult to understand than the one called "Transfinite Induction." DeaconJohnFairfax (talk) 22:13, 22 July 2008 (UTC)[reply]

Done. Good catch. --Trovatore (talk) 22:17, 22 July 2008 (UTC)[reply]

HOMEWORK/GEOMETRY

WHAT IS THE MEANING OF CONSECUTIVE EVEN? —Preceding unsigned comment added by 71.100.169.80 (talk) 00:10, 24 August 2008 (UTC)[reply]

A Thank-You to the Editors of the Page

The article took about seventy pages of my textbook and condensed the result into a (readable) two to three page document. You've been a wonderful help. As this evolves please continue to remember the non-experts. Nickjost (talk) 21:40, 27 September 2008 (UTC)[reply]

Example

The example section should use k rather than n for the proof. It is against convention to reuse n in the proof. Opinions? Tparameter (talk) 02:25, 20 May 2009 (UTC)[reply]

Good point; "m" or any other random constant should be used in place of n. —Preceding unsigned comment added by Sidhu 2201 (talkcontribs) 14:29, 12 October 2009 (UTC)[reply]

I believe it should be k rather than n also. See below (BTW this is just a draft of how it should look with basic steps).


Prove that 1 + 2 + ⋯ + n = n(n+1)/2.

solution

We have shown it is valid when n = 1, and we have proved that if it is valid for n then it is also valid for n+1; therefore the statement is valid for n = 1, 2, 3,…

James137 (talk) 00:30, 4 November 2010 (UTC)[reply]

Isn't the proof circular? It assumes that 1 + 2 + ... + k = k(k+1)/2, which is the statement that it attempts to prove.
Fungalmungal (talk) 03:07, 9 November 2010 (UTC)[reply]

No, it's not circular. There are 2 things you need to do for this:
1. Prove that the formula is right for n=1
2. Prove that if the formula is right for n, then it is also right for n+1
Thus, you will get, "If it's right for n=1, then it's right for n=2. If it's right for n=2, then it is right for n=3. If it's right for n=3, then it's right for n=4. And so forth". Wisnuops (talk) 05:20, 26 March 2011 (UTC)[reply]
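
Written out for the example under discussion, the non-circularity is easier to see: the induction hypothesis is assumed for one fixed k, not for all n (a sketch only):

    Claim: $1+2+\cdots+n=\tfrac{n(n+1)}{2}$ for all $n\ge 1$.
    Base case ($n=1$): $1=\tfrac{1\cdot 2}{2}$.
    Induction step: assume the claim for one particular $k$, i.e. $1+\cdots+k=\tfrac{k(k+1)}{2}$.
    Then $1+\cdots+k+(k+1)=\tfrac{k(k+1)}{2}+(k+1)=\tfrac{(k+1)(k+2)}{2}$, which is the claim for $k+1$.
    By induction the claim holds for every $n\ge 1$; at no point is the full statement assumed in its own proof.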

1st vs 2nd order logic

I added a mention that the axiomatic definition in the article used second-order logic, since the issue of what constituted a property that one could use induction on bugged me for a long while between when I learned about (unformalized) induction in high school algebra, and when I studied it more formally much later. If it seems too distracting then please feel free to remove it. I have the feeling that it helps the exposition, but I'm not really sure of this. 66.127.55.192 (talk) 09:36, 17 February 2010 (UTC)[reply]

Subtle Disinformation Found and Removed, More May Be Present

Please see: http://en.wikipedia.org/w/index.php?title=Mathematical_induction&action=historysubmit&diff=332756958&oldid=331892638

Here an editor using IP 99.141.191.52 changed four numbers in al Karaji's proof, simply incrementing them all by one. The edit summary for the post was simply, "trolling", which was a completely honest description and an example of hiding in plain sight.

As I am not an expert on this subject, nor have I thoroughly read the bulk of the article, it's possible that other instances of insidious abuse are still present. Please be on the lookout.

Riyuky (talk) 16:54, 24 February 2010 (UTC)[reply]

Boolos, the sorites, and "Down the Slippery Slope"

An anonymous editor recently added a mention of the sorites paradox, which I think was a good idea. But to address this properly requires more discussion, I think, because the sorites paradox appears on its face to be a refutation of induction. George Boolos wrote an essay "The Justification of Mathematical Induction", reprinted in his anthology Logic, Logic, and Logic, which I think addresses this point adequately. It might usefully be mentioned here. I don't have a copy of the essay handy, but I will eventually try adding something about the sorites, based on that essay, if nobody else does it first. —Dominus (talk) 17:44, 4 March 2010 (UTC)[reply]

I had the wrong essay. It is indeed in Logic, Logic, and Logic, but the essay I wanted is Chapter 22, "Zooming Down the Slippery Slope". The essay deals directly with the sorites paradox, considering explicitly the argument: A. Zero is small. B. If n is small, then n+1 is small. C. Therefore, one billion is small. C follows from A, B, and the induction principle. Boolos is unwilling to discard the induction principle, and wants us to stipulate that A is true and C is false (if not, change the constants) and so the induction premiss B must fail. But, as Boolos says, "the trouble is that it looks true". I will take a closer look at the essay and see if there is anything there that might be usefully added to this article, or to the sorites article. —Dominus (talk) 21:04, 9 March 2010 (UTC)[reply]
It sure looks wrong to me. Suppose "1" isn't "small" (whatever that means!). Then even though n may be "small" (e.g. 0.000001), n+1 isn't "small". Now if n is an integer and 1 is an integer, then none of n, n+1 etc. are "small". BillWvbailey (talk) 23:31, 9 March 2010 (UTC)[reply]
Well, n can't be 0.000001 — in context, n is clearly a natural-number variable, and is not allowed to take real-number values. I think you don't have to agree with Boolos's argument, but you should give him credit for not making one that's trivially wrong. In particular you need to pick a notion of "small" for which the argument is not obviously wrong on its face. --Trovatore (talk) 23:52, 9 March 2010 (UTC)[reply]
Ahh, context, context, and context. Let's accept the context. Then, the question of how "small" is defined, and in particular the relationship between "1" and "small", leaps out like the proverbial hammer hitting the smashed thumb. So let's follow this through to the bitter end. Specify a little relational matrix such as { small R small yields small, small R large yields large, large R small yields large, large R large yields large}, and let's define the relationship R as the classical intuitive "successor" (whatever that means). Then we have to define 1 as "small" or as "large". Define 1 as "small". Is "0" large or small? Define "0" as small. Per our relational matrix, 0+1 --> small+small --> small. Etc. Ergo all numbers are "small"; each successor inherits the characteristic "small". This is like a symbol "6" inheriting from symbol "5", via the relationship "+ 1", the characteristic "integer". About all I'm getting from this trivial exercise is that the symbol string "small" that we've associated with 0 and with 1, together with the relation-matrix and our definitions, means that "small" is inherited as a characteristic (like "integer"). Nothing more. Bill Wvbailey (talk) 01:20, 10 March 2010 (UTC)[reply]
I see no value in arguing with my two-sentence summary of Boolos's article. The best one could hope for would be to refute my summary without ever fully engaging the real argument. I also see little value in arguing directly with the article on this talk page. The article is published, peer-reviewed scholarly research (Boolos, George (1991). "Zooming Down the Slippery Slope". Noûs. 25: 695–706.) by a well-regarded mathematical logician. Wikipedia core policy is that this material is appropriate for inclusion regardless of whether we think it is correct. —Dominus (talk) 05:42, 10 March 2010 (UTC)[reply]
Hey, you brought this up on a talk page. Ergo the people talk. I assert that your premise "If it is published then it must be put in a wiki article" is false. The question isn't whether it is published; the question is "Is it useful to this article?" I'd say, based on the fact that when I read your talk-page summary of Boolos' little whatever-it-is even my non-mathematician's hair (what's left of it) stood on end, this should give you pause. At best this "infinite sorites" argument seems a distraction, and as I kind of demonstrated above, its apparent "punch-line" derives from its confusing formal operations on symbols with their meanings as defined outside the system. To my mind the "horse of a different color" section also is distracting in a similar way.
I did not say "If it is published then it must be put in a wiki article"; I only said it was appropriate for inclusion. And if you think that discussing the content of an article you haven't read is a good use of your time, please go ahead. —Dominus (talk) 14:49, 11 March 2010 (UTC)[reply]
I am reacting to what you wrote, not what Boolos wrote. Boolos is not going to write the entry into this article, you are. It's important that you have it correct if you insist on adding it. But two more problems come to mind with your rebuttals. (1) You insist above that because an eminence gris (aka Boolos) wrote something, it must be good/wonderful/true. This is a pronouncement ex cathedra. We should be debating the argument per its own merits and not accept pronouncements ex cathedra. (2) A sorites is a circular argument (at least my Webster's New Collegiate Dictionary says it is) -- e.g. A-->B & B-->C & C-->D & D-->A. Closing the loop is necessary for an argument to be a sorites (according to my dictionary). I checked 18+/-1 logic books (no kidding, including Boolos, Burgess, and Jeffrey 2002 Computability and Logic Fourth Edition) but found nothing in their indices re "sorites". At this time I can see no way that a sorites has anything whatever to do with induction, but maybe you can expand the Boolos presentation and show us what's going on in more detail. Bill Wvbailey (talk) 00:27, 12 March 2010 (UTC)[reply]
I did not insist that because an eminence gris wrote something, it must be good, wonderful, or true. I only said that scholarly research published in peer-reviewed literature was appropriate for inclusion in Wikipedia. (I am glad that this time you did not falsely attribute a manufactured quotation to me; thank you.) As for your ignorance of the sorites paradox and your inexplicable failure to find it in at least 17 logic books, in the Wikipedia article to which I linked in my initial message, or in the references cited in that article, you might try: "Sorites paradox" entry in the Stanford Encyclopedia of Philosophy, and particularly the sections of that article that discuss its relationship to mathematical induction. —Dominus (talk) 17:32, 12 March 2010 (UTC)[reply]
RE improving the article: This article could use some "philosophic" development. I've read that crows can count to a certain number. Does this say something "deep" about neural wiring and its relationship to "number sequence"? And just what is the human experiential relationship between "inductive reasoning" and "mathematical induction"? I remember a fleeting reference to the fact that Brouwer tried to base "number sequence" on an intuitive notion of "time sequence". And there's the Melzak-Lambek machines based on digging a hole ("empty", "zero"), then throwing into or removing pebbles (discrete objects) from the holes per an instruction-list (so what is the relationship between an instruction list and induction?), and Post-Turing machines with their primitive tally marks (ditto). And how does the intuitive notion of one-to-one correspondence play into "induction" e.g. the way we used to keep track of distance in the army (put a pebble in the pocket for each 100 steps), shepherds keeping track of the count of their flocks (tally marks carved in sticks as the sheep pass through a gate, or pebbles moved between two bags)? What do the various "schools" -- Logicism, Intuitionism, Formalism, Platonism -- have to say about "mathematical induction"? Bill Wvbailey (talk) 16:12, 10 March 2010 (UTC)[reply]

I still don't get it... What if a domino was missing?

Perhaps it's just me, but after reading the entire article I can only think... Well, what if I removed the 800th domino such that the 801st domino won't fall, or what if it was missing in the first place? Thus the proof isn't really a proof, but just a big assumption or hope that everything in the world is right - and of course, it isn't. I just don't get it, how could any of this be used to prove anything? --Balupton (talk) 13:50, 19 May 2010 (UTC)[reply]

If the 800th domino is missing then the proof fails to show that anything holds beyond case 799. But suppose for example that you want to prove that 1 + 2 + ⋯ + n = n(n+1)/2 for every positive integer n.
The proof doesn't involve any missing dominoes, so it works. That's an example of how it can be used to prove something. The formula above works when n = 1, and if it works for any particular n, then it works in the next case after that, with n + 1 instead of n. Hence all the dominoes fall. Michael Hardy (talk) 16:22, 19 May 2010 (UTC)[reply]
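
A tiny machine check of that example, offered only as an illustration (it assumes Python with the sympy package available; the symbolic identity is exactly the algebra behind the induction step, and the last line spot-checks the base case):

    # Induction step for 1 + 2 + ... + n = n(n+1)/2, checked symbolically.
    from sympy import symbols, simplify

    k = symbols("k")
    lhs = k * (k + 1) / 2 + (k + 1)        # induction hypothesis plus the next term
    rhs = (k + 1) * (k + 2) / 2            # claimed closed form at k + 1
    print(simplify(lhs - rhs) == 0)        # True: the step works for every k
    print(sum(range(1, 2)) == 1 * 2 // 2)  # True: the base case n = 1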

---

Here's my take on it: the axiom of induction is just one of a collection of axioms -- commonly called the Peano axioms -- that assume the existence of an entity we name 0 (in some versions 1, as noted below 0 is similar to "empty tape-square" or "empty hole") and the "successor", in particular the "successor of 0" (that we name "1") and the successor of any successor; we commonly name these thingies that we write/symbolize as 0, 0', 0' ', 0' ' ', etc, or as 0, (0)', ((0)')', (((0)')')', or some other symbolism from set theory, etc, "the (counting) numbers", and we assign them funny symbols and funnier names like "1" and "2" and "3" that are nothing but abbreviations.
Exactly what a 0 really is, and successor really is, is in the Formalism point of view defined only by their behavior relative to the axiom-set that we (humans) define (thus there can be alternate axiom-sets). A Platonist (Platonism) would say that these things, whatever they are, really, really exist somewhere in a non-material universe that somehow our minds do have access to, hence we "come to know them" or "discover" them in the other sense of "induction", i.e. scientific induction of repeated observation followed by theorizing -- This was Kurt Goedel's and Emil Post's belief.
One view is that these are the behavior of "objects" in an utterly-mechanical device, as directed by an utterly-mechanical "table of instructions". There are a couple wonderful models of this -- the Post-Turing machine, and the counter machine. The design and behavior of either one embodies the axioms. But in particular the zero and the successor have a few usually unspoken qualities that you allude to in your example of the 800th missing domino, or e.g. the dominoes are misplaced, or one is short, or two fall at once.
With regards to the "0": In the Post-Turing-machine model, each tape square is roughly the same size (although they don't have to be), and they are "contiguous in space" -- "located one after another" in a line so that either the observational "head" moves from side to side (room-to-room in Emil Post's model, square to square in Turing's model, left or right) or moves the tape left or right (a version of the Turing model). So this presupposes the existence of space and time and mass (contiguity, before and after, mechanical-thought-model). Any "blank square" is a location in space -- it is not nothingness; a blank square is not nothingness, it is a "thing" (matter, i.e. a blank sheet of paper is not nothingness). So "blank square currently being observed" might be symbolized/described by the cypher "0" and condition "blank", but this "0" aka "blank" is not nothingness, it is "blank square". In the same way, in the counter model, "empty hole labelled A" might be symbolized by "[A] = 0" ("contents of hole labelled 'A' is empty"), but this again is not a condition of nothingness (e.g. an empty glass is a thing).
With regards "the successor": We usually "idealize" the notion into a ultra-precise, measured "unit", what we think of as a "discrete" object (mass-length, or mass-volume, time-duration), such that all "1"s are "the same size", the "same shape", and have the same "mathematical power", and there are as many of them as we need (we can always create more if we need them). For the models, what's important is that the machine always moves its tape left and right one (discrete, unit) square at a time (Turing model, or the worker moves left or right one room at a time). Thus the Turing model presupposes/embodies the notion of "discreteness" and the notion of the "successor/predecessor", and it either marks or erases only the "observed" square only one mark per square. (Similarly for the Post model of a worker moving from room to room marking (only one mark allowed) or erasing a sheet of paper). And in the counter-machine model, each stone thrown into a hole is roughly the same size (needn't be, though), but it is not an amorphous blob that splits when it hits the bottom of the hole, nor does the impact split a stone already in the hole, nor does a single stone fills up the entire hole (we would have to dig the hole deeper if this happens -- the hole is "infinitely extensible" as is "the tape" or "the rooms"). So there are a lot of tacit assumptions about discreteness and emptiness and mechanical locations in space and behavior in time.
This may seem to have devolved into philosophy, engineering, and/or physics. But at least one well-respected mathematician has his doubts about the true-in-this-universe existence of "the infinitesimal" (infinitely divisible). Here's Gregory Chaitin commenting on discrete versus continuous:
"But I'm a mathematician, and each time I would wonder if Nature wasn't really trying to tell something, namely that real [he means numbers such as pi with an apparent infinite number of decimal places] and continuity are a sham, and that infinitesimally small distances do not exist!. . . . So, you see, there are lots of reasons to suspect that we might live in a digital universe . . . I'd like to continue in Zeno's footsteps as well as Rolf's and argue that a number with infinite precision, a so called real number, is actually rather unreal."(boldface in original, pages 92-93: Gregory Chaitin 2005 Meta Math! The Quest for Omega, Pantheon Books, NY, ISBN 0-375-42313-3.)
Hope this helps. Bill Wvbailey (talk) 17:12, 19 May 2010 (UTC)[reply]

Where do I go wrong?

Suppose I prove statement (S)omething to be true for all integers n by showing

  • S is true for n = 0,
  • S is true for n + 1 assuming it is true for n,

and then go on to show that the assumption that there is an integer k for which S is not true leads to a contradiction. [In particular, there would be a least such integer which cannot be 0 or, say, 778.]

I claim that I have proved statement S to be true for all integers. I claim too that I have not used the Axiom of Induction. I'm quite sure about this.

What I am not so sure about is why the Axiom of Induction is just that, i.e. an axiom, not a theorem. A naive way to "prove" the Axiom of Induction would be to say that every proof using induction can be converted to the type above, hence you can use induction at will (saves a few lines of typing), hence the Axiom of Induction is true.

I suspect that one error is the claim that EVERY statement provable by induction is provable by the above method, not only those seen by mankind so far.

Perhaps the answer to my question is in this talk page already (or in the article); I haven't read everything yet. YohanN7 (talk) 12:42, 11 July 2010 (UTC)[reply]

Apparently, the Axiom of Induction (when stated formally) cannot be proven from e.g. the Axioms of ZFC. (I suppose this has to do with first order logic, second order logic, etc. I have no such background, unfortunately.) My question (the naive version) remains. YohanN7 (talk) 12:53, 11 July 2010 (UTC)[reply]

Hmm... In [peano axioms] it says "it is not possible in first-order logic to say that any set of natural numbers containing 0 and closed under successor is the entire set of natural numbers."

Ok, so I go wrong there too! What's wrong with the contradiction-type proof then? I.e. assuming that the set of integers for which S doesn't hold is nonempty, using well-ordering, reaching a contradiction, and concluding that S holds for all of the natural numbers. YohanN7 (talk) 13:21, 11 July 2010 (UTC)[reply]

The axiom of induction (more precisely, each instance of it) is provable in ZFC. It isn't provable in PA minus induction. --Trovatore (talk) 21:55, 11 July 2010 (UTC)[reply]
That clarifies matters. It's kind of natural too that the PA article (which is linked in this article in the section stating the Axiom of Induction) assumes PA and nothing else.
However, in the present article I can't see any default context. "Mathematical Induction" can be PA by default (because it's about natural numbers) or it can be ZFC (because it's within "mathematics", usually defaulting to ZFC). How about adding something like this:
The Axiom of Induction is a provable statement in first-order logic in some systems larger than PA, for instance, in Zermelo-Fraenkel Set Theory. YohanN7 (talk) 09:00, 12 July 2010 (UTC)[reply]

I feel pretty stupid. The whole thing is there. I managed to completely miss the section "Proof or reformulation of mathematical induction". Sorry about that. YohanN7 (talk) 09:26, 12 July 2010 (UTC)[reply]

Proofs, and the terms "well defined", "well-ordered" and "well-founded"

In the "Proof or reformulation of mathematical induction" section, there are two proofs of simple induction. The first one is a very trivial one showing that simple induction follows from transfinite induction. Do we need that? We might need that, but as it stands now it purports itself to be an alternative to the actual proof squeezing induction out of the axiom of choice. Perhaps it would be better off in another section? Also, the real proof (apart from being in need of some math-formatting) uses the sentence "Hence S is well defined" which I guess is a typo for either "well-ordered" or "well-founded" – the way it's worded here we do need that < is a (strict) well-ordering, but if worded differently we would only need that it's a well-founded relation. And in any case, we should probably not say that the set itself is a "well-founded set", as we do in the "Transfinite induction" section, since (according to Well-founded relation) the term "well-founded set" has another meaning. 85.226.207.108 (talk) 10:28, 12 July 2010 (UTC)[reply]

Domino vs. Ladder

The Description section currently says,

"It may be helpful to think of the domino effect; if one is presented with a long row of dominoes standing on end, one can be sure that:

  1. The first domino will fall
  2. Whenever a domino falls, its next neighbor will also fall,

so it is concluded that all of the dominoes will fall, and that this fact is inevitable."

I dislike this for several reasons. First, the row of dominoes is "long" but finite, while induction is used for infinitely many cases; this idea should be included even in informal descriptions. Second, in physically working with dominoes, they hang up a lot and are generally quite imperfect, which might have been a source of confusion in the above section on this talk page. Third, you don't usually stop dominoes at a particular position.

I find dominoes less intuitive to describe induction than a ladder analogy. Take an infinitely tall ladder. Suppose you can stand on the first rung, and you also possess the ability to go from one rung to the next. Inductively, you can stand on any rung. Ladders don't suffer from my complaints about dominoes, either. Thoughts? 67.158.43.41 (talk) 06:48, 4 January 2011 (UTC)[reply]

There is some merit to your complaint, but your suggestion may have a similar problem. As I read this, the image that popped into my mind was a tricky set of dominoes, each equipped with an automatic reset so a little while after it falls, it will pop up again. (Not off-topic: where I'm going with this is RE an attempt to use induction in a proof of the Collatz Conjecture). We arrange the dominoes so that the pattern eventually comes to a circle, or in a special figure eight, etc. i.e. any closed shape. So once you start them going they will do their linear domino thing, but depending on your pattern, they may (or may not) enter into a loop that goes 'round and 'round, ad infinitum. RE the Collatz Conjecture, if you pick the right coefficients, the sequence of generated numbers displays similar mysterious behavior. You start with any "seed" number and then crank the formulas. The resulting sequence of numbers descends toward 1, maybe for quite a few iterations, but then the numbers start up again, and then down again (they may have done this a number of times) but sometimes (not always!) they enter a loop and never get to 1 (worse yet, for a given pair of coefficients, there may be more than one loop a sequence can fall into). Thus an application of induction in a generalized (dis-) proof of the Collatz Conjecture would seem likely to fail us. Even a ladder, infinitely long, seems to have a similar problem: while it may be infinitely long, it may eventually go 'round and 'round, somewhere "out there". And if it's a really big circle, without the benefit of gravity, you won't know you're in a loop until you came back to a "junction" . . . and if, like a wagon-horse of yore, you're wearing blinders and you can only see the next ladder rung ahead, you won't know you're in a circle: you may be going 'round and 'round ad infinitum, but you're stuck in a circle. Bill Wvbailey (talk) 15:52, 4 January 2011 (UTC)[reply]
I'm familiar with the Collatz conjecture, but I didn't follow your reasoning. It seems a typical inductive proof of that conjecture would use strong induction to assume the conjecture is true for starting values 1, ..., n. With a ladder, that would be equivalent to possessing the ability to reach rungs 1 through n. I don't understand how closed loops fit into this analogy. The key ingredient to the proof would be showing you can reach rung n+1, i.e. that the conjecture is true for n+1. (I do understand there are "loops" in the trajectories of some starting values to the Collatz function, e.g. starting at 4, but that seems immaterial.) It would seem to be very strange for a ladder to form a closed loop, while having dominoes do so is actually pretty reasonable and seems to be another argument against dominoes and for ladders, since a closed loop analogy does not capture the essence of induction. 67.158.43.41 (talk) 13:35, 9 January 2011 (UTC)[reply]

Better Parsing for Axiom of Induction?

Sorry, I think I put this in the wrong place before. Never edited on a talk page before.

Hi, I don't generally post comments or edit Wikipedia, but after scrutinizing this article's formal symbolization of the arithmetic principle of induction, I would like to offer some criticisms and then suggest some changes. As such, I think that the axiom of induction as stated here in formal predicate logic is either incorrect or highly misleading. The problems I see are as follows: (1) generally one does not put parentheses around quantification. In fact, I should be more blunt: no one puts parentheses around quantifiers themselves. I.e., the parentheses around the quantifiers are not necessary and are confusing. This leads to a lot of parsing difficulties, especially in the first antecedent. (2) Although it is acceptable, in modern logical notation it is not very common nowadays to put parentheses around the bound variables attached to a predicate. I.e., P(k) looks quaint to put it mildly. Where parentheses are necessary, as in the case of P(k-1), they should be adopted, but otherwise it is common practice to leave them off. (3) The use of square brackets as they are currently written only adds to the confusion. To be blunt again, it looks as though whoever wrote the formula was not aware of the fact that a quantifier like ∀k needs to be bracketed if it is within the scope of another operator (like implication in this case). (4) Although I don't think it is necessary, it is also common practice to write a 2nd-order variable, written here as "P", as the capitalized Greek letter Phi. I will not however add that change here, but I do highly suggest considering it.

Given the above, I suggest the following correction:

    ∀P [ (P(0) ∧ ∀k ∈ ℕ (P(k) → P(k+1))) → ∀n ∈ ℕ P(n) ]

If we wanted to be very persnickety about brackets we would also add them to each of the conjuncts in the antecedent of the primary conditional. I left that out however, since further brackets would probably do more harm than good, and no ambiguity presents itself as is. As such, I believe that this version above all parses much easier. It allows one to immediately determine the primary operator (the 2nd-order quantifier) because of the large square brackets. Furthermore, one can immediately see the internal implication whose antecedent is the conditions that one needs to prove in an inductive proof, and whose consequent is the desired conclusion that the property in question holds for all natural numbers.

Let me know what you think, --æntity 07:41, 11 February 2011 (UTC)

The above doesn't look correct. Nor does the exposition in the article -- we're missing some formalization (it isn't very hard, really). [This isn't supposed to be 2nd order induction is it? See more below re the treatment in Boolos-Burgess-Jeffrey 2002 re 2nd order induction]. Frankly, I'd recommend/prefer we stick with some formalization which we can cite and go back to, rather than trying to invent some nomenclature. For example, in his chapter on formal number theory, Kleene 1967 writes first-order induction in a highly formalized manner, this way (notice that there's no lead "for all A"):
"For Axiom schema 13, x is any variable, A(x) is any formula, and A(0), A(x’) are the results of substitution 0, x’ for the free occurrences of x in A(x). (These substitutions are automatically free)" (p. 206).
13. A(0) & ∀x(A(x) ⊃ A(x’)) ⊃ A(x)
He has previously allowed only concatenations of the following symbols ~, ⊃, &, ⋁, ¬, ∀x, =, +, ∙, ', 0, a, b, c , . . . , |, (, ), where 0 is defined as a "term" together with the variables a, b, c, . . . [Here follows a recursive definition of (r)', (r)+(s), (r)∙(s)]. The three dots . . . are an abbreviation for "and so forth", the commas and any periods are also punctuation marks (but not the dot, which is used by Kleene for "multiplication"). The strange | is a subscript to be applied to variables.
In Boolos-Burgess-Jeffrey 2002:214 we see almost the same treatment with the use of an arrow-operator in place of the horseshoe, but notice some subtleties (again, we'd need to study his symbol-choice and syntax):
(A(0) & ∀x(A(x) → A(x’))) → ∀xA(x)
RE B-B-J 2002:283 treatment of 2nd order logic, we see this formulation (I didn't look at their syntax, I just cc'd their formula)
∀X((X0 & ∀x(Xx) → Xx’)) → ∀xXx)
The point there is, to do the job right, you have to establish the symbols and their syntax at the outset (at least give the reader some notion of what the syntax is.) And then give an interpretation. As noted in Kleene 1967:207, the intended interpretation of 0, 0', 0' ', 0' ' ', . . . is the counting numbers N, and ' is intended to be interpreted as +1, etc, etc. Bill Wvbailey (talk) 19:10, 11 February 2011 (UTC)[reply]
Thanks for the quick response! I definitely agree that we should stick with a formalization that we can cite, and BBJ is definitely the place for authority on the matter. Also, just curious, are you using the 4th or 5th edition? Right now I only have the 4th edition on hand, but can check the 5th tomorrow or the day after to see if there are any differences/changes.
Fourth edition. Caution is advised re this edition: on page 43, "The Productivity Function", there are some bad errors (j should be k). I checked back and these crept in because BBJ changed this treatment from the 1st-3rd editions (BB w/o J) but kept their original drawings. The BBJ treatment is inferior and I can't for the life of me see why they changed it. The errata are so bad I began to keep a list (I was up to 8 not including this one). Plus the sources are lousy. I hope the 5th edition cleans the messes up.
Anyway, the formulation as I gave it is correct --- its syntax does mirror the BBJ formulation on 283, which you did not write correctly. On BBJ 4th edition 2002:283, the formula for induction is not ∀X((X0 & ∀x(Xx) → Xx’)) → ∀xXx), as you wrote, but rather appears as ∀X((X0 & ∀x(Xx → Xx’)) → ∀xXx). I believe you mistakenly added a parenthesis to the right of the 2nd x (parsing from left to right).
You're correct, I added an extra ).
Nevertheless, the logical syntax of a mathematical (arithmetic) induction is represented by a conditional with an antecedent that contains TWO conditions: first the so-called base clause (the left conjunct), which is that the property in question holds for the numeral 0. The second condition is a conditional (the right conjunct), which states that if the property in question holds for a given value, then it also holds for that value + 1. If one succeeds in demonstrating both of these facts (i.e., both the left and right conjunct), then one can detach the consequent (via modus ponens for example), and conclude that the property in question holds for ALL (natural) numbers. Thus the logic that mirrors this piece of reasoning will look like this: [A & (B -> C)] -> [D]
Both the formulation I initially gave and the BBJ formulation's syntax does mirror that form. Notice also that my complaints about the formula currently being displayed are not present in the BBJ formulation --- the quantifiers do NOT have parentheses around them, the variables don't have parentheses around them, etc. --- as is customary with modern logical notation.
Yeah I wondered about the missing parentheses. I'd have preferred the parentheses myself. I can't find in my sources an agreed-upon treatment of 2nd order logic. Here's some examples of 2nd order language (to express mathematical induction examples) from Rogers 1987:385 (reprint of 1967). Here we're seeing the mix of brackets and parentheses:
(∀Â)[[0 ∈ Â & (∀a)[a ∈ Â ⇒ a+1 ∈ Â]] ⇒ (∀a)[a ∈ Â]]
(∀p)[[p(0)=1 & (∀a)[p(a) =1 ⇒ p(a+1) = 1]] ⇒ (∀a)[p(a) = 1]]
As for exact syntactic differences between the BBJ formulation and my suggested one, the one that I gave just specifies the domain being used for the first-order universal quantification (i.e., k is an element of the natural numbers, etc.). I wanted to change the formalization as little as possible from the original and just clarify its syntax (mostly getting rid of excessive parentheses) in order to make parsing easier. Compare the BBJ formulation with the one currently being displayed and you should notice the difference.
But what's going on here needs to be clearly spelled out for the reader. Essentially, you're taking a formal axiom (just a string of symbols) and giving it an "arithmetic interpretation" when you use N (the set of counting numbers and that very special symbol 0) (see the quote below from Hamilton 1978). If it were up to me, I'd stick with the successor (every book that discusses this -- Hamilton 1978, Robbin 1997, BBJ, Crossley et. al. 1972, Kleene 1978, Rogers 1978). See my suggestions at the end re a very-formal expression of the axiom and then recasting it into the "arithmetic interpretation".
As for arithmetic induction (or otherwise), as I understand it one either needs 2nd-order logic to express it, or an axiom schema. Regardless, again the displayed formula was already in 2nd-order logic, and is glossed as such in the article. Contextually then, it seems a moot point whether we should specify the formula in 2nd-order logic, or as an axiom schema, etc. Or rather, that is a different discussion altogether, and could simply be settled by providing two separate logical specifications --- i.e., one in 2nd-order logic and one with axiom schemas.
My understanding is the same. I make a proposal at the end. We need 2nd order logic to express Peano's full induction. If we restrict ourselves to 1st order logic/language then we have to use an induction schema. Hamilton's treatment of this is wonderfully clear (at least I can understand what's going on!):
"...because in [the arithmetic interpretation] N [cf p. 61] we are restricted to the use of the first order language LN, the axiom [Hamilton writes his (N7) as (A(0) & ((∀xsub>1(A(xsub>1)) → A(xsub>1’))) → ∀xsub>1A(xsub>1))] cannot be as strong or as inclusive Peano's fifth postulate. The reason is that Peano's fifth postulate contains a second order quantifier 'for any set A of natural numbers', which cannot be expressed in our first order language. The best we can do is use the notion of axiom scheme, so that we effectively have a quantifier in 'for every wf. A(x1) in which xsub>1 occurs free'. Note that such a wf. A(x1) determines a set in any interpretation . . . If we think in the context of a model of N, therefore, each instance of the axiom scheme (N7) corresponds to one particular set. However, there is still an essential difference. The instances of axiom scheme (N7) form a countable set of wfs. of LN. Peano's fifth postulate is a statement about all sets of natural numbers, and the collection of all these is uncountable. Thus (N7) is a much restricted form of the Induction Principle . . .."(p. 114 of A. G. Hamilton 1978 Logic For Mathematicians, Cambridge University Press, Cambrdige UK, ISBN: 0 521 21838.)
Returning to the point however, and agreeing with your initial suggestion that we should have something we can cite, I think adoption of the BBJ formulation is probably the smartest move at this juncture. --æntity 02:52, 12 February 2011 (UTC) — Preceding unsigned comment added by Aentity (talkcontribs)
I think I've lost perspective, having looked down into a can of worms (or into Pandora's box). After a bit of a survey . . . I'd say go with whatever source you think is right and good (as in formally correct). I'm now left with the uncomfortable feeling that the article is pretty deficient. It should be casting Peano's original axiom (2nd order) as both the original second order axiom (using a formal set of symbols including the successor symbol) and then casting it into its limited form in the first order language via a "schema" (also using a formal set of symbols including the successor symbol). It then should adopt an "arithmetic interpretation" and recast both into the form (using numbers) similar to what you're proposing. In other words there'd be four expressions: two axiom-expressions and two first-order axiom schema-expressions. Bill Wvbailey (talk) 16:57, 12 February 2011 (UTC)[reply]
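
To make that proposal concrete, roughly the first pair of those expressions might look as follows before any arithmetic interpretation is imposed (my own sketch, offered as a starting point rather than a sourced formulation; the contrast is the same one drawn in the Hamilton quote above):

    Second-order axiom (a single sentence):
      $\forall P\,[\,(P(0)\,\wedge\,\forall x\,(P(x)\to P(x')))\;\to\;\forall x\,P(x)\,]$
    First-order axiom schema (one axiom for each wf $A(x)$ of the language):
      $(A(0)\,\wedge\,\forall x\,(A(x)\to A(x')))\;\to\;\forall x\,A(x)$
    Under the arithmetic interpretation, $0$ is read as zero and $x'$ as $x+1$, giving the corresponding statements about $\mathbb{N}$.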

Deductive and not inductive

Dear Wikipedia contributors,

receive a respectful greeting. I would like to know whether any of you can add information about why mathematical induction is a form of deductive reasoning and not an inductive one. Some clear explanation and supporting sources would be just perfect. Thank you so much in advance! George Rodney Maruri Game (talk) 21:26, 17 March 2011 (UTC)[reply]

I made this 5 years ago in Bahasa Indonesia http://www.oocities.org/wisnuops/MyDocs/InduksiMatematika_BenarkahAda.pdf Hehehe.... But I also found this http://www.earlham.edu/~peters/courses/logsys/math-ind.htm in English Wisnuops (talk) 05:14, 26 March 2011 (UTC)[reply]


So kind from you. Thank you very much!
George Rodney Maruri Game (talk) 21:18, 28 April 2011 (UTC)[reply]

Equivalent Formulations

It appears to me that the principle of induction is equivalent to the statement that ℕ is the minimal element of the collection of all sets satisfying the other Peano Axioms, ordered by inclusion. (That is, the first axioms detail a list of elements which are in ℕ, and the induction principle simply states that there are no others). I'm fairly certain that I have pretty simple proofs of the equivalence of these statements, but I'm skeptical because I don't see it formulated in that way in any reliable sources. Can anyone attest to whether or not these statements are equivalent? 141.213.154.122 (talk) 16:15, 1 August 2011 (UTC)[reply]

Paragraph of Complete Induction

Can anyone check if there is a mistake in the following paragraph quoted from the Complete Induction part of the article.

"This generalization, complete induction, can be derived from the ordinary mathematical induction described above. Suppose P(n) is the statement that we intend to prove by complete induction. Let Q(n) mean P(m) holds for all m such that 0 ≤ m ≤ n. Apply mathematical induction to Q(n). Since Q(0) is just P(0), we have the base case. Now suppose Q(n) is given and we wish to show Q(n+1). Notice that Q(n) is the same as P(0) and P(1) and ... and P(n). The hypothesis of complete induction tells us that this implies P(n+1). If we add P(n+1) to Q(n), we get P(0) and P(1) and ... and P(n) and P(n+1), which is just Q(n+1). So using mathematical induction, we get that Q(n) holds for all natural numbers n. But Q(n) implies P(n), so we have the conclusion of strong induction, namely that P(n) holds for all natural numbers n."

From what I read, I believe that it is the converse that has been shown: if statement P(n) can be proved for all n ≥ n₀ by complete induction, it can be proved by ordinary induction. — Preceding unsigned comment added by Ddanndt (talkcontribs) 16:15, 22 September 2011 (UTC)[reply]
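
For what it is worth, the quoted argument can be written out schematically as follows (a sketch of the paragraph's own reasoning, not a new source); it applies ordinary induction to Q in order to reach the complete-induction conclusion for P:

    Let $Q(n)$ abbreviate $\forall m\,(0\le m\le n\to P(m))$.
    Base: $Q(0)\leftrightarrow P(0)$.
    Step: assume $Q(n)$, i.e. $P(0)\wedge\cdots\wedge P(n)$; the hypothesis of complete induction yields $P(n+1)$,
    and together these give $Q(n+1)$.
    Ordinary induction applied to $Q$ gives $\forall n\,Q(n)$, and since $Q(n)\to P(n)$, also $\forall n\,P(n)$,
    which is exactly the conclusion that complete induction promises.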

Wikipedia doesn't do original research, and right now neither your position nor the current article has a source. Ian.thomson (talk) 19:27, 22 September 2011 (UTC)[reply]

Thanks Ian for your contribution duh !!! If you can read english, I'm just asking Wikipedians to come up with any quotes/info to confirm/refute any mistake made. I'm not asking for research. Thank you and stay off if you don't have any answer... — Preceding unsigned comment added by Ddanndt (talkcontribs) 20:17, 22 September 2011 (UTC)[reply]

We do bring out the banhammer for incivility, treat other editors (including me) with respect. I have given the answer as far as Wikipedia's policy on sticking to verifiable information is concerned. That part of the article does not cite a reliable source, your statement does not cite a source either. From the perspective of WP:V, both statements are original research, which is not accepted here. Ian.thomson (talk) 21:06, 22 September 2011 (UTC)[reply]

---

The question is valid. I could not find this word-usage in my Kleene 1952, so I did a Google search and it coughed up "course of values" induction as well as the example given here (yes: the example just "resets" the basis. There's nothing particularly profound in it, really, but see below . . . .) Here's the quote from Kleene 1952:22 relative to his definition of "course-of-values" induction (if someone doesn't know who Stephen Kleene is, for shame: Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam, NY, ISBN 0 7204 2103 9):
"Sometimes, in order to caryy through the induction step, it is necessary to assume as hypothesis of the induction, not simply P(n) but that P(m) for all m <,= n. The reader may satisfy himself that the induction principle is valid in this modification, called a course-of-values induction(Kleene 1952:22).
Altho I don't have a readily-available reference for what Bertrand Russell would have called "matrix induction" or "induction over a matrix", this notion can be applied with some benefit when the function, such as arithmetic addition Σ(m, n) or arithmetic multiplication Π(m, n), has two variables. The idea is to freeze m to a parameter p (say the number "3"), then prove that Σ(3, n) works for all n. Clearly this has "reset" the basis to Σ(3, 0) = 3, and then the induction on the single variable n goes on from there. Bill Wvbailey (talk) 23:17, 22 September 2011 (UTC)[reply]
A bit more on this: Here is Kleene's formal definition and proof verbatim [here ⊃ is logical "implies" cf p. 69, x' is the successor of x cf p. 70]:
"In the first schema *162a, the expressions A(0) and ∀x[∀y(y ≤ x ⊃ A(y)) ⊃ A(x')] formalize the basis and induction step, respectively; in the more compact form *162b, the two are brought together in the single expression ∀x[∀y(y < x ⊃ A(y)) ⊃ A(x)]:
" *162a. ⊢ A(0) & ∀x[∀y(y ≤ x ⊃ A(y)) ⊃ A(x')] ⊃ A(x)]].
" *162b. ⊢ ∀x[∀y(y < x ⊃ A(y)) ⊃ A(x)] ⊃ A(x)
(Course-of-values induction)
" PROOFS. *162a. Assume A(0) & ∀x[∀y(y ≤ x ⊃ A(y)) ⊃ A(x')], deduce ∀y(y ≤ x ⊃ A(y)) by simple induction on x, and infer A(x) by ∀-elim.
"Sometimes inductions require a double basis; i.e. we establish as the basis A(0) and A(1), and then for the induction step infer A(x' ') from the two preceding cases A(x) and A(x'). This can be treated formally as a course-of-values induction, using cases acording as x'=1 or x'>1 under the induction step, or we can make it into a simple induction by using A(x) & A(x') as the induction formula. This device of using a conjunction as the induction proposition applies similarly to induction from a k-fold basis for any fixed k ≥ 2, and also to the inductive proof of several propositions simultaneously'." (all quotes Kleene 1952:193)
There's quite a bit more to be found in Kleene cf the index p. 541: course-of-values: function 231, 291; induction 22, 193; recursion 231, 237, 236. Uses of it are sprinkled throughout the text. Bill Wvbailey (talk) 14:58, 23 September 2011 (UTC)[reply]
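(An aside, not from Kleene: the "double basis" pattern he describes is the one used, for example, to show that the Fibonacci numbers satisfy F(n) ≤ 2^n; two base cases F(0) and F(1), and a step inferring the claim for n+2 from the claims for n and n+1. A throwaway Python sanity check of that claim, function name mine, illustration only:)

  # Numerical check of F(n) <= 2^n, the kind of claim proved by induction
  # from a double basis (F(0), F(1)) plus a step that looks two values back.
  def fib(n):
      a, b = 0, 1              # F(0), F(1): the two base cases
      for _ in range(n):
          a, b = b, a + b      # each new value uses the two preceding ones
      return a

  assert all(fib(n) <= 2**n for n in range(200))   # holds for every n checked here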

Complete Induction, Part II: History

This is in response to a question by User:Jowa fan. I've found historical sources re the usage of the words "complete induction" and "course-of-values".

Complete induction appears in Richard Dedekind's The Nature and Meaning of Numbers, the text of which can be found either at www.books.google.com, or from an (identical) printed version:

Richard Dedekind, ca 1858 & 1887/1893, Essays on the Theory of Numbers: Continuity and Irrational Numbers, II. The Nature and Meaning of Numbers, translated by Wooster Woodruff Beman; Dover Publications, Inc., Mineola NY, 1963, republication of the Open Court Publishing Company version in 1901, ISBN 0-486-21010-3.

I actually looked at the German version written in German script and found the word vollständigen used where the English word "complete" is used (the translation apparently is "complete"). Plus the German Wikipedia has an article on Vollständige Induktion. The text of Dedekind of interest here is II. The Nature and Meaning of Numbers. Here's the intuitive synopsis leading to his theorems of "complete induction": In part VI Dedekind defines a "simply infinite" system N in what is now the orthodox method similar to the Peano axioms. This is based on the notion of a "transformation" ϕ of a system S, which is a "law" [function] so that any s contained in S is transformed into ϕ(s); these are abbreviated as s', for example. In para. 36 he defines the transformation of the system S into itself, i.e. ϕ(S) ℈ S. He defines a "chain" as K' ℈ K, i.e. ϕ(K) ℈ K, and his most-important Theorem 39: "The transform K' of a chain K is a chain", i.e. (K')' ℈ K'. Based on all this he expresses his two "Theorems of complete induction":

"Theorem 59. Theorem of complete induction. In order to show that the chain A0 is part of any system Σ -- be this later part of S or not -- it is sufficent to show,
"ρ. that A ℈ Σ, [i.e. A is "a part of" Σ]
"σ. that the transform of every common element of A0 and Σ is likewise element of Σ."
"the preceding theorem, as will be shon later, forms the scientific basis for the form of demonstration known by the name of complete induction (the inference from n to n+1; it can also be state in the following manner [here he uses the idea of a "property" and its inheritance altho he doesn't use that word, cf Russell 1919]. (Dedekind pages 60-62)
"Theorem 80. Theorem of complete induction (inference from n to n' ). In order to show that a theorm holds for all numbers n of a chain mo, it is sufficient to show,
"ρ. that it holds for n=m, and
"σ. that from the validity of the theorem for a number n of the chain mo, its validity for the following number n' always follows.
"This results immediately from the more general theorem (59) or (60). The most frequently occurring case is where m = 1 and therefore mo is the complte number-series N. [pages 80-81. "n' is a "transform" of element n: "the transform n' of a number n is also called the number following n (p. 69)]]

[Off-topic aside: Immediately the pathological example of the Collatz conjecture for the looping ϕ(N) = IF(ISODD(N), (5*N+1)/2, N/2) given basis N=5 leapt to mind (as opposed to the diverging ϕ(N) with basis N=7):

5, 13, 33, 83, 208, 104, 52, 26, 13, 33, 83, 208, 104, 52, repeating ad infinitum]
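(For anyone who wants to replay the aside above, here is a minimal Python sketch of that map; the function name is mine. Seed 7 is merely observed to keep growing within the iterations checked, which of course proves nothing about actual divergence.)

  # The "5N+1" variant mentioned above: odd N -> (5N+1)/2, even N -> N/2.
  def phi(n):
      return (5 * n + 1) // 2 if n % 2 else n // 2

  n, seen = 5, []
  while n not in seen:          # iterate from seed 5 until a value repeats
      seen.append(n)
      n = phi(n)
  print(seen, "... then repeats from", n)   # 5, 13, 33, 83, 208, 104, 52, 26, then 13 again

  n = 7
  for _ in range(50):           # seed 7: just watch it grow
      n = phi(n)
  print(n)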

Course-of-values induction: Moving right along: I found what apparently is the source of the "course-of-values" notion that I remember from Bertrand Russell. This notion comes to us via Russell 1903 from Gottlob Frege 1902:

Gottlob Frege, ca 1902 The Basic Laws of Arithmetic: Exposition of the System, Translated and Edited by Montgomery Furth, University of California Press, Berkeley CA, 1964, 1982 edition ISBN 0-520-04761-3

In Furth's introductory notes "Chapter 6: Function and course-of-values. Concept and Extension" he summarises the concept of "course-of-values" as follows:

"The notion of a "course-of-values" is fairly easily related to more recent notations as approximately a function-in-extension . . . Thus the name " ε' (ε + 7) " would denote a class containing such ordered couples as 0;7, 1;8, 2;9." (page xxxvii)

This definition is what I associate with Russell in Principia Mathematica as opposed to his 1903. In his 1903:245-246 he first summarizes Dedekind as I did above, mentions Dedekind's theorem 59, and calls this "the generalized form of mathematical induction". But his notion of "course-of-values" that he uses in PM in regards to his definition of a "matrix" and his axiom of reducibility, is that of Frege.

At least historically it's as if Kleene has confounded the notion of "course-of-values" aka "function-in-extension" with "generalized" or "complete" induction. Am not sure where to go from here. The fact that the German Wikipedia uses the Dedekindian "complete" induction indicates that this might be the preferred moniker. Bill Wvbailey (talk) 22:34, 4 December 2011 (UTC)[reply]

Thanks for following this up! I think what's currently in the article is based on a mistranslation. Certainly the German term Vollständige Induktion describes what's just called mathematical induction in English, and the German Wikipedia page Vollständige Induktion uses the name Induktion mit mehreren Vorgängern ("induction with multiple predecessors") to describe what's called "complete induction" on the page here. As for course-of-values, I'm now properly confused. I propose that we retain the term strong induction – I'm confident that there are many English language textbooks using that term, if a source is required – and remove the other names. (What I'd really like to see is a source for the proof that strong induction is equivalent to ordinary induction. Most books just state it without proof, and it's not hard to verify that it's true, but it would still be nice to have a reference for it.) Jowa fan (talk) 00:04, 5 December 2011 (UTC)[reply]

"Strong" moniker: Do you have any sources that actually use the moniker "strong induction". II did find in Sipser this very-unsatisfying little claim that "having the stronger induction hypothesis that P(j) is true for every ji is useful. The induction proof still works because, when we want to prove that P(i + 1) is true we have already proved that P(j) is true for every ji" (page 23). In the previous paragraph he has said that the basis can start at any arbitrary number "b": "In that case the induction proof shows that P(k) is true for every k that is at least b (page 23). So in this sense the added demonstration that P holds "all the way down" certainly seems 'stronger' than when we start at b and don't know what happens when k < b.

  • Michael Sipser 2006 Introduction to the Theory of Computation, Second Edition, Thomson Course Technology, Boston MA, ISBN-13: 978-0-534-95097-2.

My inquiring mind wants to know: is there a general principle by which we proceed downward from n to the basis? Why not just start at the basis and work up?

Course of values moniker: After more reading of Dedekind I'm thinking that maybe "course-of-values" is a better moniker than "complete", after all. For example, we might want to start at n and assert that it and all m < n satisfy property P. We might be able to verify that 0 satisfies property P, but do we know that n-3, or n-4, or some odd-ball m < n satisfies property P until we try it for every m between n and 0, too? Cranking through all the numbers m up to n is definitely a "course-of-values". Or is there a general way to prove it all the way down, by some way that may or may not yield to garden-variety induction? Won't there be situations when we have to "test all the cases all the way down" to be sure?

The "complete induction" moniker: As to what is going on with the "complete" moniker, after reading a bit more Dedekind I think you're correct that his two theorems (59) and (80) are actually "garden-variety" mathematical induction. Specifically, in order to get from (59) to (80) Dedekind first has to define "infinite" and "finite" systems (59), then prove that infinite systems exist (66), define the notion of "1" (modern method uses "0") in a "simply infinite system" N that has to satisfy 4 conditions (similar to the conditions that "0" satisfies in modern set theory (71)), define "1" as "the base number" and define the notion of n' as n' "following" n, prove that N is the only number-chain containing the base-number 1 (75-79) and lastly prove Theorem 80 in a single line.

Dedekind's proofs of (59) and (80) don't require the notion of ≤ that is required for B-B-J's "complete" induction (see below) and Kleene's "course-of-values" induction (on page 22 Kleene leaves this to the reader to verify). In fact Dedekind does not introduce the notion of "less than" until (89), as a definition after proving theorems (81-88). After looking over the rest of Dedekind I couldn't find what we are looking for. So I have to agree with you that there seems to be a misappropriation of the moniker "complete" from the German Vollständige Induktion (i.e. garden-variety mathematical induction); in fact "complete" would seem to mean that the induction starts at "1" (or "0") and "is complete" up through all the numbers between "0" and n.

A proof-sketch of "complete induction": (I hope I haven't misinterpreted the following from B-B-J). I suspect we're stuck with having to at least cite the moniker "complete induction"; my cc of B-B-J refers to it as "complete induction" on page 213:

  • George S. Boolos, John P. Burgess, Richard C. Jeffrey, 2002, Computability and Logic Fourth Edition, Cambridge University Press, Cambridge UK, ISBN 0-521-00758-5 (pb.),

Here's the B-B-J quote in full because it purports to provide a proof-sketch of how to derive "complete induction" from the garden-variety; note that it's assuming garden-variety induction as an axiom together with 2 of the 9 axioms of weak arithmetic system Q which it has to use:

"Closely related to the principle of mathematical induction as stated above is the prinicple of complete induction, according to which we can prove that every number has some property P by proving that zero has P, and proving that, assuming every number ≤x has P, then the successor of x also has P. Indeed, complete induction for a property P follows on applying mathematical induction to the related property 'every number ≤x has P ', using the facts (Q4) [ x + y' = (x + y)' ] and (Q5) [ x*0 = 0 ] that the only numbers ≤x' are the numbers ≤x and x' itself." (p. 213)

In the paragraphs before their "axioms of minimal arithmetic" Q, i.e. axioms Q(1)-Q(9) on pages 207-208, they note that this Q is "not strong enough to prove major theorems of number theory . . . [but] strong enough to prove all correct ∃-rudimentary sentences. . . . Our main theorems . . . apply to any theory T that contains Q, and since Q is weak, the theorems are correspondingly strong" (p. 207-208); they then graft their "mathematical induction schema" as an axiom onto the list to create their P or "Peano arithmetic", which is adequate for number theory.

Is this B-B-J proof-sketch what you're after? I don't find it satisfying at all (too handwavy/sketchy, too obscured by its requirements for the arithmetic axioms Q).
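(For comparison, here is one way the Q-trick behind that sketch can be spelled out schematically; this rendering is mine, not a quotation from B-B-J or any other source.)

  % Schematic derivation of complete induction from ordinary induction (my rendering).
  \[
     Q(x) \;:\Leftrightarrow\; \forall y\,\bigl(y \le x \rightarrow P(y)\bigr)
  \]
  % Base: Q(0) is just P(0), which the complete-induction hypothesis supplies.
  % Step: assume Q(x); the complete-induction hypothesis then gives P(x'), and since the
  %       only numbers \le x' are those \le x together with x' itself, Q(x') follows.
  % Ordinary induction then gives \forall x\, Q(x), and P(x) follows from Q(x) with y := x.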

Conclusions: We're going to have to footnote with sourcing all of these monikers, confusing as they may be. We have sources for two of them, so far. And either there's something I'm missing, or there's been a misappropriation of a moniker, as you propose. But we're stuck with the usage as it is, altho we can relegate the renegade to the footnote cellar. Bill Wvbailey (talk) 17:22, 5 December 2011 (UTC)[reply]

Yes, I'd like to see something like the B-B-J proof but expressed a little more clearly. I'm sure I've seen "strong induction" in several books, typically first-year undergraduate texts with "discrete mathematics" in their name, but would have to make a trip to the library to find a specific source. It's not likely to happen this week. It seems to me that the name "course-of-values" is archaic, unless you have a source published in the last 30 years that uses it. Jowa fan (talk) 00:32, 6 December 2011 (UTC)[reply]

--

Strong induction has modern usage: I typed strong induction into www.books.google.com and got a few hits. The book (below) explains the matter to my satisfaction (never mind the factoid that it prefers "complete" as opposed to "strong"):

" *Because this form of induction employs such a strong assumption, the Principle of Mathematical Induction is sometimes referred to as "weak induction" (footnote * on page 114 to the subsection titled "Principle of Complete Induction (PCI)" in subchapter 2.5 titled "Equivalent Forms of Induction")

The book is:

  • Douglas Smith, Maurice Eggen, Richard St. Andre, 2010, A Transition to Advanced Mathematics Seventh Edition, Brooks/Cole, Cengage Learning, Boston MA, ISBN-13: 978-0-495-56202.

I'd suggest that if we decide that "strong" is the way to go, a footnote needs to be added making the point that it is not "stronger" in the sense of "better"; its adoption actually weakens the case: the weaker the theory, the stronger a proved outcome, Occam's razor applying here. S-E-A give an example similar to the following (theirs being 1 + 3 + 5 + . . . + (2n-1) = n²) but I've simplified it to the one I first saw in about 11th grade but never really understood, 'til now:

1 + 2 + 3 + 4 + . . . + k = ?

Altho S-E-A are not clear about (in general) how a hypothesized formula comes about (magical thinking not allowed), in this instance I derived it by graphical inspection of a plot of little squares (it looks like a triangle; I applied the formula 1/2*b*h and adjusted for the k/2 extra squares, giving k*k/2 + k/2, which yields the simpler form):

1 + 2 + 3 + 4 + . . . + k = k*(k+1)/2

So let's hypothesize that for any n, the summation will yield n*(n+1)/2. We start our induction with what we know to be true, i.e. at k: k*(k+1)/2, k being a very finite number (like 5, yielding the sum=15). Then we add k+1 to it (sum = 21, and we see that indeed 6*7/2 = 21). So we do this with algebra:

k*(k+1)/2 + (k+1) = (k + 1)*(k/2 + 1) = (k + 1)*(k + 2)/2 = (k+1)*((k+1) + 1)/2

To get to our hypothesized equation we set n = k+1 in (k+1)*((k+1)+1)/2; indeed this yields our hypothesized formula:

n*(n + 1)/2. Q.E.D.
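(A throwaway numerical check of both the closed form and the algebraic step above, in Python; illustration only, not from S-E-A.)

  # Check 1 + 2 + ... + k = k(k+1)/2, and the induction step's algebra, for small k.
  for k in range(1, 1000):
      assert sum(range(1, k + 1)) == k * (k + 1) // 2                  # hypothesized formula
      assert k * (k + 1) // 2 + (k + 1) == (k + 1) * (k + 2) // 2      # the step: add k+1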

Addendum: I found a source for the use of inductive reasoning (i.e. from specific cases to a probable general case) such as the method described immediately above, in Howard Eves 1990 Foundations and Fundamental Concepts of Mathematics: Third Edition, Dover Publications, Inc, Mineola, NY, ISBN: 0-486-69609-X (pbk.):

"In such reasoning [that the sun will rise tomorrow], the conclusion is not one that must follow but is only one that probably follows. Reasoning of this sort is clearly inadmissible in a logical development. Induction in the usual sense often is employed in order to guess a general proposition P(n) in the first place -- that is P(n) may be conjectured from observing several instances, like P(1), P(2), P(3), of P(n) -- but the rigid establishment of P(n) by the so-called priniciple of mathematical induction . . . is, of course, a purely deductive procedure" (page 189).

Eves doesn't torture us with names for the three alternate types of mathematical induction he describes: the first one is "the principle of the smallest number", the second one is "a second principle of mathematical induction" and the third is "a third principle of mathematical induction" (!); both of the latter forms are similar to the "strong" form under discussion. He offers proofs of both by means of employing his Theorem 17 "principle of the smallest number". Eves's treatment begins with his "postulate of finite induction", the 10th of 10: "(Postulate 10) If M is a set of natural numbers such that (1) M contains the natural number 1, (2) M contains the natural number k+1 whenever it contains the natural number k, then M contains all the natural numbers" (p. 184). From this he proves his "Theorem 1: the principle of mathematical induction". (Alternately he uses a Peano axiom set (p. 191) with the same "postulate of finite induction".) BillWvbailey (talk) 22:34, 11 December 2011 (UTC)[reply]

Conclusion: I'm okay with "strong" as long as it's discussed that this doesn't mean strong is better. I like the example. I could do a drawing of it to make the point; I think high-school kids do encounter this particular example. Will continue to work on the "course of values" issue. Bill Wvbailey (talk) 21:35, 6 December 2011 (UTC)[reply]

--

Course-of-values induction has modern usage: www.books.google.com gives 5790 hits on [in-quotations] "course-of-values induction", and 13900 hits for "complete induction", mathematics. The latter qualification is necessary because "complete induction" has to do with hypnosis, too. Here's what Causey 2006 has to say about "course-of-values induction":

"This principle is often called the "principle of strong induction" or the "principle of complete induction". However, it is no "stronger" than the original IP [Induction Principle], as will soon be proved. Also, the term "complete" does not seem to have sufficient mnemonic power here. I prefer the name, "course of values induction." When using CVI, one makes the induction hypothesis that φ(i) is true for all i < k, and then one tries to prove φ(k). This is equivalent to assuming
φ(0) & φ(1) & . . . & φ(k-1).
"From this "course-of-values" one then tries to prove φ(k). If this is accomplished then one infers that for all n ε Nat, φ(n)." (p. 303)

Causey then provides a reductio proof using well-ordering. I don't like this proof nor his discussion; for the reason, see the Collatz example, below.

  • Robert L. Causey, 2006, Logic, Sets and Recursion Second Edition, Jones and Bartlett Publishers Inc, Sudbury MA, ISBN-13: 978-0-7637-3784-9.

Another interesting application of "course-of-values induction" is John V. Tucker and Jeffrey L. Zucker, 1988, "Program Correctness over Abstract Data Types, with Error semantics", Chapter 4.5 "Course-of-values Induction" (p. 155). Also McNaughton 1993, Huth and Ryan 2006, etc. etc.

Misapplication of complete/strong/course-of-values Induction: This form of induction seems very susceptible to misuse. Causey hints at it (with his sequence of logical &, above) but doesn't drive the point home. Somehow, every case from 0 (or 1) to k-1 must be demonstrated as true, perhaps by use of an exhaustive course-of-values, extensional, one-by-one demonstration. If not, mistakes will be made. Here's an example from the Collatz conjecture using -3 as coefficient, i.e. ϕ(N) = IF(ISODD(N), (-3*N+1)/2, N/2). Suppose someone were to (naively) assert that for N ≤ 15 the sequence always arrives at 1. Maybe they tried N = 1, 2, 3, 4, 5, 6, 7 and decided that was a good enough demonstration. Then they demonstrate that the sequence does arrive at 1 with seed 15 (as indeed it does). And they derive from this that it arrives at 1 with seeds N=16 and N=17 (i.e. the cases k+1, k+2; as indeed it does). From this they then conclude that the sequence always converges for any N. Well they would be quite wrong. For the singular, oddball case N=13, the only N between 1 and 15 inclusive (actually the only bad actor between 1 and 22 inclusive), it does not converge; it gets into a loop (also a loop when N=23). Russell 1903 mentions some misuses of mathematical induction (p. 192, p. 196).
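(Reading the map above as ϕ(N) = (-3·N+1)/2 for odd N and N/2 for even N, which is my best guess at the formula intended, here is a quick Python check that seed 13 is indeed the only seed from 1 through 22 that never reaches 1; function names mine, illustration only.)

  # The "-3" variant as read above: odd N -> (-3N+1)/2, even N -> N/2 (negatives allowed).
  def phi(n):
      return (-3 * n + 1) // 2 if n % 2 else n // 2   # exact: -3n+1 is even when n is odd

  def reaches_one(seed, limit=1000):
      n, seen = seed, set()
      for _ in range(limit):
          if n == 1:
              return True
          if n in seen:            # entered a cycle that does not contain 1
              return False
          seen.add(n)
          n = phi(n)
      return False

  print([s for s in range(1, 23) if not reaches_one(s)])   # prints [13]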

Conclusion: Causey demonstrates the confusion I've had over the use of "strong", even responding to the word as I did: "Why should it be strong as in 'more powerful'?" Unfortunately we'll have to respect these various monikers; there doesn't appear to be a "best" one. BillWvbailey (talk) 17:54, 7 December 2011 (UTC)[reply]

Paragraph of Complete Induction (from RfC request board)

The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.


Can anyone check if there is a mistake in the following paragraph quoted from the Complete Induction part of the article.

"This generalization, complete induction, can be derived from the ordinary mathematical induction described above. Suppose P(n) is the statement that we intend to prove by complete induction. Let Q(n) mean P(m) holds for all m such that 0 ≤ m ≤ n. Apply mathematical induction to Q(n). Since Q(0) is just P(0), we have the base case. Now suppose Q(n) is given and we wish to show Q(n+1). Notice that Q(n) is the same as P(0) and P(1) and ... and P(n). The hypothesis of complete induction tells us that this implies P(n+1). If we add P(n+1) to Q(n), we get P(0) and P(1) and ... and P(n) and P(n+1), which is just Q(n+1). So using mathematical induction, we get that Q(n) holds for all natural numbers n. But Q(n) implies P(n), so we have the conclusion of strong induction, namely that P(n) holds for all natural numbers n."

From what I read, I believe that it is the converse that has been shown: if statement P(n) can be proved for all n ≥ n0 by complete induction, it can be proved by ordinary induction. — Preceding unsigned comment added by Ddanndt (talkcontribs)

In the future, please state which article you're referring to. I can only guess that you're talking about the article Mathematical induction because that's the only other place you've edited. Wikipedia is supposed to go off of sources, not original research. Right now, neither your position nor the current article has a source saying anything either way. Ian.thomson (talk) 19:27, 22 September 2011 (UTC)[reply]

Then remove it if its veracity can't be confirmed??? Ddanndt (talk) 20:23, 22 September 2011 (UTC)[reply]

Ideally, it would be better to find a reliable source which clarifies which is the case. A college textbook or Google Books may have something. Ian.thomson (talk) 21:09, 22 September 2011 (UTC)[reply]

Thread moved by User:Coastside on 08:47, 15 June 2012 (UTC) from WP:Requests_for_comment/Request_board. If the issue is still open, please consider:[reply]

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Coastside (talk) 08:47, 15 June 2012 (UTC)[reply]

Natural numbers only?

To the best of my knowledge, I thought mathematical induction applied to all real numbers. Is there a specific type of mathematical proof that involves the entire set of real numbers (maybe even imaginary)? Eddievhfan1984 (talk) 01:22, 17 April 2013 (UTC)[reply]

Explain relation between Peano's induction axiom and exclusion of nonstandard models in article

Shows an infinite chain of (light wood) domino pieces and a circle of (dark wood) pieces. If the first light piece is overthrown, each light piece will eventually fall, while no dark piece will be affected. The shown configuration illustrates a model of Peano's axioms for natural numbers, except for the induction axiom. The latter requires all pieces to fall if the first one is overthrown.
Nonminimal model of natural number axioms, except for induction axiom

I'd like to suggest explaining in the article the need for the induction axiom to exclude nonstandard models, as already discussed above. There could be a second domino picture (again by User:Pokipsy76, who provided the -great- first one?), showing a model with junk elements; see the image for an ugly sketch of the domino footprints (adding a single junk element that is its own successor, as suggested by User:Lambiam 22:18, 26 February 2008, is more difficult to illustrate).

If the first domino falls, then

  • in the standard model, all dominoes will fall, as explained in the article;
  • in the nonstandard model, the extra ("junk") dominoes won't fall.

Moreover, the induction axiom holds in a model if, and only if, the model doesn't contain any "junk", i.e. elements unreachable from 0 by successive incrementation. To rule out nonstandard models containing "junk" is the main purpose of the induction axiom.

Jochen Burghardt (talk) 20:56, 21 May 2013 (UTC)[reply]

I've managed to produce the suggested image myself (see above). If there are no objections, I'd like to insert it with appropriate text (the current caption text may not be optimal) into the article. - Jochen Burghardt (talk) 17:01, 30 October 2013 (UTC)[reply]
Well, there's some good stuff here if the text is worded carefully.
But I see a couple of problems with your proposed additions in the way that you have worded it. First of all, I'm not particularly favorable towards identifying a model-theoretic consideration as the "main purpose" of an axiom. For someone who takes it for granted that the natural numbers are a real, unique structure, directly accessible to intuition rather than defined by axioms, the "purpose" of the axiom is simply that it helps you prove things about the intended structure (and is intuitively obviously true in that structure, so the things that you prove will also be true). While this view is not the only one, it's an important enough one that anything we say about the "purpose" of an axiom should not be inconsistent with it.
Then there's another point, which is that, whatever the "purpose", the first-order axiom does not in fact succeed in excluding all nonstandard models. (The second-order axiom does succeed in excluding them, when interpreted in full second-order logic, but there is no complete and sound proof system for second-order logic.) --Trovatore (talk) 19:44, 30 October 2013 (UTC)[reply]
Agreeing with Trovatore, in order not to burden this article with model-theoretic stuff, I have now added the picture to Peano_axioms#The_axioms, where I think it fits nicely. However, the current caption text is still rather clumsy, taking more space than the thumbnail image itself in my browser. Maybe a native English speaker could improve it. (The issue of 2nd-order axiom vs. 1st-order axioms is already discussed under Peano_axioms#Models, so it need not be mentioned in the caption; another advantage of the new location.) - Jochen Burghardt (talk) 13:50, 8 November 2013 (UTC)[reply]

Citation for Polya's equal colors

The page http://www.proofwiki.org/wiki/All_Horses_are_the_Same_Colour#History says: "This paradox was originally raised by George Pólya, and appears in his 1954 work Induction and Analogy in Mathematics." This book is available via https://archive.org/details/Induction_And_Analogy_In_Mathematics_1_, and No. 17 (p. 120 = p. 140 in pdf) of "Examples and Comments on Chapter VII" (starting at p. 116 = p. 136 in pdf) is about a proof of "Any n girls have eyes of the same color". I didn't find the horse version there. - Jochen Burghardt (talk) 14:47, 6 January 2014 (UTC)[reply]

cousin - definition

I recall a bit of mathematical folklore regarding an inductive definition of a cousin - in particular nth cousin m-times removed. It begins with n=-1: the -1th cousin m times removed is an m-times child (e.g. the -1th cousin 0 times removed is oneself). Does anyone know of a verifiable statement of this definition (including, of course, a correct statement of the inductive step, which escapes me at the moment)? TomS TDotO (talk) 12:10, 18 September 2014 (UTC)[reply]

"domino effect"

Please don't add unreferenced analogies. This is original research. ".. as an analogy doesn't need an own reference" - wrong. An analogy needs a reference: Wikipedia policies do not allow us to conclude the correctness of an analogy without a reference. -M.Altenmann >t 16:37, 19 February 2015 (UTC)[reply]

  • I didn't add this analogy in the first place; it was already present on this page before I started as a wikipedia editor. I found (and still find) it great, and made another, similar, picture later.
  • Trying to verify your above claims, I read through WP:SYNTH which you had given in the edit summaries. The examples there are from the political and the legal area, are hardly applicable to mathematics. In contrast, I found a link to WP:What SYNTH is not, where the sections SYNTH is not explanation, SYNTH is not ubiquitous (it would be, for math articles, if you were right), SYNTH is not obvious II, and SYNTH is not unpublishably unoriginal seem to support my opinion. — So please give a source for your interpretation of wikipedia policies, or undo the removal of picture and in-text explanation.
  • I think deletion of article parts should be the last means, and found WP:PRESERVE to support this. So even if the domino analogy would need an own reference, you better had tagged it with {{citation needed}} and awaited the reactions, rather than delete it immediately. - Jochen Burghardt (talk) 21:16, 19 February 2015 (UTC)[reply]
Btw: I just found the same analogy in German wikipedia at de:Vollständige Induktion, also without source. The picture is still used on 26 non-user pages across all wikis, see commons:File:Dominoeffect.png#File usage on other wikis; almost all of them that I could guess from the title are about induction. I checked a few European languages, and found many mentioning something that could be a translation of domino effect (guessed from similarity to picture caption), but didn't find a source anywhere. - Jochen Burghardt (talk) 22:06, 19 February 2015 (UTC)[reply]

The analogy is not a simple explanation. An analogy may be false. An analogy may be restricted to certain circumstances. There are many other problems with analogies. An analogy must be proven to be correct. On Wikipedia this is done only by references. The fact that this analogy is used in other-language wikipedias is irrelevant. Every wikipedia has its own policies.

The analogy is poor: in order to ensure that all N dominoes fall, you have to prove the "toppling" for every individual pair of neighbors; whereas the trick of induction is dramatically opposite. (I know how to "cure" the description, but this is beside the point.)

Furthermore, the description of the analogy is sloppy: the text "proves" that all dominoes will fall, i.e., it is written as an application of the induction rather than an explanation. This is a perfect example of the problems with original research: if the text had references, we could have verified what the source actually said. In our case the only way is bickering among wikipedians, i.e., more original research. Therefore, please follow Wikipedia policies: when requested to provide a reference, there is no way you can wiggle yourself out of this without providing a reference. -M.Altenmann >t 19:44, 22 February 2015 (UTC)[reply]

P.S. Wikipedia policy says that the burden of proof is on the person who wishes to preserve the statement. Unreferenced contested text may be deleted at any time. Out of courtesy, I place the tag, to give you time to find references. I also tagged a similar domino example in "Peano axioms". If not referenced in a reasonable time, they will be deleted. -M.Altenmann >t 20:12, 22 February 2015 (UTC)[reply]

It is your turn to give a reference for your claims. Which wikipedia policy implies that the analogy needs a reference? I had guessed from your edit summaries that you meant WP:SYNTH, and established above that you were wrong here; apparently you don't object (that's why I undid your deletions). So please give me a link to the policy page you have in mind now. (Apparently, it is a policy that is solely found in the English wikipedia, but not e.g. in the German - ?) Before that, it doesn't make sense to discuss the analogy itself. - Jochen Burghardt (talk) 09:59, 24 February 2015 (UTC)[reply]
No, we did not establish that I am wrong. I am surprised you are not familiar with the most fundamental policy, which trumps it all, so I thought you were doing what is called WP:Wikilawyering; for these thoughts I apologize. All statements in en: wp need a reference. WP:VERIFIABILITY: "All material in Wikipedia mainspace, including everything in articles, lists and captions, must be verifiable. All quotations, and any material whose verifiability has been challenged or is likely to be challenged, must include an inline citation that directly supports the material." See also WP:CITE, WP:RS. -M.Altenmann >t 15:51, 24 February 2015 (UTC)[reply]

Burghardt's argument ignores the simple fact that unreferenced posts can be removed. There have been numerous Wikipedia edit wars in which people say, in effect, I don't need a reference, you need a reference. No, the burden of proof is on the person making the post, not on the person removing the post. Burghardt's post is unreferenced. If, as he claims, a finite but unbounded number of induction steps suffices, he should be able to cite such a proof in the published literature. Failing that, he should not insist on posting unreferenced material. Rick Norwood (talk) 12:57, 23 February 2017 (UTC)[reply]

Horse color example

I was taught a slightly different version of that example, viz. (Boldface indicating deviations from current article text):

  • Basis: we don't care about that.
  • Induction step: Assume as induction hypothesis that within any set of n horses, there is only one color. Now look at any set of n + 1 horses. Number them: 1, 2, 3, ..., n, n + 1. Consider the sets {1, 2, 3, ..., n} and {2, 3, 4, ..., n + 1}. Each is a set of only n horses, therefore within each there is only one color. But if n is sufficiently large, the two sets overlap, so there must be only one color among all n + 1 horses.

In this version, the step argument is valid, but there is no base case.

Anyway, the article should give examples for issues, and it would be boring to reuse the horse color example again. What about e.g. P(n) being n=n+1, with the inductive step proven by adding 1 on both sides of the induction hypothesis equation? - Jochen Burghardt (talk) 21:53, 19 February 2015 (UTC)[reply]

Colleague, you are engaging in unreferenced research again. If you were "taught", please provide a reference. Wikipedia does not rely on the memory of wikipedians who were taught something. Not to mention that you may not have been taught the original version. If you know other published examples of erroneous application of induction, you are welcome to cite them. -M.Altenmann >t 19:49, 22 February 2015 (UTC)[reply]

A single-line summary at the beginning of each article...

A single-line summary at the beginning of each article would be of great benefit to the reader. Respect. — Preceding unsigned comment added by 46.233.112.79 (talk) 16:34, 26 December 2015 (UTC)[reply]

Prefix induction

Today, I happened to read the section Mathematical_induction#Prefix induction, which mainly talks about issues in computational complexity theory. For the latter reason, I think it better belongs to an article about recursion. In mathematical induction, it is pointless to count the "number of applications" of the induction step that is needed to arrive at P(n); there isn't even a notion like "applying the induction step". In analyzing the computational complexity of a recursive program, on the other hand, the number of recursive calls does matter. - Any suggestions on what to do with this section are welcome. - Jochen Burghardt (talk) 10:54, 11 April 2017 (UTC)[reply]

Logical and mathematical induction

Commenting on this edit and the subsequent revert.

Jochen's edit summary, when reverting, said that the "distinction between mathematical and philosophical induction should be mentioned", with which I agree. But I think it is already mentioned; indeed, the paragraph starts with an explanation that mathematical induction is not the same as inductive reasoning. I have to say that 2600:387:3:805:0:0:0:AF's version reads cleaner to me, and the version to which Jochen reverted strikes me as a bit repetitive. --Trovatore (talk) 07:57, 28 September 2017 (UTC)[reply]

This was my change. Not only is the distinction mentioned, but as I wrote in my edit comment, mathematics uses inductive reasoning constantly at all levels. This page is clearly designed to be useful to students learning what it is to do a mathematical proof, and it's disingenuous to communicate to them that the art of proof is a purely deductive one. - ‎ 2601:143:8201:1f52:e460:7d69:aa55:5ffc, 28 September 2017

My revert yesterday was too hasty; please accept my apologies. I should have read the surrounding paragraph text and recognized the redundancy, but I didn't. - Concerning the use of inductive reasoning, I agree with 2600:387:3:805:0:0:0:AF that finding a proof usually does require it. The motivation for my revert was that the article should be clear about the complete absence of inductive reasoning in a completed mathematical proof. The latter point is made clear by the current version, but the former is not. I agree with 2600:387:3:805:0:0:0:AF that it should be explained in some wikipedia article. Should we add a footnote here, or would that be too distracting? - Jochen Burghardt (talk) 11:50, 29 September 2017 (UTC)[reply]

I respectfully disagree that a completed mathematical proof is marked by the "complete absence of inductive reasoning" -- if a proof requires it, as you say, it's impossible to excise. The extent to which a proof is complete-in-itself or embedded within a culture is a significant sociological question dating back to at least Russell and Whitehead. My opinion is that it's not really in the interest of this article to address it, especially since there are already links to inductive reasoning and the problem of induction.