Talk:Presburger arithmetic

From Wikipedia, the free encyclopedia

Early comments

has at least a runtime of 2^(2^(cn)) for some constant c. Here, n is the length of the Presburger statement. Hence, the problem is one of the few that provably need more than polynomial run time.

You mean more than polynomial or more than exponential? --Taw

Both. But "polynomial time" P is a pretty important class of problems, and this is one of the few problems that can be shown not to be in there, so that's why I explicitly mentioned polynomial time. --AxelBoldt

Indeed, it is a much stronger statement -- it is not even solvable in singly exponential time. --vkuncak

I know just a little of mathematics but I don't see clearly the meaning of the last theorem... ∀ x ∀ y : ( (∃ z : x + z = y + 1) → (∀ z : ¬ (((1 + y) + 1) + z = x) ) ) Maybe the last part of it is " = z + x " instead of " + z = x ". --jimmer_lactic

Working from the inside out of (∀ z : ¬ (((1 + y) + 1) + z = x) ):
  1. ((1 + y) + 1) means y+2;
  2. (((1 + y) + 1) + z = x) means that z (a non-negative integer) can be added to y+2 to make x;
  3. (∀ z : ¬ (((1 + y) + 1) + z = x) ) means that for any non-negative integer, it cannot be added to y+2 to make x, so y+2 must be strictly greater than x. --Henrygb 23:28, 31 Jan 2005 (UTC)
The confusion actually arises when you don't notice that the two z's are separately quantified. The z on the left of implication is not the same as the z on the right. -- jaywalker
I can see jaywalker's point, but why would this then mean 'This sentence states that if x≤y+1, then y+2>x.'? I'd say it means, 'if x+A=y+1 then not(y+2+B=x)', where A and B are the respective z's. Of course this cannot be proven, since it's not a true statement, at least over the naturals. Have I got something severely wrong here? --Wintifax
Sorry, I just got it, it actually does mean what it says it should mean and of course the statement is true. So, if Presburger is decidable, why can this result not be shown? I still think something's wrong here. -- Wintifax
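The sentence under discussion can be sanity-checked mechanically. The sketch below (Python, with quantifiers bounded at a hypothetical cutoff N, so it is a heuristic check rather than a proof) evaluates ∀x ∀y: (∃z: x+z = y+1) → (∀z: ¬((y+2)+z = x)) over small naturals and finds no counterexample, matching the resolution of the thread:

```python
# Brute-force check of  forall x forall y:
#   (exists z: x+z = y+1)  ->  (forall z: not ((y+2)+z = x))
# over the naturals 0..N-1.  Bounding the quantifiers at N only
# approximates the real statement, but a counterexample found here
# would refute it outright.
N = 30

def holds(x, y):
    antecedent = any(x + z == y + 1 for z in range(N))    # encodes x <= y+1
    consequent = all((y + 2) + z != x for z in range(N))  # encodes y+2 > x
    return (not antecedent) or consequent

assert all(holds(x, y) for x in range(N) for y in range(N))
print("no counterexample below", N)
```

The check succeeds because x ≤ y+1 and x < y+2 say the same thing over the naturals, which is exactly why the sentence is (trivially) true.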


"and in fact completeness for it is false; this is the content of Gödel's incompleteness theorem."  : I don't think this is quite true. The latter theorem states only that the axioms of Peano arithmetic are either inconsistent or incomplete. It would be nice if we knew that the axioms were consistent (and therefore that arithmetic was incomplete), but there is still a possibility of inconsistency. Disclaimer: I am not a logician. -Mike

Presburger arithmetic is indeed an example of a complete theory, and a theory that is weak enough that Gödel's incompleteness theorem does not apply. This is closely related to the decidability of Presburger arithmetic and is what makes it interesting.

john!!!! i cant work out how to send you mail. how are you??????? —Preceding unsigned comment added by Derekoppen (talkcontribs) 06:00, 8 January 2009 (UTC)

On not requiring exponential run time

As it turns out, it doesn't require exponential run time to decide a proposition expressed in Presburger arithmetic. The classic proof is correct, but illusory.

First, in G. Nelson and D. C. Oppen, "Fast decision procedures based on congruence closure". J. ACM, 27(2):356-364, Apr. 1980., Oppen and Nelson describe an automatic theorem prover which uses the simplex algorithm to decide the arithmetic portion of the problem. (This approach is the basis of about five proof-of-correctness systems, beginning with the Stanford Pascal Verifier and continuing through to Microsoft's new Spec# system.)

The simplex algorithm does indeed have exponential worst-case run time. But the average run time is far better. In fact, exponential run time is only observed for specially constructed cases designed to force the algorithm to visit every vertex of the polytope.

For a deterministic algorithm, such a specially constructed worst case is possible. But if the algorithm is made non-deterministic, so that the order in which the vertices are explored has some randomness, it is no longer possible to reliably force the pathological case. There's a whole family of NP-hard problems that yield to this approach.

The classic proof assumes determinism, but most writings on the subject don't mention that.

On a side note, Presburger arithmetic can be extended to include multiplication by constants (it's just multiple additions, right?), which makes it much more useful for proof of correctness work. Most subscript calculations then fall within the region of decidable problems. --Nagle 22:17, 13 March 2006 (UTC)
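The "multiple additions" point is easy to make concrete. Below is a toy rewriter (a sketch; the function name and string representation are made up for illustration, real Presburger front ends do this on syntax trees) that eliminates multiplication by a fixed constant:

```python
def expand_const_mult(k, var):
    """Rewrite k*var, for a fixed non-negative integer constant k, into
    the pure-addition term  var + var + ... + var  of Presburger
    arithmetic.  Multiplication of two *variables* cannot be eliminated
    this way, which is why it is excluded from the language."""
    if k == 0:
        return "0"
    return " + ".join([var] * k)

# 3*x becomes x + x + x, so formulas whose coefficients are constants
# stay inside the language of Presburger arithmetic.
print(expand_const_mult(3, "x"))  # x + x + x
```

This is why constant coefficients (as in typical array-subscript arithmetic) keep the decision problem decidable.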

Simplex algorithm in the classical Nelson--Oppen and other theorem provers does not solve “extended Presburger arithmetic”. It solves only the quantifier-free case (although extended with <=). In fact, less than that, because arbitrary propositional conjunctions are not considered: it can decide whether a conjunction of equalities (=), inequalities (<=) and disequalities (!=) of terms with addition is satisfiable. If you combine simplex, a decision procedure for equality, and a SAT solver, you obtain an exponential algorithm, but still only for the quantifier-free theory of addition with inequalities. Algorithms for Presburger arithmetic can decide formulas with quantifiers, and this is what causes the superexponential complexity. However, as Bundy notes, the quantifier-free subcase is sufficient for program verification (A survey of automated deduction, p. 9).
BTW, you probably meant to cite Nelson--Oppen's A simplifier based on efficient decision algorithms, Proc. 5th ACM SIGACT-SIGPLAN POPL, 1978. The reference you have given (Fast decision procedures based on congruence closure) deals only with equality and LISP lists.
--158.195.89.69 17:00, 4 July 2006 (UTC) Jan Kluka (first name dot last name at gmail dot com)
You're right; I cited the wrong paper. And, of course, if you extend Presburger arithmetic to include nested quantifiers, the complexity goes up. --John Nagle 17:10, 4 July 2006 (UTC)
Well, Presburger arithmetic is defined to include nested quantifiers, it is not an extension (the example at the end of the first section correctly points that out by containing some quantifiers). People usually say linear arithmetic if they mean the quantifier-free subset of Presburger arithmetic (e.g. Detlefs et al. Simplify: a theorem prover for program checking, p. 366). --158.195.89.69 17:31, 4 July 2006 (UTC) Jan

Rewrite by anon

Some anon just rewrote most of the article, not too badly. Please check that work. Thanks. --John Nagle 05:22, 13 July 2006 (UTC)

simplex

I do not understand why simplex has anything to do with Presburger. Presburger is for natural numbers, whereas simplex is for the reals. Simplex might be used inside an algorithm such as branch & bound to solve the ILP part, but this is not what is written in the page.

-- I added some references to explain this. As discussed above, for practical applications we often look at the quantifier-free fragment as a special case, and this is typically solved in SMT solvers through a combination of a SAT solver and an integer linear arithmetic solver. Integer linear arithmetic solvers use simplex to solve a relaxation of the problem and then additional rules of integer linear programming to obtain completeness for integers. --vkuncak — Preceding unsigned comment added by Vkuncak (talkcontribs) 11:19, 1 March 2015 (UTC)

induction

Shouldn't there be some explicit mention of quantifiers in the induction axiom?

(P(0) AND (P(x)->P(x+1)))->P(x)

say P is "is even." Now P(0)->P(1) is false so the axiom is vacuously true for x=0. P(0) AND P(1)->P(2) is true, but P(1) is false, hence the axiom fails. I think one wants two variables. Thoughts? MotherFunctor (talk) 17:17, 27 November 2007 (UTC)

Yes, the third x should be a y or some other variable. — Carl (CBM · talk) 17:47, 27 November 2007 (UTC)
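MotherFunctor's counterexample can be replayed mechanically. The sketch below (Python; the encoding of the axiom as boolean functions is my own, for illustration) contrasts the flawed one-variable reading of the induction axiom with the intended reading, using P = "is even":

```python
def P(n):            # the predicate from the discussion: "n is even"
    return n % 2 == 0

N = 50               # finite bound for the universally quantified checks

# Flawed reading, with the same x throughout:
#   for each x,  (P(0) and (P(x) -> P(x+1)))  ->  P(x)
def flawed_instance(x):
    antecedent = P(0) and ((not P(x)) or P(x + 1))
    return (not antecedent) or P(x)

# Intended axiom, with separately quantified variables:
#   (P(0) and forall x (P(x) -> P(x+1)))  ->  forall y P(y)
def intended():
    step = all((not P(x)) or P(x + 1) for x in range(N))
    return (not (P(0) and step)) or all(P(y) for y in range(N))

assert not flawed_instance(1)  # fails at x = 1: antecedent true, P(1) false
assert intended()              # vacuously true: the induction step fails for "even"
```

For x = 1, P(0) is true and P(1)→P(2) is true (false antecedent), so the flawed axiom's antecedent holds while P(1) is false, exactly as described above. With the variables separated, the induction step ∀x(P(x)→P(x+1)) is false for "even", so the axiom holds vacuously.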

Decision using automata etc.

We should discuss the various decision procedures:

  • Omega
  • Wolper et al.'s methods using finite automata (for instance, the LIRA tool). David.Monniaux (talk) 07:24, 10 December 2007 (UTC)

Language

I removed phrases like 'invokes an object constant' in favor of what I feel to be more standard expressions; also -- equality: is it a nonlogical constant, therefore a predicate of the language of Presburger arithmetic (as the original revision had it) OR is the background logic 'first order logic with identity' in which case equality is NOT just another predicate. I favor the latter. Zero sharp (talk) 21:16, 7 April 2008 (UTC)

moved from article

"The Wikipedia reference is slightly incorrect. The upper bound grows exponentially by the size of the problem, so is infinitely large, not bounded as suggested by Wikipedia. It is, Oppen believes, the largest bounded problem thus shown" added by user User:Derekoppen Zero sharp (talk) 06:27, 1 June 2008 (UTC)


self-reference

Could someone advise on the propriety of one putting references to one's own work in a WP article (Derek C. Oppen as user 'Derekoppen' added a reference to one of his own journal articles). Thanks. Zero sharp (talk) 18:30, 2 June 2008 (UTC)

See WP:COS and WP:COI. One should be cautious with such edits, but ultimately the standard is the same as if anybody else inserted the reference: if it's relevant, notable, and properly sourced, it's OK. In this particular case[1], I think the edit was entirely appropriate, except that it was somewhat misplaced (that's why I moved it and slightly reformulated). Notice also that the reference to Oppen and Nelson (1980) was added by someone else. — EJ (talk) 10:02, 3 June 2008 (UTC)
Understood, thanks very much. Zero sharp (talk) 14:52, 3 June 2008 (UTC)

Multiplication leads to undecidability?

These two statements:

Generally, any number concept leading to multiplication cannot be defined in Presburger arithmetic since that leads to incompleteness and undecidability.
[...]
Peano arithmetic, which is Presburger arithmetic augmented with multiplication, cannot be decidable as a consequence of the negative answer to the Entscheidungsproblem. By Gödel's incompleteness theorem, Peano arithmetic is incomplete and its consistency is not internally provable.

would seem to imply that it is the introduction of multiplication into Presburger arithmetic that leads to undecidability; a) is that what this is saying and b) is that in fact the case and c) can we get an inline reference to support it. Thanks. Zero sharp (talk) 06:50, 4 December 2008 (UTC)

Well, it's tricky to talk about causality here, but with the obvious caveats on that score: (a) yes, (b) yes, (c) I don't know. --Trovatore (talk) 06:52, 4 December 2008 (UTC)
I guess what I'm after is a necessary/sufficient kind of distinction - i.e. _are_ there decidable theories with multiplication? The article on Robinson arithmetic makes a point that you can't blame induction for undecidability. Anyway thanks! Zero sharp (talk) 16:43, 4 December 2008 (UTC)
What do you mean "with" multiplication? You could do Presburger arithmetic, but call the operation "multiplication" instead of "addition", and it would still be decidable. You could even give that theory an interpretation in which the operation would be interpreted as standard multiplication. What are you getting at here? --Trovatore (talk) 17:51, 4 December 2008 (UTC)
I mean whatever the author of the statement "any number concept leading to multiplication ... leads to incompleteness" means. I know very well that you can trivially rename the operation, that's _not_ what I mean. Zero sharp (talk) 22:14, 4 December 2008 (UTC)
0#, I didn't mean to be short with you. I just didn't know what you were getting at; it didn't appear to be well-defined, and I gave an example of why. EJ's answer does indeed seem to be a reasonable take on it. --Trovatore (talk) 23:04, 4 December 2008 (UTC)


You can do that, and you can also find more natural examples of decidable theories with multiplication, such as the theories of real-closed fields or algebraically closed fields. It is essential for the undecidability results that the operation satisfies some basic properties which ensure that it "behaves like" multiplication on the natural numbers to some extent, and not a completely different multiplication operation. I think it is quite reasonable to simply declare Robinson's arithmetic as a definition of what "behaves like multiplication on the natural numbers" means. At any rate, for all practical purposes it is true that anything resembling natural numbers including multiplication is undecidable. — Emil J. 18:19, 4 December 2008 (UTC)
OK Thank you, that's a lot clearer and more helpful than Trovatore's "answer" Zero sharp (talk) 22:15, 4 December 2008 (UTC)
Multiplication by constants does not introduce undecidability. It's multiplication by variables which does. Systems of linear equations are decidable. --John Nagle (talk) 06:51, 5 December 2008 (UTC)
A sufficient condition for Goedel's theorem to apply is that the diagonal lemma applies to the theory in question, at least the case of the diagonal lemma that is used in Goedel's theorem.
One way to obtain a decidable theory "including multiplication" is to start with Robinson arithmetic and remove either axiom 6 or axiom 7. The resulting weaker theory will still be satisfied by the standard model N and still includes multiplication, but doesn't prove enough about its multiplication operation to be undecidable.
The sentence about induction in the Robinson arithmetic article is somewhat misleading. Of course the entire infinite axiom scheme of induction is not required of PA to prove PA is incomplete; Robinson arithmetic includes just those consequences of the induction scheme that are needed in the proof of the diagonal lemma. So the remaining instances of induction, which are not included in Robinson arithmetic, are superfluous anyway. — Carl (CBM · talk) 13:55, 5 December 2008 (UTC)
You are wrong, the theory you defined is still undecidable. Any theory which can be consistently extended by Robinson's arithmetic (for example, the theory with no axioms in the language of arithmetic) is undecidable, because Robinson's arithmetic is finitely axiomatizable. In particular, any theory valid in the standard model of arithmetic is undecidable (i.e., Th(N), and any consistent extension of Q for that matter, is hereditarily undecidable). Undecidability really has nothing to do with induction. Nor does it have anything to do with the diagonal lemma, undecidability of extensions of Q only relies on Σ1-completeness of Q, i.e., its ability to evaluate arithmetic terms and bounded quantifiers. — Emil J. 14:27, 5 December 2008 (UTC)
You're right, I wasn't even thinking, just scanning the Robinson arithmetic article, and thinking about incompleteness instead of undecidability. I only brought up the diagonal lemma in the context of Goedel's theorem; of course it doesn't relate to undecidability. I'll strike out the nonsense and leave the parts related to incompleteness.
And thanks for fixing the Robinson arithmetic article. It looks like the bad material was added by some IP address from the U. of Canterbury (NZ) a long time ago. Sigh. — Carl (CBM · talk) 14:43, 5 December 2008 (UTC)
As EmilJ correctly points out, any theory is undecidable if it is (1) in the language of arithmetic and (2) satisfied by the standard model of arithmetic. This includes the empty theory in the language of arithmetic. The source of undecidability is the presence of two binary relations in the language and the ability to interpret those relations as addition and multiplication. From one point of view, it isn't the strength of a theory (in such a language) that makes it undecidable, it's the weakness of the theory in the sense that the theory does not rule out the natural numbers as a model. — Carl (CBM · talk) 14:50, 5 December 2008 (UTC)
Also, I had forgotten about the article Decidability (logic), which has lists of very general classes of decidable and undecidable theories. At some point that article should be expanded some to discuss the situation with theories in the language of arithmetic. — Carl (CBM · talk) 15:13, 5 December 2008 (UTC)
Incidentally, it's possible to have decidable systems with multiplication if arithmetic is modular or bounded, as on computers. Undecidability requires infinities. --John Nagle (talk) 17:45, 5 December 2008 (UTC)
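Nagle's closing remark is easy to demonstrate: over a finite domain, every quantifier becomes a finite loop, so any first-order sentence, multiplication included, can be decided by exhaustive evaluation. A minimal sketch (Python; the helper name and the example relation are my own) over arithmetic mod m:

```python
# Decide sentences of the form  forall x exists y: R(x, y)  over Z_m
# (arithmetic mod m) by brute force.  With a finite domain there is no
# undecidability: each quantifier is just a finite loop.
def decide_forall_exists(m, rel):
    return all(any(rel(x, y, m) for y in range(m)) for x in range(m))

# "forall x exists y: x*y = x (mod 7)" -- true, since y = 1 always works,
# even though the sentence uses genuine multiplication.
assert decide_forall_exists(7, lambda x, y, m: (x * y) % m == x % m)
print("decided by brute force over Z_7")
```

The undecidability results discussed above all depend on the standard, infinite model of the naturals.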

Unclear assertion

In the Properties section, the following statement has scope ambiguity:

Mojżesz Presburger proved Presburger arithmetic to be:
...
  • decidable: There exists an algorithm which decides whether any given statement in Presburger arithmetic is true or false.


Does this mean 1 or 2?:

  1. For any statement S in Presburger arithmetic, there is some algorithm A such that (A decides whether S is true or false)
    • i.e., all P-statements are decidable
  2. There is an algorithm A such that for any statement S in Presburger arithmetic (A decides whether S is true or false)
    • i.e., P-arithmetic has some "master" decision algorithm


(Sorry for not using <math> notation, but I'm not a mathematician and not familiar with that part of Wikicode.) After writing them out that way, I think it's pretty clear that #1 is what's meant, but the English is ambiguous and should be cleaned up. --Thnidu (talk) 14:42, 22 May 2009 (UTC)

No, #2 is the correct meaning. #1 is nonsense, as it is trivially satisfied for any statement in any theory: one of the two algorithms "return YES" and "return NO" always works. I do not see any ambiguity in the formulation, it clearly starts with an existential quantifier, and in any case, there is a link to decidability (logic) which explains it in detail. You just have to read carefully. — Emil J. 15:20, 22 May 2009 (UTC)
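Emil J.'s argument for why reading #1 is trivial can be put in code. The sketch below (Python; the stand-in "oracle" is hypothetical, since for a fixed statement we merely need its truth value to exist) shows that for any single statement S, one of two constant algorithms already decides it:

```python
# For any *fixed* statement S, one of these two constant algorithms is
# a correct decision procedure for S -- we just may not know which one.
# So reading #1 ("every statement has some algorithm") holds trivially
# in any theory; only reading #2, one algorithm uniform in S, is a
# substantive property like Presburger's theorem.
def algo_yes(statement):
    return True

def algo_no(statement):
    return False

def truth(statement):  # stand-in oracle for one fixed, known statement
    return statement == "1 + 1 = 2"

S = "1 + 1 = 2"
correct_algo = algo_yes if truth(S) else algo_no
assert correct_algo(S) == truth(S)
```

Decidability in the logician's sense is exactly the existence of the single uniform algorithm of reading #2.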

Exclusive OR

The article gives a nice example given for something that is provable in Presburger Arithmetic:

for all x, there exists y : (y + y = x) ∨ (y + y + 1 = x). This states that every number is either even or odd.

But the ∨ symbol is inclusive-or. Don't we want exclusive-or here? If there's no standard exclusive-or symbol, it could be stated with a conjunction (∧). — Preceding unsigned comment added by Iconjack (talkcontribs) 16:53, 24 November 2013 (UTC)

  • Actually, the ordinary disjunction is perfectly fine here. It is a true statement as written. Vkuncak (talk) 11:26, 1 March 2015 (UTC)
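A brute-force check (Python sketch, quantifiers bounded at a hypothetical cutoff N) confirms both points: the sentence holds with inclusive ∨, and in fact for each x exactly one disjunct has a witness, so nothing is gained by demanding exclusive-or:

```python
# Check  forall x exists y: (y+y = x) or (y+y+1 = x)  for small naturals.
# Inclusive "or" suffices: the sentence only claims at least one disjunct
# has a witness -- and each x happens to satisfy exactly one of them.
N = 100
for x in range(N):
    even_witness = any(y + y == x for y in range(N))
    odd_witness = any(y + y + 1 == x for y in range(N))
    assert even_witness or odd_witness          # the sentence as written
    assert not (even_witness and odd_witness)   # the disjuncts never overlap
print("every number below", N, "is even or odd (and not both)")
```

Since no x is both even and odd, ∨ and exclusive-or coincide here, which is why the article's formulation is correct as it stands.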

Redundant (?) and confusing phrase

Hi, IT guy here. The text in the axioms section says "the axioms of Presburger arithmetic are the universal closures of the following:". Well, I don't know what 'universal closure' means here but it directs you to universal quantification, so why not say 'universally quantified'? Even better, since that quantification can reasonably be taken to be implicit, why not just say "the axioms of Presburger arithmetic are:".

thanks 78.149.128.58 (talk) 11:07, 14 May 2016 (UTC)

The problem is rule 5, which speaks about an arbitrary predicate P with arbitrary free variables beyond x. For example "P(x)" could be "(z+y)+x=z+(y+x)"; in this case, rule 5 provides an axiom to prove associativity of "+" by induction over x, viz. "∀zy ((z+y)+0=z+(y+0) ∧ ∀x ((z+y)+x=z+(y+x) → (z+y)+(x+1)=z+(y+(x+1))) → ∀x ((z+y)+x=z+(y+x)))".
This is usually read as follows:
  • For arbitrary y and z:
  • Show that (z+y)+0=z+(y+0) holds;
  • and show that (z+y)+x=z+(y+x) implies (z+y)+(x+1)=z+(y+(x+1)),
  • then you are guaranteed that (z+y)+x=z+(y+x) for each x.
So, due to the need to handle arbitrarily complex predicates P, you can't get rid of the notion of "universal closure", unless you assume it "be taken to be implicit", which imho is less clear to non-logicians. - Jochen Burghardt (talk) 17:02, 14 May 2016 (UTC)
Hi Jochen, I accept your point about using universal quantification rather than omitting it as I suggested, but I would question the term closure. I'm familiar with quantification but calling it Closure as is apparently done here did and still does puzzle me - does it mean something more than (universal) quantification? If it's just a synonym for univ. quantification, this latter phrase might be better. Or it may not. I'll leave it for you to choose. Anyway, thanks for your answers here and in the wiki pages on quantification too (that was me as well). Much appreciated. 78.149.131.131 (talk) 18:40, 30 May 2016 (UTC)

Informally, building the "universal closure" of a formula means to put just as many universal quantifiers in front of it as are needed to bind all its free variables. E.g. the universal closure of x+y=y+x needs two quantifiers (viz. ∀x∀y), but that of (x+y)+z=x+(y+z) needs three (viz. ∀x∀y∀z). Maybe, the distinction between characters and strings in the C programming language is a good analogy: although chars are used to build strings, both notions aren't synonyms; you can't say how many chars are needed for a string (like argv[0]) since this number varies. Similarly, you can't say how many quantifiers you need for P in rule 5. - Jochen Burghardt (talk) 21:22, 30 May 2016 (UTC)
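Jochen's description of universal closure can be sketched in a few lines of Python. This is a naive string-based toy (the function name, the single-letter variable convention, and the regex handling are all my own simplifications; real provers operate on syntax trees, as noted above):

```python
import re

# Matches a universal quantifier binding a single lowercase variable.
QUANTIFIED = re.compile(r"∀([a-z])")

def universal_closure(formula, variables="xyz"):
    """Prefix just enough ∀-quantifiers to bind every free variable of
    the formula.  Naive sketch: a variable is 'free' if it occurs in the
    string and is not already bound by a leading quantifier."""
    bound = set(QUANTIFIED.findall(formula))
    free = [v for v in variables if v in formula and v not in bound]
    return "".join("∀" + v for v in free) + " " + formula

# x+y=y+x needs two quantifiers, (x+y)+z=x+(y+z) needs three:
print(universal_closure("x+y=y+x"))          # ∀x∀y x+y=y+x
print(universal_closure("(x+y)+z=x+(y+z)"))  # ∀x∀y∀z (x+y)+z=x+(y+z)
```

The point of the char/string analogy shows up directly: the number of quantifiers prepended varies with the formula, so "universal closure" names an operation, not a fixed prefix.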

oooookay, gotcha now. Cheers! 92.24.40.239 (talk) 21:53, 30 May 2016 (UTC)

negative numbers disallowed?

Hi, axiom 1 says "¬(0 = x + 1)", which I take to mean that there are no negative numbers. Is this true, and out of interest, how would permitting them suddenly make things undecidable? (edit: I mean really simply put as I'm no mathematician).

thanks 78.149.128.58 (talk) 11:11, 14 May 2016 (UTC)

I guess, allowing negative numbers wouldn't make Presburger arithmetic undecidable, but a Bachelor's thesis might be needed to formally prove that. I guess, there are mainly historical reasons for Presburger arithmetic dealing with natural rather than integer numbers. In the 19th century, Peano gave his axioms for natural numbers (Peano axioms); in 1900, Hilbert posed the problem of proving their consistency (Peano axioms#Consistency), but no such proof was found; possibly this motivated Presburger to suggest in 1929 a simpler theory that is decidable (Presburger arithmetic#lead); in 1931, Gödel showed that Peano arithmetic cannot be proven consistent within itself (Peano axioms#Consistency). If I remember right, Gödel's proof doesn't use many features of Peano arithmetic beyond Presburger arithmetic, except for multiplication and prime numbers (he uses them to encode a sequence of symbols into a single number, thus enabling encoding of a logic formula into a number); so those are the features that make things undecidable. The reason why Peano used natural rather than integer numbers was probably that he intended to have an axiom system as simple as possible for the foundation of arithmetic. - Jochen Burghardt (talk) 17:25, 14 May 2016 (UTC)
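The prime-based encoding mentioned above can be sketched concretely. Below is a minimal Python version of the classic prime-power scheme (a sequence of positive symbol codes c1, c2, ... becomes 2^c1 · 3^c2 · 5^c3 · ..., recoverable by factorization); the function names are my own, and this illustrates only the encoding idea, not Gödel's full construction:

```python
def primes():
    """Generate 2, 3, 5, 7, ... by trial division (fine for a sketch)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_number(codes):
    """Encode a sequence of *positive* symbol codes as one number:
    the i-th prime raised to the i-th code."""
    g, gen = 1, primes()
    for c in codes:
        g *= next(gen) ** c
    return g

def decode(g):
    """Recover the code sequence by reading off prime exponents.
    Requires every code to be >= 1, or the sequence would truncate."""
    codes, gen = [], primes()
    while g > 1:
        p, e = next(gen), 0
        while g % p == 0:
            g //= p
            e += 1
        codes.append(e)
    return codes

seq = [3, 1, 4, 1]
assert decode(goedel_number(seq)) == seq  # 2^3 * 3^1 * 5^4 * 7^1 = 105000
```

Unique factorization makes the decoding unambiguous, which is exactly where multiplication and primes enter Gödel's proof and why Presburger arithmetic, lacking multiplication, escapes it.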