From Wikipedia, the free encyclopedia
WikiProject Computer science (Rated B-class, Mid-importance)


Apparent contradiction with SAT solver article[edit]
The article SAT solver states:

SAT is easier if the formulas are restricted to those in disjunctive normal form, that is, they are disjunction (OR) of terms, where each term is a conjunction (AND) of literals (possibly negated variables).

which is clearly in direct contradiction with the intro to this page, which insists on CNF! Could someone please fix? linas (talk) 17:36, 31 March 2008 (UTC)

It is not a contradiction. SAT in disjunctive normal form is trivial: each term gives a satisfying assignment. SAT in conjunctive normal form is, in general, hard, but 2-SAT is an easier (though not as trivial as DNF) special case. —David Eppstein (talk) 17:40, 31 March 2008 (UTC)
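To illustrate the point about DNF being trivial: a DNF formula is satisfiable exactly when some term is free of contradictory literals, so a linear scan suffices. A minimal sketch (the function name and encoding, integer literals with negatives for negation, are my own, not from any source discussed here):

```python
def dnf_satisfiable(terms):
    """Satisfiability check for a DNF formula, given as a list of
    terms, each term a list of integer literals (-i means NOT x_i).
    A DNF formula is satisfiable iff some term contains no
    contradictory pair x, NOT x: that term alone yields a satisfying
    assignment (make its literals true, set the rest arbitrarily)."""
    for term in terms:
        lits = set(term)
        if not any(-l in lits for l in lits):
            return True
    return False

# (x1 AND NOT x2) OR (x2 AND NOT x2): the first term is consistent.
print(dnf_satisfiable([[1, -2], [2, -2]]))   # → True
# (x1 AND NOT x1) OR (x2 AND NOT x2): every term is contradictory.
print(dnf_satisfiable([[1, -1], [2, -2]]))   # → False
```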

Minor problem with diagram not matching text[edit]

The diagram uses variables v0, v1, etc., whereas the formulas use x0, x1, etc. Could the owner of the diagram please change the variables to x rather than v, to improve ease of comprehension? I could change the text to v1, v2, etc., but the x's are rather more typical. Ross Fraser (talk) 22:49, 3 August 2008 (UTC)

Done. —David Eppstein (talk) 23:05, 3 August 2008 (UTC)

Error in Random 2-satisfiability instances[edit]

The following sentence in the text seems to have its conclusions reversed: "if m/n is fixed as a constant α ≠ 1, the probability of satisfiability tends to a limit as n goes to infinity: if α < 1, the limit is zero, while if α > 1, the limit is one." If α < 1 (fewer clauses than variables) the limit is 1 (almost all are satisfiable), whereas if α > 1, the limit is zero (almost none are satisfiable). Ross Fraser (talk) 22:50, 3 August 2008 (UTC)

Fixed. Thanks for catching this. —David Eppstein (talk) 23:00, 3 August 2008 (UTC)
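The corrected claim can be checked empirically with a small Monte Carlo sketch. Everything below is my own illustration, assuming one simple random-clause model (two distinct variables per clause, independent random signs) and brute-force satisfiability testing, which is feasible only for small n; finite-size effects blur the threshold at n = 12, but the trend should be visible:

```python
import random
from itertools import product

def random_2cnf(n, m, rng):
    """m random clauses over variables 1..n: each clause picks two
    distinct variables and independent random signs (-i means NOT x_i)."""
    clauses = []
    for _ in range(m):
        v1, v2 = rng.sample(range(1, n + 1), 2)
        clauses.append((rng.choice([1, -1]) * v1, rng.choice([1, -1]) * v2))
    return clauses

def brute_force_sat(n, clauses):
    """Exhaustive satisfiability test; fine only for small n."""
    for bits in product([False, True], repeat=n):
        if all(bits[abs(a) - 1] == (a > 0) or bits[abs(b) - 1] == (b > 0)
               for a, b in clauses):
            return True
    return False

def sat_fraction(n, m, trials, rng):
    """Fraction of random instances with m clauses that are satisfiable."""
    return sum(brute_force_sat(n, random_2cnf(n, m, rng))
               for _ in range(trials)) / trials

rng = random.Random(0)  # fixed seed for reproducibility
n = 12
p_low = sat_fraction(n, n // 2, 50, rng)   # alpha = 0.5: almost all satisfiable
p_high = sat_fraction(n, 2 * n, 50, rng)   # alpha = 2: mostly unsatisfiable
print(p_low, p_high)  # expect p_low near 1 and clearly above p_high
```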


2SAT in P before 1979?[edit]
Was 2SAT really not known to be in P until 1979?? I'm surprised. (talk) 08:39, 19 November 2010 (UTC)

The algorithms section also cites a different polynomial time algorithm from 1971. But the 1979 one comes first because it's a better way to solve these things: the emphasis in that section is on how to solve it, not so much on putting things into their proper historical order. —David Eppstein (talk) 16:43, 19 November 2010 (UTC)
According to Knuth's The Art of Computer Programming, the observation attributed to Aspvall et al. (1979) is due to Melven Krom (1967). I added the corresponding references to the article. Now we hopefully have both the proper historical order and a succinct explanation of how to solve it. Hermel (talk) 19:50, 19 November 2010 (UTC)
Thanks for finding this. I learned something I didn't already know. —David Eppstein (talk) 20:10, 19 November 2010 (UTC)
Thanks. (talk) 02:14, 20 November 2010 (UTC)
I recently updated the algorithms section to describe these contributions in more detail. Knuth appears to be mistaken: Krom doesn't say anything about strongly connected components. Krom's paper instead uses a technique that is essentially the same as the resolution proof (or transitive closure), which is polynomial but nonlinear, although Krom doesn't say anything explicit about running time. The 1971 paper (Cook) also mentions the same technique and does say that it's polynomial time. And there's also a 1976 paper giving a linear time algorithm that is significantly more complicated than the Aspvall et al. one (because it involves interleaving two parallel threads). —David Eppstein (talk) 20:41, 5 February 2011 (UTC)
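For reference, the implication-graph approach attributed above to Aspvall et al. can be sketched as follows. This is an illustrative implementation, not the authors' code; it uses Kosaraju's SCC algorithm rather than Tarjan's for brevity, so it is linear time but not their exact procedure:

```python
def two_sat(n, clauses):
    """2-SAT via the implication graph: each clause (a OR b) yields
    edges NOT a -> b and NOT b -> a; the formula is unsatisfiable iff
    some variable shares a strongly connected component with its own
    negation. Variables are 1..n; literal -i means NOT x_i.
    Returns a satisfying assignment as a list of booleans, or None."""
    idx = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)  # literal -> node
    N = 2 * n
    g = [[] for _ in range(N)]   # implication graph
    gr = [[] for _ in range(N)]  # its reverse
    for a, b in clauses:
        g[idx(-a)].append(idx(b)); gr[idx(b)].append(idx(-a))
        g[idx(-b)].append(idx(a)); gr[idx(a)].append(idx(-b))
    # Pass 1 (Kosaraju): order vertices by finishing time, iteratively.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(g[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(g[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    # Pass 2: label components on the reverse graph; components are
    # numbered in topological order of g (sources first).
    comp = [-1] * N
    c = 0
    for u in reversed(order):
        if comp[u] != -1:
            continue
        comp[u] = c
        stack = [u]
        while stack:
            v = stack.pop()
            for w in gr[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    if any(comp[2 * i] == comp[2 * i + 1] for i in range(n)):
        return None  # some x_i is equivalent to NOT x_i
    # x_i is true iff its positive node lies in a later (closer-to-sink) component.
    return [comp[2 * i] > comp[2 * i + 1] for i in range(n)]

# (x1 OR x2), (NOT x1 OR x2), (x1 OR NOT x2): only x1 = x2 = True works.
print(two_sat(2, [(1, 2), (-1, 2), (1, -2)]))  # → [True, True]
print(two_sat(1, [(1, 1), (-1, -1)]))          # → None
```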

Proof of Krom's algorithm unclear?[edit]

And, if it is consistent, then the formula can be extended by repeatedly adding one clause of the form (x ∨ x) or (¬x ∨ ¬x) at a time, preserving consistency at each step, until it includes such a clause for every variable. At each of these extension steps, one of these two clauses may always be added while preserving consistency, for if not then the other clause could be generated using the inference rule. Once all variables have a clause of this form in the formula, a satisfying assignment of all of the variables may be generated by setting a variable x to true if the formula contains the clause (x ∨ x) and setting it to false if the formula contains the clause (¬x ∨ ¬x).

Clearly this shows what the satisfying assignment would be, but it's not really clear that it solves all the original clauses. The paper linked is in a pay-access database, so I can't look up his phrasing. Can anyone help with this? Twin Bird (talk) 20:54, 3 June 2011 (UTC)

That's what the "preserving consistency" part means. But I agree that it doesn't really explain how to find the extending clause. Given the fact that more efficient algorithms than Krom's were developed later, it didn't seem important to describe all the details of the algorithm. —David Eppstein (talk) 21:05, 3 June 2011 (UTC)
How is that what "preserving consistency" means? The formula remains consistent, and it's easily seen that if the new formula has a satisfying assignment, it must be the one that meets all those clauses; but this doesn't show that the assignment actually satisfies the formula, except via the fact that a consistent formula is satisfiable, which is exactly what this part of the paragraph is trying to prove. Twin Bird (talk) 23:17, 3 June 2011 (UTC)
Suppose you follow Krom's procedure and end up with a consistent formula in which, for each variable, there is either a clause (x v x) or (~x v ~x). Then the unique truth assignment determined by these clauses must also satisfy every clause (x v y), for otherwise (if we had the three terms P:(~x v ~x), Q:(x v y), R:(~y v ~y)) applying the inference rule gives QR: (x v ~y), QRQ: (x v x), inconsistent with P. —David Eppstein (talk) 18:17, 4 June 2011 (UTC)
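The procedure quoted above, together with this consistency argument, can be sketched in code. This is only my illustration, not Krom's own presentation: it uses set-based resolution (each set-based resolvent is reachable by at most two applications of the literal rule) and is polynomial but nowhere near the later linear-time algorithms:

```python
from itertools import product

def resolve_all(clauses):
    """Closure of a set of 1- and 2-literal clauses under the
    resolution rule (a OR b), (NOT b OR c) |- (a OR c). Clauses are
    frozensets of integer literals (-i means NOT x_i); a singleton
    stands for a repeated-literal clause such as (x OR x). Returns
    None if the empty clause is derivable, i.e. both (x OR x) and
    (NOT x OR NOT x) occur for some x: the formula is inconsistent."""
    closure = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1, c2 in product(closure, repeat=2):
            for b in c1:
                if -b in c2:
                    r = (c1 - {b}) | (c2 - {-b})
                    if not r:
                        return None  # derived both x and NOT x
                    if len(r) == 2 and any(-l in r for l in r):
                        continue     # skip tautologies like (x OR NOT x)
                    if r not in closure:
                        new.add(r)
        if not new:
            return closure
        closure |= new

def krom_solve(n, clauses):
    """The quoted extension procedure: if the closure is consistent,
    add a unit-like clause for each variable in turn, preserving
    consistency (the argument above guarantees one of the two
    choices always works), then read the assignment off the units."""
    closure = resolve_all(clauses)
    if closure is None:
        return None
    for x in range(1, n + 1):
        extended = resolve_all(closure | {frozenset({x})})
        if extended is None:
            extended = resolve_all(closure | {frozenset({-x})})
        closure = extended
    return [frozenset({x}) in closure for x in range(1, n + 1)]

# (x1 OR x2), (NOT x1 OR x2), (x1 OR NOT x2): forces x1 = x2 = True.
print(krom_solve(2, [frozenset({1, 2}), frozenset({-1, 2}),
                     frozenset({1, -2})]))                       # → [True, True]
print(krom_solve(1, [frozenset({1}), frozenset({-1})]))          # → None
```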
That makes sense, but it's not what the article says. All it says is how to make the clause and that consistency is preserved, not why the assignment would otherwise lead to an inconsistent clause. I'd add it myself, but I can't be sure it's what Krom said, not having access to his paper (if only it were a month ago...). And yes, there are better algorithms out there now, but this one seems important for historical reasons, being the earliest known and the one in use when Cook wrote his paper, and if I had to guess I'd say a similar formulation remains useful in descriptive complexity. Twin Bird (talk) 05:32, 8 June 2011 (UTC)
I don't seem to have access to the original Krom paper any more (I used to), so until I do that's all you're going to be able to get; sorry. —David Eppstein (talk) 05:23, 11 June 2011 (UTC)