# Talk:NP-completeness

## This article must be fixed!!! (NP-complete is not a complexity class)

A main problem with the article is the incorrect suggestion that "NP-complete" is a complexity class. Obviously, "NP-complete" is an adjective describing the class of NP-complete problems. The idea that NP-complete is a class is supported below only by one Wikipedia user (Deco) (and possibly some non-authoritative sources like textbooks on algorithms). All complexity theory articles and textbooks use NP-complete as an adjective, and I (as a theoretical computer scientist) have never seen NP-complete used as the name of a complexity class (NPC may be defined as the class of NP-complete problems, but that is "NPC" and not "NP-complete"). This long-standing problem has to be fixed in this article. I know of a previous attempt to fix it, but some forces of nature cancelled those fixes.

Please discuss this, to end this ridiculous issue with this article, if there is really anything to discuss. — Preceding unsigned comment added by 130.233.179.227 (talk) 08:02, 13 November 2013 (UTC)

I agree with this point of view and will rephrase the introduction accordingly. Pintoch (talk) 20:49, 20 July 2014 (UTC)

## Subset sum problem

"no one knows a significantly faster way to solve the problem than to try every single possible subset, which is very slow." I cut this. There is actually a faster way to solve it that can be found on the linked page. It's not efficient, but it is much better than trying all combinations. —Preceding unsigned comment added by 86.144.186.205 (talk) 17:26, August 26, 2007 (UTC)

The question is whether this is really significantly faster. Cutting this also makes it sound like we know for sure there is no efficient algorithm. --Mellum 21:44, 26 August 2007 (UTC)

Yes, it's miles faster. Trying every possible subset takes O(2^n·N) time. It is possible to solve the problem in O(2^(n/2)·N) time with the fastest method. The method is described under the heading 'Exponential time algorithm' on the Subset sum problem page. Putting that statement back when it is disproven on another page is silly. —Preceding unsigned comment added by 86.156.53.169 (talk) 20:50, August 27, 2007 (UTC)

I suppose this is simply a difference in the interpretation of the term "significantly". Let it suffice to say that no polynomial-time algorithm is known. Dcoetzee 00:02, 28 August 2007 (UTC)

I worked on the assumption that someone might actually try to implement a method in code. In practice the speed difference would be huge and a programmer would want to know about it.
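For concreteness, the speedup under discussion is the classic meet-in-the-middle idea: enumerate the subset sums of each half of the input, then match them up, giving roughly O(2^(n/2)) work instead of O(2^n). A minimal Python sketch of that technique (the function names are mine, not from the linked article):

```python
from bisect import bisect_left

def half_sums(items):
    """All distinct subset sums of `items`, as a sorted list."""
    sums = {0}
    for x in items:
        sums |= {s + x for s in sums}
    return sorted(sums)

def subset_sum(items, target):
    """Meet-in-the-middle subset sum: split the set in half,
    enumerate each half's 2^(n/2) sums, and match them up.
    Returns True iff some subset of `items` sums to `target`."""
    mid = len(items) // 2
    left = half_sums(items[:mid])
    right = half_sums(items[mid:])
    for s in left:
        # binary-search the right half for the complement of s
        i = bisect_left(right, target - s)
        if i < len(right) and right[i] == target - s:
            return True
    return False
```

Still exponential, but exponential in n/2, which is the "miles faster" difference a programmer would care about.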

I'd like to suggest that the first example be based on the Traveling Salesman problem. It's much easier for the common person who's not involved in complexity theory to understand as a practical example in my opinion. Also, where do we have pointers to Statistical Physics, Thermodynamics and Self Organizing systems? This seems a significant omission.—Preceding unsigned comment added by 86.156.53.247 (talk) 21:59, August 28, 2007 (UTC) 75.211.0.29 (talk) 18:58, 16 May 2008 (UTC)

## Accuracy

I don't like the word "likely" in this context. Either something is in P or it isn't. We just don't know which one it is yet. What we do know is that if any NP-hard problem is in P then all NP problems are in P.

Lawrence D'Anna

There seem to be some contradictions in this page. First it says "the NP-complete problems are the hardest problems in NP" and later it says "It isn't really correct to say that NP-complete problems are the hardest problems in NP". I didn't update the page because I am not an expert in this, but would someone please consider fixing this confusing bit?

Thanks,

Rodrigo de Salvo Braz

"It isn't really correct to say that NP-complete problems are the hardest problems in NP. Assuming that P and NP are not equal, there are guaranteed to be an infinite number of problems that are in NP, but are neither NP-complete nor in P. Some of these problems may actually have higher complexity than some of the NP-complete problems."

I've heard of phase transitions in NP-complete problems. How does one know that, if P is not equal to NP, then there are problems that are in NP but neither in P nor in NP-complete?

Thanks

David Bernier

There is a detailed explanation on pages 154-155 of Garey and Johnson. It follows from a 1975 theorem of Ladner that if P is not equal to NP, then for every NP-complete problem, there is a subproblem that can be recognized in polynomial time that is neither NP-complete nor in P. Dominus 21:22 Mar 14, 2003 (UTC)

NP-complete problems are the hardest problems in NP by definition (a problem is NP-complete if it belongs to NP and it can be used to solve any NP problem through a polynomial-time reduction). SAT was proven to be NP-complete by Cook. Other problems were proven to be NP-complete by solving the SAT problem with them. Jan David Mol 12:10 Jun 2, 2003 (UTC)

Might it be a good idea to change 'problems' in the first line (or even throughout) to 'decision problems'? See the wikipedia page on NP-equivalent for my reason. Léon Planken 22:29 Jun 19, 2003 (UTC)

Hi, does the following imply subexponential time for an NP-complete problem? I can't find the manuscript but I saw it cited somewhere. Does anyone know of or has anyone read the manuscript? W.D. Smith. Finding the optimum N-city traveling salesman tour in the Euclidean plane in subexponential time and polynomial space. Manuscript, 1988. 68.121.211.14 01:51, 21 Apr 2005 (UTC)

The planar TSP problem is not NP-complete, as far as I know. I don't believe there is any subexponential algorithm for an NP-complete problem (unless something like O(2^n/log n) counts). Deco 06:13, 21 Apr 2005 (UTC)

Planar TSP is definitely NP-complete, by reduction from planar Hamiltonian cycle. 68.121.211.14 19:15, 21 Apr 2005 (UTC)

Oops, okay. You're right, it's safer this way anyway (who knows what people will find). Deco 05:42, 22 Apr 2005 (UTC)
Correct me if I'm wrong, but planar Hamiltonian cycle is the problem of finding a Hamiltonian cycle in a planar graph, i.e. a graph which has no crossing edges when drawn on the plane. Planar TSP is the problem of finding a minimum-cost TSP cycle for points on the plane, i.e. the associated graph is complete with edges between every pair of points, and the distances are the Euclidean distances between the points. I don't see any simple reduction from planar Hamiltonian cycle to planar TSP. 27 Apr 2005

Citing the article:

"In complexity theory, the NP-complete problems are the most difficult problems in NP, in the sense that they are the ones most likely not to be in P. The reason is that if you could find a way to solve an NP-complete problem quickly, then you could use that algorithm to solve all NP problems quickly."

I understand that the issue here is not how "quickly" an algorithm can solve a given problem, but instead that you can calculate the time that the algorithm would need to solve the problem.

## NPI

I cut this paragraph:

"It isn't really correct to say that NP-complete problems are the hardest problems in NP. Assuming that P and NP are not equal, there are guaranteed to be an infinite number of problems that are in NP, but are neither NP-complete nor in P. Some of these problems may actually have higher complexity than some of the NP-complete problems."

Although P ≠ NP implies that NPI = NP−NPC−P is nonempty, problems in NPI can be reduced to problems in NPC whereas problems in NPC cannot be reduced to problems in NPI. It seems reasonable to describe this state of affairs as "problems in NPC are harder than problems in NPI". So the paragraph is misleading.

Discussion of NPI probably belongs somewhere. Gdr 21:40, 2004 Jul 21 (UTC)

## Imperfect solutions - Approximation

Approximation: An algorithm that quickly finds a suboptimal solution that is within a
certain (often known) range of  the optimal one. Not all NP-complete problems have
good approximation algorithms, and for some problems finding a good
approximation algorithm is enough to solve the problem itself.


In the paragraph above, I don't understand the bolded phrase. Can someone explain? -- Sundar 07:26, Nov 25, 2004 (UTC)

I don't understand it either. I guess what is meant is that approximating some problems well is NP-hard and so essentially no easier than solving them exactly. I'll just remove it, since the other approaches don't have their "caveats" listed either.
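As a concrete instance of the approximation idea in the quoted paragraph, the textbook example is the matching-based 2-approximation for minimum vertex cover: the cover it finds is guaranteed to be at most twice the optimum. A sketch (mine, not from the article):

```python
def approx_vertex_cover(edges):
    """Greedy 2-approximation for minimum vertex cover.

    Repeatedly pick an edge with both endpoints uncovered and add
    both endpoints. The chosen edges form a matching, and any optimal
    cover must contain at least one endpoint of each, so the result
    is at most twice the size of an optimal cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the path 1–2–3–4 it returns {1, 2, 3, 4}, twice the optimal cover {2, 3}: quick, suboptimal, but within a known factor of optimal, exactly as the paragraph describes.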

## Suboptimality

Having got used to the notion of optimality in relation only to performance, it didn't occur to me that it can be used with correctness (in the Approximation algorithms paragraph). Maybe this is because I'm not a native speaker of English. Can someone reword it to make clear that we are talking about correctness? -- Sundar 08:33, Feb 3, 2005 (UTC)

## NP-hard

Under "formal definition ...", last paragraph about "NP-hard", it says "For example, choosing the perfect move in certain board games on an arbitrarily large board is (informally) "strictly harder than" any NP problem."

Shouldn't that be "at least as hard as any NP problem" rather than "strictly harder than"? For example, Garey and Johnson, page 109 (first page of chapter 5), says "at least as hard as the NP-complete problems." --Bubba73 00:23, 25 May 2005 (UTC)

With most games, you're right that "at least as hard" is the most we can say at this point. Some forms of Go are EXP-complete, so they can't be solved in P, but might still be solvable in NP (although probably not). I suppose there might exist games which are strictly outside NP, but I'm not familiar with any. Deco 23:54, 24 May 2005 (UTC)
That paragraph probably needs to be clarified a little more. NP-hard means "at least as hard as", but some problems are "strictly harder". --Bubba73 00:23, 25 May 2005 (UTC)

## Is this problem NP-complete?

I found this problem and wondered if it was NP-complete: You have a bunch of tasks that take different amounts of time and some tasks can't be started until other tasks are finished. You have unlimited resources, can do different tasks in parallel, and are trying to find the shortest time to complete all the tasks.

I'm going to assume good faith that you're not trying to get us to do your homework for you. What you describe is essentially a scheduling problem, also called job or task sequencing, and was one of Karp's 21 NP-complete problems. He showed it NP-complete by reduction from the knapsack problem. The graph you describe is called a task dependency graph. Technically, the precise problem you describe is actually NP-hard (it's not NP-complete because it's not even a decision problem). Deco 05:24, 19 Jun 2005 (UTC)

Thanks!--SurrealWarrior 18:43, 20 Jun 2005 (UTC)
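As an aside on the task dependency graph mentioned above: with genuinely unlimited parallelism, the earliest possible finish time of each task can be computed in polynomial time by a longest-path pass over a topological order (the hard scheduling variants arise once deadlines or resource limits enter). A sketch, with made-up task names:

```python
from graphlib import TopologicalSorter

def earliest_finish(durations, deps):
    """Earliest finish time of each task, assuming unlimited parallelism.

    durations: {task: duration}; deps: {task: set of prerequisite tasks}.
    finish(t) = duration(t) + max(finish(p) for prerequisites p of t).
    """
    finish = {}
    # static_order() yields each task after all of its prerequisites
    for task in TopologicalSorter(deps).static_order():
        start = max((finish[p] for p in deps.get(task, ())), default=0)
        finish[task] = start + durations[task]
    return finish
```

The overall makespan under this model is then the maximum finish time, i.e. the length of the critical path through the dependency graph.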

## Proofs

Why don't we have any NP-completeness proofs on Wikipedia? In a lot of the individual problem pages I notice that it might mention the kind of reduction involved, but it never goes into detail... Is there a reason for this? -Averisk 08:48, 12 December 2005 (UTC)

Actually we do. Usually we describe proofs in articles associated with particular theorems, such as Cook's theorem. Most NP reductions are simple enough though that there's no reason we couldn't include them in the articles for the problems (or at least a proof sketch). One issue, however, is that books usually present reductions in a specific order to avoid relying on circular reasoning, for example reducing two problems to each other — this is more difficult in our flat, interlinked article organization. Deco 09:02, 12 December 2005 (UTC)
Wow, fast response. Yes, I see what you mean. Maybe we could make separate pages for the proofs and follow a specific order like you said? -Averisk 09:14, 12 December 2005 (UTC)

## Travelling salesman problem is not NP-complete

This article is rather careless: it repeatedly claims that the travelling salesman problem is NP-complete. This isn't the case: the TSP is a function problem and only decision problems can be NP-complete. (TSP is complete for FP^NP.) Of course, there is a decision version of the TSP that is NP-complete (given a weighted graph and a bound k, is there a tour with total weight < k?). But that's not what people usually mean when they say "TSP": they want to know the actual tour!

There are similar problems with some of the other problems mentioned (e.g. in Hamilton path we generally want the path, in Boolean satisfiability we want the satisfying assignment, and so on). I've made one change to help with this, but it needs more work. Many authors adopt a convention of using all-caps for decision problems (thus "Hamilton path" vs. HAMILTON PATH). Gdr 21:59, 6 January 2006 (UTC)

I agree that we should be precise, but it is fairly common especially in informal forums to talk about the function problem when one really means the decision problem. One could argue that "traveling salesman problem" is an ambiguous term that can refer to either version. If we are to be precise about this though, let's adopt a convention that avoids irritating verbosity, such as the all-caps thing you describe. Deco 22:09, 6 January 2006 (UTC)

I found the use of TSP as an example very confusing. It seems to me that far from being "friendly" or "approachable", it obviously conflicts with your definitions of what NP-completeness means. A TSP result is fairly obviously as hard to verify as to compute, since you would need to at least partially compute all other routes to prove there is no shorter route out there. This is not true of the decision version, which is simple to verify as per your definition. If TSP were in NPC as defined above, wouldn't that prove P=NP? Using this example confused me when I read the article. [Rob: August 2010] —Preceding unsigned comment added by 86.168.27.244 (talk) 08:15, 13 August 2010 (UTC)

Yes, but from what I understand, you partially calculate all the other routes just enough to prove that they are longer (I think by substituting subsets of segments) and do it in polynomial time. It might be a big calculation, but its growth with respect to N has a known bound that is less than the growth in complexity of finding the solution you are testing. 207.237.159.111 (talk) 15:58, 11 April 2012 (UTC)
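The asymmetry this thread circles around is easy to see in code: for the decision version, checking a proposed tour against a bound k takes time linear in the tour, whereas certifying that a tour is *optimal* is not known to be easy. A small verifier sketch (distance matrix and names are illustrative):

```python
def tour_within_bound(dist, tour, k):
    """Polynomial-time verifier for the decision version of TSP.

    dist: square matrix of pairwise distances; tour: a proposed
    ordering of the cities. Returns True iff the tour visits every
    city exactly once and its total length is below the bound k.
    """
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False  # not a permutation of all cities
    total = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total < k
```

This is exactly why the decision TSP sits in NP while the function version ("find the shortest tour") is usually discussed as NP-hard rather than NP-complete.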

## Arbitrary reward

I have removed the following statement:

Nobody has yet been able to prove whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics. The Clay Mathematics Institute in Cambridge, MA is offering a $1 million reward to anyone who has a formal proof that P=NP or that P≠NP.

Clearly this is vandalism by some jokester who thinks it would be funny for someone to jump out of their tub in the nude yelling Eureka! when they find a trivial solution to P≠NP. 59.112.43.12 10:15, 4 October 2006 (UTC)

No, it's true, even if P=NP is frustratingly impossible to disprove. Now restored. (Whoops, SAT is based on the size of the input, not the number of literals.) Davilla 17:09, 4 October 2006 (UTC)

Yes, and CMI is offering a $1 million reward. It's a Millennium Prize Problem. Deco 21:46, 4 October 2006 (UTC)

I proved this some years ago, and the problem is in the journals and in the fact that Cook's Theorem is false: http://www.archive.org/search.php?query=%22Juan%20Manuel%20Dato%22 —Preceding unsigned comment added by 79.109.168.68 (talk) 11:59, 7 March 2011 (UTC)

## So space doesn't matter???

My edits were reverted. Sorry, you can't dispute the Space-time_tradeoff. In the P=NP question space is not irrelevant. If space were not a consideration then I should be winning the Nobel prize in mathematics, since I have already done a proof in my homework assignment that ***ALL*** NP class problems can be solved in polynomial time, hence P=NP (if you COMPLETELY ignore space considerations).

Here is an example proof: Start by using DNA computing/(exponentially reproducing cell cultures) to generate 2^N input strings in time N. Then use a list ranking algorithm that implements pointer jumping so that it can assign a unique value to all 2^N inputs in time N. Then, using a genetically engineered cell culture, create 2^N DNA computers in time N to verify all 2^N inputs in at most time and space Θ(n^k) (i.e. polynomial time and space) in parallel; the only input string remaining is the satisfying answer. Since all NP class algorithms have to be run in polynomial time and space, using the above method can solve all NP problems in polynomial time, but ONLY with an exponential space trade-off. This is a very simple application of massive parallelism. Since I have just proved it myself I will not wait to correct the article. If you want to (or do) revert it, post your reasons here. I should point out that I am in no way an expert on this topic, but I should also point out that my above logic has been approved by my professor, who makes his living off of this topic. --ANONYMOUS COWARD0xC0DE 07:35, 1 December 2006 (UTC)

I'm not familiar with the subject matter, but please see core content policies including verifiability and no original research. If this idea has been published elsewhere, please cite a reliable publication to back up the edits. I imagine that's what other editors more familiar with the matter might say. Thanks for your time. Luna Santin 07:37, 1 December 2006 (UTC)
 At first sight, both examples give the impression that we have now an
efficient solution of big instances of NP-complete problems as only polynomial
time is needed. But, it is very important to regard that the exponential
complexity has not been eliminated, but it has only be shifted from time to
space. [1]

 Other applications include attacking the governmental encryption scheme DES
using molecular computers, solving the satisfiability problem, and using DNA
to factor integers. [2]

However they both reference the same work by Dr. Leonard Adleman who was able to solve a Hamiltonian path problem instance with DNA computations. --ANONYMOUS COWARD0xC0DE 05:41, 23 December 2006 (UTC)
NP is defined using a particular machine model, namely nondeterministic Turing machines. On nondeterministic Turing machines, polynomial time implies polynomial space. Your "proof" does not use nondeterministic Turing machines, and therefore does not apply to NP. --Mellum 08:03, 1 December 2006 (UTC)
 A consequence of this definition is that if we had a polynomial time algorithm
for C, we could solve all problems in NP in polynomial time.

and I am also saying that all problems in NP can be solved in polynomial time (it may take all the space of the known universe, but it can still be solved in polynomial time). I can (theoretically) construct a machine to solve all NP class problems in polynomial time; that applies to NP and makes redundant the above quoted statement. Yes, non-deterministic Turing machines are an integral part of the discussion of NP, but that doesn't mean that massively parallel deterministic algorithms should be ignored in the discussion of NP-completeness. --ANONYMOUS COWARD0xC0DE 05:41, 23 December 2006 (UTC)
What does "applies to NP" mean? If it means that any program for your machine can be converted to a program for a nondeterministic Turing machine with only a polynomial slowdown, you would have solved the P vs. NP problem (which seems somewhat unlikely); otherwise, your statement is irrelevant. --Mellum 20:21, 28 December 2006 (UTC)
You said my "proof" does not apply to NP. I vehemently disagree; I have a machine that can solve all NP class problems in polynomial time (see my cited references or my proof for an example), therefore that machine can be considered in the discussion of NP class problems. I am NOT talking about a nondeterministic Turing machine, I am talking about a Massively_parallel deterministic Turing machine (but I'm not talking about a simple supercomputer with thousands of CPU's I'm talking about a DNA computer with billions of "CPU's" or whatever you want to call it). I am not making any statements about P=NP or that P is a strict subset of NP. I am only saying that all NP class problems can be computed in polynomial time at the expense of extra space complexity. This is almost word for word what I have already sourced "But, it is very important to regard that the exponential complexity has not been eliminated, but it has only be shifted from time to space.". So then after you shift it from time to space you now have to ask the P vs. NP question with regard to space. So in the end the P vs. NP question is with regard to parallel "work"=(time×space or time×(number of CPU's)). The article's statement "A consequence of this definition is that if we had a polynomial time algorithm for C, we could solve all problems in NP in polynomial time." is extremely redundant since all NP class problems can be decided in polynomial time. The people at Complexity_classes_P_and_NP have accepted my edit "In essence, the P = NP question asks: if positive solutions to a YES/NO problem can be verified quickly in polynomial time, can the answers also be computed quickly in polynomial time and space?", because without considering space complexity the entire discussion becomes redundant because all NP class problems can be solved in polynomial time. --ANONYMOUS COWARD0xC0DE 04:55, 30 December 2006 (UTC)
The term "polynomial time" must refer to a particular machine model, since otherwise it is meaningless. In this context, it refers to deterministic Turing machines. If you allow other machines, you might as well take an NP oracle machine and claim that everything in NP can be solved in constant time. But this is merely a shifting of goal posts, and irrelevant to the question at hand. So please don't add your edits again. --Mellum 11:57, 30 December 2006 (UTC)
I agree. You assume that the sentence implies the context of deterministic Turing machines, however that key phrase is not explicitly stated anywhere in that sentence. I never thought to complain on these grounds. The "algorithm" mentioned in the sentence could just as easily be intended for a nondeterministic Turing machine and no such restriction is placed in the sentence. So now the only thing I have left to dispute is whether or not parallel computations are deterministic or not. Just respond to that in my other post(thread) below if you wish. --ANONYMOUS COWARD0xC0DE 12:18, 2 January 2007 (UTC)

All NP and P class problems are subsets of PSPACE therefore I am justified. --ANONYMOUS COWARD0xC0DE 05:22, 30 December 2006 (UTC)

Your argument is not only original research, but is not applicable to the problem. Assuming for argument's sake that you could indeed make each cell in an exponentially reproducing culture do useful verification computation, and you could support the resources needed to grow that culture to useful problem sizes, then you could solve the problem in polynomial time, but this is irrelevant, because the question of whether P = NP is not about whether arbitrary computational mechanisms can solve NP problems in polynomial time, but whether deterministic Turing machines (used in the definition of P) can do so. In fact, your DNA computing/cell culture ideas are supposed physical realisations of a nondeterministic machine, which are often said to be "making copies of themselves" as a way of understanding them. Thus it should be no surprise that they can solve NP problems in polynomial time, since NP is defined as the problems solvable by a nondeterministic machine in polynomial time.
As for the matter of space, deterministic Turing machines as used in the definition of P do not ever consume more space than time, and I'm heading over to Complexity classes P and NP to revert your redundant additions there as well. Deco 00:55, 1 January 2007 (UTC)
User:Mellum beat me to it, thanks. Deco 00:57, 1 January 2007 (UTC)
I am not certain as to what you are calling original research, my assertion that since NP is a subset of PSPACE that I am justified, or my "proof". As far as I know Massively_parallel machines are not nondeterministic, nor are supercomputers. Space concerns are the only thing I am talking about. Let me make this clear I am talking about the sentence "A consequence of this definition is that if we had a polynomial time algorithm for C, we could solve all problems in NP in polynomial time." I read this sentence as "A consequence of this definition is that since we have a polynomial time algorithm for C, we can solve all problems in NP in polynomial time." Indeed Mellum pointed out that poly-time could just as well be referring to algorithms for nondeterministic Turing machines. Are you saying that supercomputers are not deterministic Turing machines (I agree that non-parallel machines never use more space than time)? I think I may understand why you keep referring to my "proof" as nondeterministic, because when I cited a source, it made use of DNA-computation with nondeterministic aspects, but in my "proof" all possible search paths were formed, which would negate any nondeterministic effects. At the time I wasn't really paying any attention to nondeterminism, just space-time trade-off examples that closely resembled my "proof". I hope that re-clarification helps. Just an observation Mellum stated "Your "proof" does not use nondeterministic Turing machines" yet Deco stated "your DNA computing/cell culture ideas are supposed physical realisations of a nondeterministic machine" so as I briefly assumed in my previous sentence I am assuming one of you are referring to my citation and the other my "proof". --ANONYMOUS COWARD0xC0DE 12:18, 2 January 2007 (UTC)
In short: highly parallelized machines with a fixed number of parallel processors can be simulated by deterministic Turing machines with at most a polynomial factor slowdown, whereas machines with a number of parallel processors that increases over time may not be, and in particular I don't know of any way to do this with the two models that you describe. Deco 15:17, 2 January 2007 (UTC)
Thanks! Just one more note, I think it is not important that there are a fixed number of CPU's, but only that the maximum number of CPU's in use, at any one time, in relation to the problem size is exponential, and therefore no polynomial simulation on a universal Turing machine can be made. So while my machine may be deterministic (not DTM just D) and massively parallel it is not Turing equivalent, and therefore not relevant. --ANONYMOUS COWARD0xC0DE 08:01, 3 January 2007 (UTC)
Well, it's Turing equivalent, it's just that a straightforward simulation by a DTM requires exponential slowdown, such that the DTM would require exponential time where the model takes polytime. If P=NP then it could simulate it in less time. But that sort of defeats the point. You're right that if the number of procs is polynomial then again the simulation is possible in polytime. Deco 12:19, 3 January 2007 (UTC)
Ok, I had a misconception (mistook capable of emulation for efficient (poly-time) emulation). So in essence my changes of if we had a polynomial time (and space) algorithm for C, we could solve all problems in NP in polynomial time (and space). could be stated as if we had a polynomial time algorithm (on a UTM) for C, we could solve all problems in NP in polynomial time. --ANONYMOUS COWARD0xC0DE 02:29, 10 January 2007 (UTC)

I take it back, PSPACE only means that the problem can be solved with poly-space algorithms, so this fact alone is not justification. For some reason I was thinking along the lines that any algorithm for a PSPACE problem must operate in polynomial space. --ANONYMOUS COWARD0xC0DE 12:18, 2 January 2007 (UTC)

## With respect to other reductions headline

I feel that the section "With respect to other reductions" should be moved to the following page: http://en.wikipedia.org/wiki/Polynomial-time_many-one_reduction . I shall do that if there are no objections. As the glorious weep (talk) 08:14, 19 November 2007 (UTC)

I believe that would be inappropriate. This is the correct place to discuss this, because it involves the class NP-complete and its definition, rather than reductions in general. Why would you want to move it? Dcoetzee 07:56, 19 November 2007 (UTC)
This section does not seem to fit into this particular article. Perhaps it is not well integrated. As the glorious weep (talk) 08:22, 19 November 2007 (UTC)

1. Briefly discuss that the encoding of the problem can have an impact on the evaluation of complexity. In such cases, the most concise encoding should be used. This could answer ANONYMOUS COWARD0xC0DE's question, as I believe this is what he's talking about. Page 974 of Introduction to Algorithms, second edition, Cormen, Leiserson, Rivest.

2. Need to clarify why "To prove that an NP problem A is in fact an NP-complete problem it is sufficient to show that an already known NP-complete problem reduces to A."

3. We should clarify why having a polynomial time solution to one NP-complete problem causes the whole thing to collapse.

Nobody can: http://www.archive.org/details/CorrectionToTheTheoremOfCookwiki-version —Preceding unsigned comment added by 79.109.168.68 (talk) 12:41, 7 March 2011 (UTC)

4. Perhaps mention that NP problems are the problems that can be solved on a non-deterministic Turing machine in polynomial time.

5. Talk about NP-complete problems being considered less tractable than P problems, and why. —Preceding unsigned comment added by As the glorious weep (talkcontribs) 08:10, 19 November 2007 (UTC)As the glorious weep (talk) 08:15, 19 November 2007 (UTC)
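On point 2 above, a worked instance may help future editors: the reduction from Independent Set to Vertex Cover is just complementation, because S is an independent set of G exactly when V−S is a vertex cover. A sketch (the brute-force "oracle" is only there to check the mapping on small graphs):

```python
from itertools import combinations

def reduce_indset_to_vertexcover(n, edges, k):
    """Polynomial-time reduction: G has an independent set of size k
    iff G has a vertex cover of size n - k, so we simply map
    (G, k) to (G, n - k). This is why a known-hard problem reducing
    to A makes A at least as hard."""
    return n, edges, n - k

def has_vertex_cover(n, edges, k):
    """Brute-force decision oracle for Vertex Cover (exponential;
    for demonstrating the reduction only)."""
    return any(
        all(u in c or v in c for u, v in edges)
        for c in map(set, combinations(range(n), k))
    )
```

On a triangle (n=3), an independent set of size 1 exists and maps to a vertex cover of size 2, which also exists; size 2 maps to a size-1 cover, and both correctly fail.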

## Rename page to NP-Completeness?

Per WP:MOS, "article titles generally comprise nouns or noun phrases." NP-complete is an adjective. NP-completeness is the noun form. Oren0 (talk) 18:11, 28 November 2007 (UTC)

Nope. "NP-complete" is the name of a complexity class, and so is a noun phrase. Dcoetzee 19:21, 28 November 2007 (UTC)
I would also rename it into NP-completeness. "NP-complete" is generally not used as the name of a complexity class but clearly as an adjective. ylloh (talk) 13:01, 29 November 2007 (UTC)
It's frequently used as the name of a class, on Complexity Zoo and in many papers. This is also the established naming scheme on Wikipedia (see P-complete, Sharp P complete, EXPTIME-complete). It's true that it's used more often in the adjective form, but that doesn't make the noun form an illegitimate name. Dcoetzee 23:05, 29 November 2007 (UTC)
Even if it can be used as a noun phrase, that's not how the article uses it. The lead needs to be rewritten if we're meaning NP-complete to refer to a class. Read P-complete's lead: "In complexity theory, the complexity class P-complete is..." Note that P-complete is used as a noun. Compare that to the lead of this page: "In complexity theory, the NP-complete problems are..." Here, NP-complete is used as an adjective. If indeed the article is supposed to be about the class, the lead should look like P-complete's IMO. Oren0 20:12, 3 December 2007 (UTC)
Agreed. Fixed. Dcoetzee 21:12, 3 December 2007 (UTC)

## Intro for laymen

I added a general introduction for laymen. I understand that NP-complete cannot be accurately described in layman's terms, however, some type of intro needs to be done here that is without jargon. The accurate introduction, for CS people, is still there and immediately follows the layman's introduction, so nothing has been lost. --Clangin (talk) 18:39, 6 July 2008 (UTC)

• Nice try, but I still can't understand the intro;) Seriously, more explanation is needed here for the average reader. I haven't the foggiest what it's on about. Malick78 (talk) 14:25, 14 February 2009 (UTC)

I changed the definition to "being in NP and being NP-Hard" - that's at least how I learnt it, and I found that it should be specifically mentioned at the top of the article. 114.36.56.75 (talk) 06:53, 25 June 2010 (UTC)

## NP-complete is a CLASS???

I think it is highly unusual that the authors of this page have insisted on calling NP-complete a COMPLEXITY CLASS. Grammatically, "NP-complete" is an ADJECTIVE. NP-completeness is a property of a computational problem. Of course, there is a corresponding class as well, but NP-complete is not that class. I think this annoying feature has been fixed earlier but somebody seems to want to have it the other way. Before a discussion on this is started, I would suggest trying to find any evidence for the idea that NP-complete is a class. A quick Google search does not seem to produce any evidence for this idea.

—Preceding unsigned comment added by 203.143.165.107 (talk) 01:02, 14 August 2008 (UTC)

NP-complete is both an adjective and a class. There are many published sources that use the term this way; try this Google search. It's also sometimes called NPC. Dcoetzee 22:58, 14 August 2008 (UTC)

Please find AN AUTHORITATIVE SOURCE, like a theory of computation textbook, where NP-complete is a class. Otherwise I am going to claim that the _mistaken_ view stems from Wikipedia! —Preceding unsigned comment added by 203.143.165.111 (talk) 04:37, 4 November 2009 (UTC)

How about Chapter 34 of Introduction to Algorithms by Cormen, Leiserson, Rivest and Stein. This is the de facto standard algorithms book. On page 968 of the second edition, it states, "Informally, a problem is in the class NPC--and we refer to it as being NP-complete--if it is in NP and it is as "hard" as any problem in NP." Then, on page 986, it states, "We also define NPC to be the class of NP-complete languages."

So to me, it sounds like this book defines NP-complete as a description of the hardest languages (problems) in NP. NPC is the class of NP-complete languages. — Preceding unsigned comment added by Etschreiber (talkcontribs) 00:22, 12 February 2013 (UTC)

There's an apparent self-contradiction in the introductory paragraph -- or at the very least it's confusingly worded. The statement "Any given solution to the problem can be verified quickly" seems at odds with the statement "The most notable characteristic of NP-complete problems is that no fast solution to them is known." 194.60.38.198 (talk) 14:40, 19 November 2008 (UTC)

These aren't at all self-contradictory. A given solution can be efficiently verified, but there's no known efficient way to find a solution in the first place. I've revised to clarify. Dcoetzee 17:55, 19 November 2008 (UTC)
That does look a bit clearer, though actually on a second reading I did figure out that was what it was saying. Thanks. 194.60.38.198 (talk) 09:31, 20 November 2008 (UTC)
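The verify-vs-find asymmetry discussed above can be sketched concretely for Subset Sum. The function names are illustrative, and the search shown is the naive try-every-subset method (not the faster meet-in-the-middle approach mentioned elsewhere on this page):

```python
# Verifying a claimed Subset Sum solution is fast (linear in its
# length), while the obvious way to *find* one tries all 2^n subsets.
# This sketch ignores multiplicity of repeated elements for brevity.
from itertools import combinations

def verify(numbers, target, candidate):
    """Check a claimed solution; runs in time linear in the input."""
    return all(x in numbers for x in candidate) and sum(candidate) == target

def find_by_brute_force(numbers, target):
    """Search every subset: exponential in len(numbers)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = find_by_brute_force(nums, 9)
assert solution is not None and verify(nums, 9, solution)
```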

## Opening lines

I have to admit to being pretty new to computational complexity. However, based on my recent reading I'm not sure that the opening lines properly define an NP-complete problem: In computational complexity theory, the complexity class NP-complete (abbreviated NP-C or NPC, NP standing for Nondeterministic Polynomial time) is a class of problems having two properties: Any given solution to the problem can be verified quickly (in polynomial time); the set of problems with this property is called NP. If the problem can be solved quickly (in polynomial time), then so can every problem in NP.

So where does the 'N' come into it? It's surely that obtaining what turns out to be the correct solution can only be done with a non-deterministic automaton, and as such may require exponential time to complete - right? Fizzackerly (talk) 20:18, 16 April 2009 (UTC)

Actually, the "can be verified in polynomial time by a deterministic Turing machine" definition is equivalent to the "can be decided in polynomial time by a nondeterministic Turing machine" definition. We eschew mention of nondeterministic Turing machines in this introductory article because they're less intuitive and require more background. Also, you're confusing your terminology - non-deterministic automata decide the set of regular languages, which can be decided in linear time (all regular languages are in P). Additionally, it is not known whether NP-complete problems require exponential time to decide (this is essentially what the P=NP problem is all about, although it's conceivable that someone might find a subexponential but superpolynomial solution for an NP-complete problem even if P ≠ NP). Dcoetzee 08:57, 17 April 2009 (UTC)
It would be more accurate to say "a 'yes' result can be verified in polynomial time...". I think the fact that we don't know whether or not a 'no' result can be verified in polynomial time is probably the key to an eventual proof. While my Ph.D. is in Human Factors, not Theoretical Computer Science, I do have some training and have been fiddling with the problem off and on for years. If someone could prove that a 'no' result to something like, say, CLIQUE could not be verified in polynomial time, you'd have your proof right there. Unfortunately, it's hard to find references to such work for those of us not in the know (anyone have a pointer or two to offer?) FWIW, I also think that the proof would lead to a nice way of classifying various subproblems/inputs for various NP-complete problems (showing how Combinatorics moves us up from polynomial to exponential, etc.—great for first or second semester courses on the subject). Matt Plumlee 20:08, 26 September 2012 (UTC) — Preceding unsigned comment added by Mattplumlee (talkcontribs)

## Social implications

Has anything been written on the social implications of NP=P? I mean if NP complete turns out to be easily solvable what will fall apart? Secure encryption? Smart cards? Personal privacy? Or none of these? --BozMo talk 20:23, 16 April 2009 (UTC)

Impossible to predict. Encryption hinges on the availability of some practical trapdoor one-way function - even if all problems in NP are solved efficiently, one is likely to be discovered at a higher level of complexity, taking advantage of the new efficient primitives for its trapdoor. Dcoetzee 09:01, 17 April 2009 (UTC)
For encryption, wouldn't you need to be able to verify the key in polytime? In short, wouldn't any encryption not based on NP require tremendous time to decrypt, even with the key? 66.202.66.78 (talk) 07:49, 4 June 2010 (UTC)
Russell Impagliazzo has written a widely-cited paper on some of the implications of various outcomes to unknown problems such as P=NP and the existence of one-way functions. It is well worth reading. A Personal View of Average-Case Complexity. —Mark Dominus (talk) 18:49, 4 June 2010 (UTC)

Can we not include that NP-Complete problems also are not solvable in polynomial time? This would surely have benefited me and more clearly explains what NP-Complete really is. --SentinelAlpha (talk) 16:08, 9 November 2010 (UTC)

But we do not know whether NP-complete problems are solvable in polynomial time, that's what the whole P vs. NP business is about. The fact that no polynomial-time solution to NP-complete problems is known at present is mentioned several times in the article.—Emil J. 16:25, 9 November 2010 (UTC)

I generated an algorithm that cannot be decrypted. The demonstration is still in Spanish, but the correspondence exists and it cannot be discovered in its Hilbert space because the probability of every symbol is EXACTLY the same. I programmed an example in Python 3.0; if you are really interested, contact me: jumadaru@gmail.com or read about it: http://www.archive.org/search.php?query=%22Juan%20Manuel%20Dato%22 It is based on generating a correspondence of data and working with it to obtain certification properties, as in digital money, but only with the containers of instances. So we cannot guess the values of the instances, at least if we do not try them one by one.

There is another way to demonstrate that NP differs from P, namely via #P. I'm finishing my theory about that, and I have written a book about it. The conclusions about the connection between energy and entropy are amazing. But, of course, I am the author. Is there anyone interested? The American society has problems with the typesetting of my work. That sounds too, too stupid! Do you know any journal where I can show my work? —Preceding unsigned comment added by 79.109.168.68 (talk) 12:35, 7 March 2011 (UTC)

Please don't edit this page unless you have a PhD and a minimum 10 years CS research experience. The article is not in a good state. The previous first sentence claiming that NP-complete is a complexity class is just pure nonsense. Of course, you can define a complexity class (a class of decision problems) that coincides with the class of NP-complete problems, but this is not how "NP-complete" is defined anywhere. If you believe otherwise, please show an AUTHORITATIVE SOURCE (Google or Wikipedia don't count here.) —Preceding unsigned comment added by 203.143.165.111 (talk) 04:55, 4 November 2009 (UTC)

NPC can be thought of as a class. I don't see any problem with this. Any set of problems can be defined to be a complexity class, and it so happens that NPC is a class. If you want a reference, here you go: The complexity zoo article on NPC. --Robin (talk) 13:27, 4 November 2009 (UTC)
Certainly the "correction" by this IP contains an incorrect definition so I don't think we should take the "not a class" claim too seriously. The French are semantically distinct in this kind of way but most people use the language pretty loosely. --BozMo talk 20:03, 4 November 2009 (UTC)
If we held to those criteria around the wiki, there wouldn't be a wiki. Twin Bird (talk) 23:11, 23 July 2011 (UTC)

## Formal definition bug?

"NP-complete is a subset of NP, the set of all decision problems whose solutions can be verified in polynomial time; NP may be equivalently defined as the set of decision problems that can be solved in polynomial time on a nondeterministic Turing machine. A problem p in NP is also in NPC if and only if every other problem in NP can be transformed into p in polynomial time. NP-complete can also be used as an adjective: problems in the class NP-complete are known as NP-complete problems."

I'm pretty sure that is incorrect. NP problems cannot be solved in polynomial time. That is the whole point, right? Either the nondeterministic Turing machine has a special property, someone tried to vandalize it, or someone missed a "not" or "verified" somewhere. —Preceding unsigned comment added by 148.85.245.244 (talk) 13:53, 21 December 2009 (UTC)

NP = Solvable in polynomial time on a non-deterministic TM. P = Solvable in polynomial time on a deterministic TM. --Robin (talk) 14:38, 21 December 2009 (UTC)
Probably worth (for people for whom a non-deterministic TM is gobbledygook) saying NP = checkable in polynomial time? Or is that technically wrong? --BozMo talk 07:34, 22 December 2009 (UTC)
That's fine too. The article does say that NP is "the set of all decision problems whose solutions can be verified in polynomial time." --Robin (talk) 14:07, 22 December 2009 (UTC)
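To make "verified in polynomial time" concrete, here is a hypothetical verifier sketch for graph 3-colouring, an NP-complete problem; the function and variable names are illustrative:

```python
# Checking a proposed 3-colouring (the certificate) takes one pass
# over the edges, even though *finding* a valid colouring is believed
# to be hard in general. That one-pass check is what puts the
# problem in NP.
def verify_3_colouring(edges, colouring):
    """True iff no edge joins two vertices of the same colour."""
    return all(colouring[u] != colouring[v] for u, v in edges)

# A 4-cycle is 2-colourable, hence also 3-colourable:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert verify_3_colouring(edges, {0: "red", 1: "green", 2: "red", 3: "green"})
assert not verify_3_colouring(edges, {0: "red", 1: "red", 2: "green", 3: "green"})
```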

## Waste of time?

An expert programmer should be able to recognize an NP-complete problem so that he or she does not unknowingly waste time trying to solve a problem which so far has eluded generations of computer scientists.

I'm not sure I agree. It's like saying: "Don't waste time on Fermats last theorem!" What if you find a new NP-complete problem and it turns out you can solve it... —Preceding unsigned comment added by 159.171.152.2 (talk) 11:00, 21 January 2010 (UTC)

That's true. I've changed it to suggest the programmer should know that the problem is NP-complete, so that he/she knows it is a hard problem, and there are ways around it in practice, like approximation algorithms. --Robin (talk) 21:00, 2 April 2010 (UTC)
No, it is not true. The probability of anyone solving the hard cases of an NP-complete problem (if N is not very small) in a reasonable amount of time is virtually nil. Enormous amounts of effort are wasted on insoluble problems, not even counting the efforts of those who are consciously trying to make a break-through. Not everyone can be an Andrew Wiles and even one such as he expended many years of effort. And, by the way, there is more reason to doubt that NP=P than there was to doubt that Fermat's last theorem was true and provable. JRSpriggs (talk) 00:46, 3 April 2010 (UTC)
There are scientists who believe that P is NP. I don't think Wikipedia should be making value judgments about how people use their time. Can we change the word "waste" to "spend"? With this change it will say that knowingly spending time on NPC problems is better than doing so unknowingly. As opposed to the current version, which just says spending time working on an NPC problem is a waste of time. --Robin (talk) 01:21, 3 April 2010 (UTC)
I do not think that you should be making a value judgment that making value judgments is bad. (Thus you contradict yourself.) Nothing in this article is forcing anyone to do anything. It is just giving them information which may help them (or not). My value judgment is that more information is better than less information. So my judgment is that the version to which I reverted is better than your version. Judgment is good, if based on facts. JRSpriggs (talk) 18:16, 3 April 2010 (UTC)
Firstly, I said Wikipedia shouldn't be making value judgments, not me. So I do not contradict myself. Secondly, information is good. I'm all for the programmer knowing that he/she is stuck with an NP-complete problem. I think you misunderstand what I'm saying. What bothered the IP is that Wikipedia suggests that one should not work on algorithms for NP-complete problems, and I see the IP's point. Suggesting someone to not work on a problem isn't a fact, it's your opinion (and the opinion held by many experts). It cannot be stated on Wikipedia as a fact, it may be stated as "many experts believe..." along with a citation. --Robin (talk) 18:52, 3 April 2010 (UTC)
The version I reverted said "... he or she knows that an efficient algorithm for that problem has eluded generations of computer scientists." which could be read as implying that there exists an efficient algorithm (which is probably false).
The other version said "... he or she does not unknowingly waste time trying to solve a problem ...". This merely seeks to make the person aware that he might be wasting his time, if he chooses to work on the problem. He is still free to go ahead anyway. JRSpriggs (talk) 19:13, 3 April 2010 (UTC)
How about replacing the word "waste" with the word "spend", so that it reads "... he or she does not unknowingly spend time trying to solve a problem ...". Using the word "waste" suggests that it is always a waste of time to try to find an algorithm for an NPC problem. --Robin (talk) 21:34, 3 April 2010 (UTC)
Done. JRSpriggs (talk) 02:32, 5 April 2010 (UTC)

## NP = P ?

Has anyone seen this? I have not seen any discussion about it, but maybe it could be true. —Preceding unsigned comment added by Crazy FIZRUK (talkcontribs) 10:04, 22 April 2010 (UTC)

The paper is short, not written in very good English and does not look likely to be right. --BozMo talk 20:31, 22 April 2010 (UTC)

## If-then characterization of completeness in lead

The definition in the lead has two parts (the first being NP). The second part is a characterization of completeness within NP. I changed it from "If the problem can be solved quickly (in polynomial time), then so can every problem in NP." to "Given an oracle which solves this problem, every problem in NP would be solvable quickly (in polynomial time).". My edit summary said "lead: if-then construction ignores possibility that there are NP problems which are neither P nor NP-complete.".
RobinK (talk · contribs) reverted me, saying "I don't see the problem; the new text seems more confusing (and requires the more advanced concept of an oracle)".

I only changed it because the original version may be false. Suppose there is a problem, A, which is neither P nor NP-complete. "A can be solved in polynomial time." is false. So the implication "If A can be solved in polynomial time, then so can every problem in NP." must be true (see material implication, according to which $(B \rightarrow C) \leftrightarrow (\lnot B \lor C)$). Thus with the old version of the lead, one would have to conclude that A is NP-complete, but it is not.
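The material-implication point can be checked mechanically. This small sketch (illustrative only) enumerates the truth table and confirms that a false antecedent makes the implication vacuously true:

```python
# Material implication: B -> C is defined as (not B) or C.
def implies(b, c):
    return (not b) or c

expected = {(False, False): True, (False, True): True,
            (True, False): False, (True, True): True}
for (b, c), value in expected.items():
    assert implies(b, c) == value

# A false antecedent ("A can be solved in polynomial time" when A
# cannot) makes the implication true regardless of the consequent,
# which is exactly the loophole in the old wording of the lead:
assert implies(False, False) and implies(False, True)
```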

Thus I will revert RobinK's reversion of me. JRSpriggs (talk) 08:42, 17 May 2010 (UTC)

That section of the article is trying to characterize NP-complete problems. (It begins: "NP-complete is a class of problems having two properties: ...") The sentence under discussion says:
If the problem can be solved quickly (in polynomial time), then so can every problem in NP.
"The problem" here refers to a hypothetical NP-complete problem. So this is saying that if some problem A is NP-complete, and if A is in P, then NP⊆P. This is correct.
You said "Suppose ... A ... is neither P nor NP-complete", and derived a contradiction. But A was already assumed to be NP-complete, so it is not surprising that you were able to derive a false conclusion.
I think the original phrasing was correct, and I also think it should be unnecessary to introduce oracles here. —Mark Dominus (talk) 12:57, 17 May 2010 (UTC)
This is supposed to be a rough definition. A definition is an equivalence between a definiendum (word defined, in this case "NP-complete") and its definiens (the expression which gives its meaning). In order to work, the equivalence must hold whether NP-complete is true or false of the problem A. JRSpriggs (talk) 17:22, 17 May 2010 (UTC)
I see your point now, thanks. —Mark Dominus (talk) 13:18, 18 May 2010 (UTC)
I understand your point now, thanks. However, I still don't think this is the best way to present the definition. I mean we're trying to achieve some balance between simplicity and being absolutely correct, because it's still incorrect as stated. To make it correct, we would have to also specify that you can only use 1 query and must output the answer of the oracle (i.e., a Karp reduction). --Robin (talk) 20:58, 17 May 2010 (UTC)
How do we say the second part of the definition in a way that is both correct and yet more intuitive than saying "Every problem in NP is reducible to C in polynomial time."?
How about "Any NP problem can be converted into this one by a transformation of the inputs in polynomial time."? JRSpriggs (talk) 01:56, 18 May 2010 (UTC)
Maybe transforming inputs might give an intuitive definition. I can't think of a better one, so go ahead with this one. --Robin (talk) 16:15, 18 May 2010 (UTC)
Done. JRSpriggs (talk) 05:17, 19 May 2010 (UTC)

## Definition

The second property of an NP-complete problem is given as: "Any NP problem can be converted into this one by a transformation of the inputs in polynomial time." Maybe I am just missing some complex math term, but I don't understand what is meant by "this one" —Preceding unsigned comment added by 129.170.26.240 (talk) 13:11, 3 June 2010 (UTC)

"this one" means "this problem (for which we are defining what it means to be NP-complete)". JRSpriggs (talk) 21:41, 3 June 2010 (UTC)

## A heuristic for timetabling problems, might be used on other NP-complete problems

A practical, fast and efficient heuristic for timetabling problems; it may be possible to apply it to other NP-complete problems.

I am the author of FET, a free and open-source program for school, high-school and university timetabling. The page is: http://lalescu.ro/liviu/fet/

The program is used successfully in many institutions.

Since FET is a fast heuristic method for solving the timetabling problem, which is an NP-complete problem, the method might be applied to other NP-complete problems.

Note: by solving, I mean heuristically solving practical cases, like FET does. I do not claim to solve NP-complete problems polynomially.

I tried the FET algorithm on other NP-complete problems, but unfortunately it does not give me satisfactory results. Maybe I am missing something.

The description of the algorithm:

The algorithm is heuristic.

Input: a set of activities A_1...A_n and the constraints.

Output: a set of times TA_1...TA_n (the time slot of each activity. Rooms are excluded here, for simplicity). The algorithm must put each activity at a time slot, respecting constraints. Each TA_i is between 0 (T_1) and max_time_slots-1 (T_m).

Constraints:

C1) Basic: a list of pairs of activities which cannot be simultaneous (for instance, A_1 and A_2, because they have the same teacher or the same students);

C2) Lots of other constraints (excluded here, for simplicity).

The timetabling algorithm (which I named "recursive swapping"):

   1) Sort activities, most difficult first. Not a critical step, but it speeds up the algorithm maybe 10 times or more.

   2) Try to place each activity (A_i) in an allowed time slot, following the above order, one at a time. Search for an available slot (T_j) for A_i, in which this activity can be placed respecting the constraints. If more slots are available, choose a random one. If none is available, do recursive swapping:

       2 a) For each time slot T_j, consider what happens if you put A_i into T_j. There will be a list of other activities which don't agree with this move (for instance, activity A_k is on the same slot T_j and has the same teacher or same students as A_i). Keep a list of conflicting activities for each time slot T_j.

       2 b) Choose a slot (T_j) with the lowest number of conflicting activities. Say the list of activities in this slot contains 3 activities: A_p, A_q, A_r.

       2 c) Place A_i at T_j and make A_p, A_q, A_r unallocated.

       2 d) Recursively try to place A_p, A_q, A_r (if the level of recursion is not too large, say 14, and if the total number of recursive calls counted since step 2) on A_i began is not too large, say 2*n), as in step 2).

       2 e) If successfully placed A_p, A_q, A_r, return with success, otherwise try other time slots (go to step 2 b) and choose the next best time slot).

       2 f) If all (or a reasonable number of) time slots were tried unsuccessfully, return without success.

       2 g) If we are at level 0, and we had no success in placing A_i, place it like in steps 2 b) and 2 c), but without recursion. We have now 3 - 1 = 2 more activities to place. Go to step 2) (some methods to avoid cycling are used here).
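The steps above can be condensed into a runnable sketch. This is an illustrative reimplementation following the description, not FET's actual source: only the basic constraint C1 is kept, the recursion limits (depth 14, budget 2*n) follow step 2 d), and step 1's difficulty sort is approximated by conflict count; step 2 g)'s no-recursion fallback is omitted.

```python
# A hypothetical, condensed "recursive swapping" sketch: place each
# activity in a free slot, or evict the cheapest slot's occupants and
# recursively re-place them, undoing the move if recursion fails.
import random

def recursive_swapping(n_activities, n_slots, conflicts, seed=0):
    rng = random.Random(seed)
    conflict = {i: set() for i in range(n_activities)}
    for a, b in conflicts:           # C1: pairs that cannot share a slot
        conflict[a].add(b)
        conflict[b].add(a)
    slot_of = {}                     # activity -> assigned time slot
    budget = [0]                     # recursive-call budget (step 2 d)

    def conflicting(act, slot):
        return [k for k, s in slot_of.items() if s == slot and k in conflict[act]]

    def place(act, depth):
        if budget[0] <= 0 or depth > 14:
            return False
        budget[0] -= 1
        free = [s for s in range(n_slots) if not conflicting(act, s)]
        if free:                     # step 2: a free slot exists, pick at random
            slot_of[act] = rng.choice(free)
            return True
        # steps 2 a)-2 f): try slots in order of fewest conflicts
        for slot in sorted(range(n_slots), key=lambda s: len(conflicting(act, s))):
            snapshot = dict(slot_of)
            for k in conflicting(act, slot):
                del slot_of[k]       # step 2 c): evict the occupants
            slot_of[act] = slot
            if all(place(k, depth + 1) for k in snapshot if k not in slot_of):
                return True
            slot_of.clear()          # step 2 e) failed: undo, try next slot
            slot_of.update(snapshot)
        return False

    # step 1 approximated: most-conflicted activities first
    order = sorted(range(n_activities), key=lambda a: -len(conflict[a]))
    for act in order:
        budget[0] = 2 * n_activities
        if not place(act, 0):
            return None              # step 2 g)'s fallback is omitted here
    return slot_of

# Three mutually conflicting activities need three distinct slots:
timetable = recursive_swapping(3, 3, [(0, 1), (1, 2), (0, 2)])
assert timetable is not None and len(set(timetable.values())) == 3
```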


Lalesculiviu (talk) 12:34, 7 February 2011 (UTC)


## I disagree with Cook's theorem and its definition

If we want to understand why I can assert that Cook's theorem is false, we first have to read its own statement: Cook's theorem, in the referenced book, begins by defining the class NP-complete (page 37) as the set of NP problems into which every other NP problem can be transformed by a code whose time cost is at most polynomial (in the size of the input). Thus, if an NP-complete problem could be solved in polynomial time, then every NP problem would admit an implementation (a configuration on a Turing machine) whose cost is also polynomial, establishing NP = P.

That said, the definition sounded very interesting; but, of course, Cook's objective was to demonstrate that at least one problem expressible in NP exists with that property. The demonstration of the existence of that initial NP-complete problem is Cook's theorem itself (page 39).

The reasoning in Cook's theorem was to assert that the boolean satisfiability problem could be encoded in a reasonable scheme with a language LSAT. That language distinguishes the inputs for which some boolean assignment satisfies the formula they represent. That is, on a non-deterministic Turing machine we could decide in polynomial time whether the formula described in the language is satisfiable or not.

Having presented LSAT, we now have to show that every language L in NP could be solved if LSAT were solved first within a concrete time bound. To that end, we use a function fL(x) whose property is to recognize every x in L if and only if fL(x) contains an assignment that satisfies the boolean formula. We understand, therefore, that x is an input accepted by some NP problem and is connected to SAT via f. Solving that connection, we solve every NP problem within that time bound.

The job of this function is to map each possibility of its own Turing machine and, given the polynomial time bound, our objective is to demonstrate only the existence of a well-formed formula with the same force as the presented problem. And, in fact, the demonstration proceeds this way: since the machine is non-deterministic, the bound is fixed by a polynomial in the size of the input (page 40), so Cook continues: "This will enable us to describe such a computation completely using only a limited number of Boolean variables and a truth assignment to them". At this very point everyone could say "yes, in theory". Can we be sure that every NP problem we can imagine (independently of its hardness) can be translated into a boolean formula representing the same problem? When Cook continues with that assertion he mentions a polynomial quantity of logic variables left completely undefined; from the viewpoint of mathematical formalism, those variables could be considered well defined, but we have to be more rigorous: Does a correspondence exist between the original problem and the well-formed formula to be constructed? That is to say, is it constructible? More specifically, constructing the formula with every variable, we will only recognize the existence of those variables as order-0 formulations (without any kind of unification) by executing the problem L on the solving machine. That is, to construct the well-formed formula we would first have to know the solution of the problem.

This is, basically, the error in Cook's theorem: to construct the first NP-complete problem we would first have to solve every possible NP problem. Obviously that is unavailable, because by solving boolean satisfiability we have no possibility of finding a code able to convert the problem into every other NP problem.

On the other hand, although Cook's theorem asserts the existence of that code, I know of no evidence for it. Such an enlightened code would turn every small advance on boolean validity into a means of solving every NP problem in general; we can imagine what would happen if that code existed, given the many benefits. So, even if we do not understand exactly why the theorem is false, we can sense that there is no other way. Moreover, could we not think of the resolution of a problem of the Principia Mathematica as a time-bounded problem? Cook's theorem does not specify the boundary exactly. It only says the bound is polynomial (without any expression delimiting its use). Well, let us change the word polynomial to exponential, or merely bounded (it halts). If we have a specification described from the axioms of the Principia Mathematica, that specification could be demonstrated in bounded time by some Turing machine, up to omega-consistency (at least if we use a reasonable unification algorithm). Then, if we combine the reasoning of Cook's theorem with an application to the Principia Mathematica, we will always find an order-0 boolean formula satisfying a formulation of the Principia. That is a contradiction with the two best-known results of Gödel: the completeness and incompleteness theorems.

That is the reason why Cook's theorem cannot be applied from a theoretical point of view. —Preceding unsigned comment added by 79.109.168.68 (talk) 12:03, 7 March 2011 (UTC)

## Exponential time as a misconception?

The misconceptions section lists exponential running time as a misconception about NP-hard problems, claiming that "some NP-complete problems actually have algorithms running in superpolynomial, but subexponential time. For example, the Independent set and Dominating set problems are NP-complete when restricted to planar graphs, but can be solved in subexponential time..." It's not at all clear to me that this is true. Given that all NP problems are reducible to any given NP-complete problem in polynomial time, wouldn't this imply that *all* NP-complete problems are solvable in sub-exponential time? That would require updates to the "Exponential time hypothesis" page, as well as the "time complexity" page, which claims that "All the best-known algorithms for NP-complete problems like 3SAT etc. take exponential time." Furthermore, the cited source specifically contrasts the fact that these restricted problems are solvable in sub-exponential time against the fact that, if we assume the exponential time hypothesis is correct, the general problem cannot be solvable in sub-exponential time because it is NP-complete. This seems strongly to suggest that the restricted problems are no longer NP-complete. I'm reluctant to make the change myself because of the "if you don't have a graduate degree keep your hands off" comment, but could someone with some authority on the topic review this section of the article? --Emertonom (talk) 05:58, 30 May 2011 (UTC)

The article is correct. Even though there is a polynomial-time reduction from problem A to problem B, this does not imply that if we found a sub-exponential time algorithm for problem B, the reduction would yield a sub-exponential time algorithm for problem A. This is because the polynomial time reduction could blow up the size of the instances by a polynomial amount. For example it could map an instance of size n of problem A to an instance of size n³ for problem B. --Robin (talk) 21:00, 30 May 2011 (UTC)
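Robin's point can be made concrete with a small calculation; the exponents here are illustrative, not taken from any particular algorithm:

```latex
% Suppose problem B has a subexponential algorithm running in time
% 2^{O(\sqrt{m})} on instances of size m, and the reduction maps
% instances of A of size n to instances of B of size m = n^3.
% Composing the two gives, for problem A, a running time of
\[
  2^{O(\sqrt{m})} \;=\; 2^{O(\sqrt{n^{3}})} \;=\; 2^{O(n^{3/2})},
\]
% which exceeds 2^{cn} for every constant c, so it is not
% subexponential in n: the polynomial blow-up inside the reduction
% destroys the subexponential bound, even though B keeps it.
```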

"All instances of an NP-hard problem are hard" is wrong on a much deeper level. Every particular instance is easy, and requires constant time. There are two distinct statements that can be made. (a) some subclasses of the original problem are still NP-hard, while others are not (e.g. KNAPSACK with bounded weights). (b) some instances take more time to solve than others (e.g. instances of 3-SAT with ~1 solution). However it is meaningless to say that solving a particular instance takes exponential time. Also, this is an empirical statement and not a theorem. (I have a relevant graduate degree, but I'm not a "Wikipedian", so maybe someone else should change it) 132.65.16.64 (talk) 10:35, 17 October 2012 (UTC)

## Euler diagram wrong.

I'm removing the Euler diagram for reasons I've spelled out on its discussion page. Twin Bird (talk) 23:10, 23 July 2011 (UTC)

Still wrong.... — Preceding unsigned comment added by 173.77.144.116 (talk) 09:30, 19 February 2012 (UTC)

## Who is this guy and what is his thesis doing here?

Can someone explain who Frederic Dorn is and why his thesis is promoted in the wiki article? I think someone should expound on this statement: "The PhD thesis of Dorn[8] focuses on subexponential time algorithms for NP-hard problems." I think people don't care what the focus is, but rather what his thesis statement is and how he proved it or argued for it in relation to "Solving NP-complete problems requires exponential time.". — Preceding unsigned comment added by 210.4.96.73 (talk) 00:58, 31 October 2011 (UTC)

I don't think a thesis is a reliable reference to use, and so it should be removed regardless. Millertime246 (talk) 01:03, 31 October 2011 (UTC)
Just to prevent my friend Frederic from losing reputation here: he is perfectly trustworthy and his thesis is great. He came up with subexponential-time algorithms for NP-hard problems on planar graphs, and these results were published in peer-reviewed journals. Thus the reference to his work does make some sense in this part of the article. However, whoever added this inappropriate sentence either did not understand it or wanted to troll. ylloh (talk) 16:11, 2 November 2011 (UTC)

## Quantum computers.

Will quantum computers allow solving NP-complete problems in polynomial time on them? I believe not, but I cannot find any references. — Preceding unsigned comment added by 149.156.82.207 (talk) 15:45, 24 January 2012 (UTC)

Quantum computing says "There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false." and gives the reference Bernstein, Ethan; Vazirani, Umesh (1997). "Quantum Complexity Theory". SIAM Journal on Computing. 26 (5): 1411. doi:10.1137/S0097539796300921.. Questions like this one should generally be asked at Wikipedia:Reference desk, not in article talk space. —Mark Dominus (talk) 19:03, 24 January 2012 (UTC)

## Trivial problems cannot be NP-complete

A problem is said to be "trivial" if and only if it is the empty set or the set of all input strings. Thus there are exactly two such problems, both in P. These problems are provably not NP-complete, because they lack the requisite accepted or rejected problem instance, which a mapping reduction must be able to provide. Note that poly-time reductions are mapping reductions (Karp reductions), not Turing reductions (Cook reductions).
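The requirement on mapping reductions can be sketched in a few lines of Python (the sample language and function names here are hypothetical, purely for illustration). A Karp reduction from A to B is a function f with x ∈ A ⟺ f(x) ∈ B for every input; a trivial target language makes that equivalence impossible for any nontrivial A:

```python
def in_B_all(s):      # trivial language: accepts every string
    return True

def in_B_empty(s):    # trivial language: rejects every string
    return False

def is_valid_karp_reduction(in_A, in_B, f, samples):
    # A many-one (Karp) reduction must satisfy: x in A  <=>  f(x) in B.
    return all(in_A(x) == in_B(f(x)) for x in samples)

# Take A = {strings of even length}, a nontrivial language in P:
# it has both yes-instances and no-instances.
in_A = lambda s: len(s) % 2 == 0
samples = ["", "a", "ab", "abc"]

# No f can work: f(x) lands in B_all for every x, and in B_empty for none,
# so the equivalence fails on some sample no matter what f does.
print(is_valid_karp_reduction(in_A, in_B_all, lambda x: x, samples))   # False
print(is_valid_karp_reduction(in_A, in_B_empty, lambda x: x, samples)) # False
```

The identity function stands in for f here, but the failure is independent of the choice of f: the trivial language simply has no "no" instance (or no "yes" instance) for the reduction to map to.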

If P = NP, then every problem in NP would also be NP-complete except for the trivial problems. Therefore we know NP != NP-complete. The right-hand side of this diagram should be updated accordingly.

I don't have my copy of Du and Ko's "Theory of Computational Complexity" handy, but I believe there is an exercise in there illustrating this point.

Workaphobia (talk) 05:38, 29 January 2012 (UTC)

Concur. Until the diagram is corrected, it should be removed. aprock (talk) 23:17, 19 March 2012 (UTC)

## Any NP problem polynomially reducible to NP-Complete?

I feel the statement

"A decision problem L is NP-complete if it is in the set of NP problems so that any given solution to the decision problem can be verified in polynomial time, and also in the set of NP-hard problems so that any NP problem can be converted into L by a transformation of the inputs in polynomial time."

is wrong. The statement means "any NP problem can be polynomially converted to an NP-complete problem". Is that right? If so, then problems in P will also be polynomially reducible to NP-complete problems.

Please correct me if I'm wrong. --Abhinay Vishwakarma (talk) 06:54, 30 January 2012 (UTC)

The statement is correct (though I dislike its phrasing). Any problem in P is poly-time-reducible to any NP-complete problem. For problems in P the reduction is trivial: solve the problem outright, and then at the end produce an appropriate instance of the NP-complete problem.
Workaphobia (talk) 14:49, 30 January 2012 (UTC)
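That trivial reduction can be sketched in Python (the tiny SAT instances and the example problem are hypothetical, chosen only to illustrate the shape of the reduction):

```python
# Fixed target instances of SAT, in CNF as lists of clauses of signed
# variable indices:
SAT_YES = [[1]]        # (x1)               -- satisfiable
SAT_NO  = [[1], [-1]]  # (x1) AND (NOT x1)  -- unsatisfiable

def reduce_to_sat(n):
    # Example problem in P: "is n even?". Decide it directly in poly time,
    # then map to a fixed yes- or no-instance of the NP-complete problem.
    return SAT_YES if n % 2 == 0 else SAT_NO

print(reduce_to_sat(4) == SAT_YES)  # True
print(reduce_to_sat(7) == SAT_NO)   # True
```

Note that the target problem must supply both a fixed yes-instance and a fixed no-instance to map to; the reduction itself does all the deciding.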

## Circular definitions

The initial definitions of the NP-hard and NP-complete pages define their subject in terms of each other.

The NP-hard page: A problem H is NP-hard if and only if there is an NP-complete problem L that is polynomial time Turing-reducible to H

The NP-complete page: A decision problem L is NP-complete if it is in the set of NP problems and also in the set of NP-hard problems.

I don't know enough on this subject to be sure how to fix it but I think at least one of the definitions can be expanded to not use the other.

172.4.233.15 (talk) 17:27, 7 April 2013 (UTC)

The NP-hard article is fairly confusing. The definition should be that a problem is NP-hard if every NP language can be reduced to it, and moreover, it needs to use many-one rather than Turing reductions.—Emil J. 14:40, 8 April 2013 (UTC)

I just made the same comment on the NPH page before noticing this. The intro here says "A decision problem L is NP-complete if it is in the set of NP problems and also in the set of NP-hard problems". This may be true but it is pretty useless as either a definition or an explanation if NPH is defined only in terms of solving NPC.

In the light of what's in the body of this article, my uninformed guess is that what they need to say is that NPH includes any problem whose solution can be used to solve all NP problems (in poly time for each of them), and that NPC is the subset of NPH problems that are actually themselves in NP.

If that's right, then what I'd say here is something like "A decision problem L is NP-complete if it is in the set of NP problems and can be used to resolve any other NP problem in only polynomial additional time. (Problems with the latter property are called NP-hard, so NPC is just the intersection of NPH with NP itself.)" And on the other page I'd say "A problem H is NP-hard if and only if every NP problem is polynomial time Turing-reducible to H."

Of course it follows that H2 is guaranteed to be NPH if there is some H1 in NPH which is reducible to H2, and in particular if there is an NPC problem which is so reducible (but I'm not sure about the "only if" in the claim now on the NPH page unless we know that NPC is nonempty). 96.49.194.82 (talk) 02:01, 5 July 2013 (UTC)
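One non-circular way to write the two definitions side by side, along the lines suggested above (with $\le_p$ denoting polynomial-time many-one reducibility):

```latex
H \text{ is NP-hard} \iff \text{for every } L' \in \mathrm{NP},\ L' \le_p H
```

```latex
L \text{ is NP-complete} \iff L \in \mathrm{NP} \text{ and } L \text{ is NP-hard}
```

Stated this way, the definition of NP-hard never mentions NP-complete, and NP-complete is simply the intersection of NP with the NP-hard problems, so the circularity disappears.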

## Misconception of "If P=NP all cryptographic cyphers can be broken"

Yes, this is a misconception, but the following statement from the article is false:

For example, ciphers with a fixed key length, such as Advanced Encryption Standard, can all be broken in constant time (and are thus already in P), though that constant may exceed the age of the universe.

There is no constant-time solution to AES (and such ciphers are not already in P), but more importantly this doesn't address the actual reason cryptographic cyphers wouldn't necessarily be broken. If the solution found to an NP-complete problem is O(N^32), then that solution is still polynomial and therefore P=NP. However, for a 256-bit key that solution would be just as complex as brute force, O(2^N), since ~2^256 = ~256^32. By the same measure, a 2048-bit key would be significantly less secure (~2^2048 vs ~2048^32), equivalent to "only" 352 bits. This can be extended to a constant-time solution (possibly already disproven, I'm not sure) with a constant value of 1x10^78 operations. In that constant-time case, however, increasing the key length would not be helpful at all! Reducing to an 8-bit key would of course still be problematic from the perspective of a brute-force attack, but increasing to a 2048-bit key would be no better than a 256-bit key against the constant-time attack. 74.254.239.180 (talk) 15:47, 24 March 2014 (UTC)

The brute force solution is a constant-time solution for a fixed key length; in the case of AES-256, a solution takes at most 2^256 tries. Your other point, if I understand you correctly, is that a polynomial-time algorithm can be just as secure if the degree of the polynomial is large enough. I agree that this should be in the article, but it would be best if we had a source we could quote. Also, as I understand it, if an O(N^32) solution were found to one NP-complete problem, while it would of course prove P=NP, it would not imply that all NP problems can be solved in no more than O(N^32). The best you could say is that any other NP problem can be solved in no more than O(N^(32+K)), where K is non-negative and depends on the specific problem. That is how I understand what polynomial equivalence of NP-hard problems means.--agr (talk) 18:23, 27 March 2014 (UTC)
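The key-length arithmetic in this thread can be checked with a short sketch (the degree-32 polynomial attack is a hypothetical, as in the discussion above):

```python
import math

# Compare brute force (2**k tries for a k-bit key) against a hypothetical
# O(N**32) attack, where N is the key length in bits, by expressing both
# work factors as "equivalent security bits" (log base 2 of the work).

def brute_force_bits(k):
    return float(k)               # log2(2**k) = k

def poly_attack_bits(k, degree=32):
    return degree * math.log2(k)  # log2(k**degree) = degree * log2(k)

print(brute_force_bits(256), poly_attack_bits(256))    # 256.0 256.0
print(brute_force_bits(2048), poly_attack_bits(2048))  # 2048.0 352.0
```

This reproduces the numbers above: at 256 bits the polynomial attack costs the same as brute force (256^32 = 2^256), while at 2048 bits it leaves only 352 equivalent bits of security, so lengthening the key barely helps against this attack.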

## The solution without a reference

// Y1+

 //  =/=N0



M1

N0 never produces an output. The algorithm always produces a "M1" result when the answer from the queried oracle is "Yes", and never produces a result when the answer is "N0", even for other possible phrasings of the same query. It is striking how easy it is to miss the ingenuity of our programmers here. Most of the solution was already present, much as the pioneers and experts in the field surmised and asserted was possible. In particular, the comment on Fermat's Last Theorem is relevantly applicable: such deep theories are in many instances required to solve problems largely thought unsolvable, especially in cases when most of the solution and the proof was already known to the experts in the field.

The algorithm answers "Yes" with "M1" when the output produced from a correctly asked question is never a "No" (written as N0) result, but only a "Yes" result. Incorrectly stated questions are those that cannot be resolved, because they result in a "N" or "No" output, with the computer never producing "M" given an incorrectly asked question. Given a correctly asked question, the computer will always return a "M" output eventually. But given the complexity of the question and the known variables, the time required to solve an NP-Complete problem depends entirely on the specific instance and construction of this algorithm. To illustrate the functions, here is a clear example:

Question correctly asked: “Does the remote control not need new batteries?”

ALGORITHM: check the button function; if it changes the channel on the TV or the volume of the TV, the computer monitoring the remote control produces a "M" result. The conclusion output is: Yes means no.

Incorrectly asked question seeking an NP-Complete problem: “Does the remote control have new batteries?”

If the check of the remote control reveals a failure of the button to change the channel or volume of the TV, the conclusion is never output, because this would be a "No" or "N" response. 50.20.186.81 (talk) 12:52, 2 June 2014 (UTC)

## External links modified

Hello fellow Wikipedians,

I have just added archive links to one external link on NP-completeness. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

You may set the |checked=, on this template, to true or failed to let other editors know you reviewed the change. If you find any errors, please use the tools below to fix them or call an editor by setting |needhelp= to your help request.

• If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
• If you found an error with any archives or the URLs themselves, you can fix them with this tool.

If you are unable to use these tools, you may set |needhelp=<your help request> on this template to request help from an experienced user. Please include details about your problem, to help other editors.

Cheers.—cyberbot IITalk to my owner:Online 20:16, 9 March 2016 (UTC)

## Diagram problems

"To the right is a diagram... in this diagram, an arrow from one problem to another indicates the direction of the reduction." There are no arrows on the diagram. Is this an attempt at Monty Pythonesque humour? A hidden IQ test for computer science undergraduates? Honestly, the article is opaque enough without this kind of nonsense. Ross Fraser (talk) 06:25, 23 March 2016 (UTC)

No doubt it made more sense with the original image there, File:Relative NPC chart.PNG, which looks much the same but with arrows rather than lines connecting the ovals. It was uploaded in 2003, replaced by another image with the same name and not quite as rough an appearance in 2004, replaced by the present image in 2011 (even though that image was uploaded in 2008 and appears to have no purpose other than to be such a replacement) and deleted in early 2012 as lacking in any copyright information. Anyway, please be WP:BOLD and edit that part of the text so that it makes sense again. —David Eppstein (talk) 07:12, 23 March 2016 (UTC)