Suppose P != NP.

We know that we can make easy instances of 3-SAT at any time. We can also generate what we believe to be hard instances (because our algorithms can't solve them quickly). Is there anything preventing the set of hard instances from being arbitrarily small, so long as for any given instance size (n) there are only Poly(n) (or even constantly many) hard instances of size Poly(n) or smaller?

For any hard 3-SAT instance, we would have to add the set of all 3-SAT instances it reduces to by looping through the NP-completeness reduction cycle, but I don't foresee this adding much to the number of hard instances.

In this world, we could construct an algorithm that solves all NP-complete problems in polynomial time, except for an exceptional few instances.

Edit: A softer variant of the question: even if we showed P != NP, how could we know whether a given method of generating size-n 3-SAT instances actually generates hard ones with some requisite probability? If there's no way to know from P != NP alone, what is required to show that we can generate a hard instance of an NP-complete problem?
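For concreteness, here is a minimal sketch of the kind of generator I have in mind (the clause density 4.27 is the empirically observed satisfiability threshold, where instances are believed, but not proven, to be hardest on average; the function is my own illustration, not a standard library routine):

```python
import random

def random_3sat(n, ratio=4.27, seed=None):
    """Random 3-SAT instance: n variables, round(ratio * n) clauses.
    A clause is a tuple of three literals; literal v means variable v,
    literal -v means its negation."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(round(ratio * n)):
        triple = rng.sample(range(1, n + 1), 3)  # three distinct variables
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in triple))
    return clauses

# e.g. a 20-variable instance with round(4.27 * 20) = 85 clauses
instance = random_3sat(20, seed=1)
```

The question is whether a generator like this (or any other) provably outputs hard instances with noticeable probability, assuming only P != NP.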

Elliot JJ
  • Yes. NP-complete problems are hard in the worst case. It is possible that the majority of instances of an NP-complete problem are efficiently solvable. However, Russell Impagliazzo proposed a world (Pessiland) where average-case-hard NP problems exist but one-way functions do not. In this world, we cannot generate hard instances of an NP-complete problem with a known solution. – Mohammad Al-Turkistany May 18 '13 at 10:23
  • So what I'm really asking about is the one-way function question? How strong is the evidence for one-way functions? – Elliot JJ May 18 '13 at 10:40
  • Short answer: the $P \ne NP$ assumption alone is not helpful. We need at least the existence of one-way functions. – Mohammad Al-Turkistany May 18 '13 at 10:42
  • As for the evidence for one-way functions, see this post: http://cstheory.stackexchange.com/questions/8829/arguments-for-existence-of-one-way-functions – Mohammad Al-Turkistany May 18 '13 at 10:52
  • If the set of hard instances of each length is polynomially small, then NP is contained in P/poly. There are also other ways of looking at this; search for HeurP. – Kaveh May 18 '13 at 10:56
  • This question seems to address your edit – we can (deterministically) generate hard instances of SAT if and only if unary $\mathsf{NP} \neq$ unary $\mathsf{P}$. – usul May 18 '13 at 12:35
  • I think it is possible that all NP problems are in non-uniform P, while uniform NP is not equal to uniform P. Non-uniform means that we parametrize the algorithm by the input size and allow it to change with the input size. This makes things much more difficult; people have looked at this in the context of circuits. – Sariel Har-Peled May 19 '13 at 04:00
  • @SarielHar-Peled In particular, NP $\subseteq$ P/poly collapses PH to the second level, which is consistent with P != NP. – Suresh Venkat May 19 '13 at 05:23
  • There is no known way to connect worst-case and average-case hardness of NP. However, there are ways to connect "mild" average-case hardness to "strong" average-case hardness. My thesis is a starting point for both. http://www.ccs.neu.edu/home/viola/papers/thesis.pdf – Manu May 19 '13 at 14:43
  • It seems to me that a big part of this question is about technically defining "large hidden subset", and it is possibly more about the density of easy vs. hard instances. I also feel this question is largely answered by research on the SAT transition point. – vzn May 21 '13 at 15:13

3 Answers

1) Depending on exactly what was meant, the conclusion in Kaveh's observation can be strengthened from $\mathsf{NP} \subseteq \mathsf{P/poly}$ to $\mathsf{P} = \mathsf{NP}$, essentially using Mahaney's Theorem. That is, if there is an algorithm which solves SAT and runs in time $\leq p(n)$ on all instances of length $n$ except for possibly $q(n)$ such instances, where $p$ and $q$ are both polynomials, then in fact $\mathsf{P} = \mathsf{NP}$. See, e.g., Meyer and Paterson and references therein, or Schöning's monograph "Complexity and Structure". So if this captures your notion of "hard instances", then there must be more than $poly(n)$ many hard instances for each $n$, assuming $\mathsf{P} \neq \mathsf{NP}$.
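To see why only polynomially many exceptional instances per length would be so consequential, note that their answers could simply be hard-wired as advice alongside the fast algorithm. A toy sketch of that patching idea (both `fast_solver` and the exception table are hypothetical stand-ins, not an actual SAT algorithm):

```python
def make_patched_solver(fast_solver, exception_table):
    """Toy illustration of the advice idea: if fast_solver is correct and
    fast on all but polynomially many instances of each length, then the
    finitely many exceptions of each length (with their correct answers)
    can be hard-coded as a lookup table, yielding a solver that is fast
    everywhere with only polynomial-size extra data per length."""
    def solve(instance):
        if instance in exception_table:   # advice: hard-coded answers
            return exception_table[instance]
        return fast_solver(instance)      # fast path for everything else
    return solve

# Toy usage: a "solver" that is wrong on exactly one instance gets patched.
base = lambda formula: False              # hypothetical fast heuristic
patched = make_patched_solver(base, {"hard-instance": True})
```

Of course, the table is non-uniform advice, which is why the naive conclusion is $\mathsf{NP} \subseteq \mathsf{P/poly}$; Mahaney-style arguments are what push this all the way to $\mathsf{P} = \mathsf{NP}$.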

FYI, such algorithms are sometimes referred to as "apt" or "APT" algorithms, for "almost polynomial time" (not to be confused with the more modern complexity class $\mathsf{almostP}$, which happens to equal $\mathsf{BPP}$).

2) The above can be strengthened even further, as follows. Assume $\mathsf{P} \neq \mathsf{NP}$. Then the above says that for any algorithm solving SAT and any polynomial $p$, there is a set of instances of super-polynomial size on which the algorithm takes more than $p(n)$ time. But the set can depend on the algorithm.

The stronger result switches the order of the quantifiers, and concludes: there is a super-polynomial size set H (for "hard") such that for any algorithm A solving SAT and any polynomial p, A takes more than $p(n)$ time on all but finitely many elements of H. Such an H is called a complexity core (the size assumption is not part of the definition of complexity core). The definition and existence of complexity cores were given by Lynch. The result I just quoted is proved by Orponen and Schöning.

Joshua Grochow

Nobody has mentioned Impagliazzo's famous "five worlds" paper, "A Personal View of Average-Case Complexity":

http://www.cs.ucsd.edu/users/russell/average.ps


Another angle on this question (apart from the reference to Mahaney's theorem): the "transition point" in SAT is the study of exactly this phenomenon of easy vs. hard instance distribution, especially around a "critical point" where the probability of hard instances is maximized. The literature on the subject is long and complex, with both empirical and analytical approaches, and it has deep ties to physics/thermodynamics.[3] Unfortunately there is currently no Wikipedia entry on this significant and fundamental complexity-theory topic, and there do not seem to be many overall or "standard" surveys of it. Here is one recent reference to start with on SAT[1] and one on phase transitions in TCS in general.[4] Your question also falls into the category of "a really good answer would basically be nearly a P$\stackrel{?}{=}$NP proof."
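The easy/hard density picture around the critical point is simple to observe empirically. A minimal sketch using a brute-force satisfiability check on tiny instances (an illustration only, nothing like the methodology of the literature):

```python
import itertools
import random

def random_3sat(n, m, rng):
    """m random 3-clauses over n variables (literals as signed ints)."""
    return [tuple(v if rng.random() < 0.5 else -v
                  for v in rng.sample(range(1, n + 1), 3))
            for _ in range(m)]

def satisfiable(clauses, n):
    """Brute force over all 2^n assignments; fine only for tiny n."""
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def sat_fraction(n, ratio, trials=25, seed=0):
    """Fraction of random 3-SAT instances at clause density `ratio`
    that are satisfiable; empirically this drops sharply near 4.27."""
    rng = random.Random(seed)
    m = round(ratio * n)
    return sum(satisfiable(random_3sat(n, m, rng), n)
               for _ in range(trials)) / trials

# Low density: almost all instances satisfiable; high density: almost none.
low, high = sat_fraction(10, 1.0), sat_fraction(10, 8.0)
```

The hard instances concentrate in the narrow density band where this fraction crosses from near 1 to near 0.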

Is there anything preventing the set of hard instances from being arbitrarily small, so long as for any given instance size (n) there are only Poly(n) (or even constant) instances of size Poly(n) or smaller?

Again, Mahaney's theorem (phrased in a slightly different way) answers this directly. Another way of looking at it is that attempts to narrow the distribution of instances in some key/characteristic way lead to NP-complete functions. An example of this from monotone circuit complexity is "slice functions".[2]

[1] L. Xu, H. H. Hoos, K. Leyton-Brown, "Predicting Satisfiability at the Phase Transition".

[2] P. E. S. Dunne, "The Complexity of Central Slice Functions", Theoretical Computer Science 44: 247–257 (1986).

[3] M. Mézard, G. Parisi, R. Zecchina, "Analytic and Algorithmic Solution of Random Satisfiability Problems".

[4] Moore, "Phase transitions in NP-complete problems: a challenge for probability, combinatorics, and computer science".

vzn