27

RP is the class of problems decidable in polynomial time by a probabilistic Turing machine with one-sided error: a no-instance is always rejected, while a yes-instance is accepted with probability at least 1/2. P is the usual class of problems decidable by a deterministic Turing machine that runs in polynomial time.
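To fix notation: a language $L$ is in RP if there is a probabilistic polynomial-time machine $M$ such that

$$x \in L \;\Longrightarrow\; \Pr[M(x) \text{ accepts}] \ge \tfrac{1}{2}, \qquad x \notin L \;\Longrightarrow\; \Pr[M(x) \text{ accepts}] = 0.$$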

P = RP would follow from a relationship in circuit complexity. Impagliazzo and Wigderson showed that P = BPP follows if some problem decidable in deterministic exponential time requires exponential-size circuits (note that P = BPP implies P = RP, since RP $\subseteq$ BPP). Perhaps due to these results, there seems to be a feeling among some complexity theorists that probabilistic reductions can probably be derandomized.
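For reference, the hypothesis and conclusion can be written as:

$$\exists\, L \in \mathrm{E} = \mathrm{DTIME}\big(2^{O(n)}\big) \text{ with circuit complexity } 2^{\Omega(n)} \;\Longrightarrow\; \mathrm{P} = \mathrm{BPP}.$$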

What other specific evidence is there that P = RP?

R B
András Salamon

4 Answers

14

The existence of problems in DTIME$(2^{O(n)})$ which require exponential-size circuits to compute (which is the assumption in IW) seems plausible: otherwise non-uniformity would give a speedup on EVERY computational problem -- which goes completely against the current thinking that sees no "too significant" gap between uniform and non-uniform complexity for "normal" problems. This thinking comes from the fact that there are very few examples where a known "non-uniform" algorithm is significantly better than the known uniform one (derandomization itself being the exception).

Another piece of "evidence" is that relative to a random oracle we do have P=BPP.
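(If I remember the reference correctly, this is due to Bennett and Gill, 1981.) For an oracle $O$ chosen uniformly at random:

$$\Pr_{O}\left[\mathrm{P}^{O} = \mathrm{BPP}^{O}\right] = 1.$$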

Noam
  • I thought that was the precise paper I mentioned in the original question. What am I missing? – András Salamon Aug 17 '10 at 17:50
  • oops, i guess i didn't read the question all the way... – Noam Aug 17 '10 at 18:37
  • 1
    edited the response now --- still getting to know the system... – Noam Aug 17 '10 at 18:59
11

Any concrete derandomization result gives evidence that P = BPP. In that vein, PRIMES in P (Agrawal–Kayal–Saxena '02) is one good example. More generally, there are few natural problems in BPP that are not known to be in P (Polynomial Identity Testing is one notable exception).
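For context, the standard randomized test for Polynomial Identity Testing comes from the Schwartz–Zippel lemma: if $p$ is a nonzero $n$-variate polynomial of total degree $d$ and $r_1, \dots, r_n$ are drawn independently and uniformly from a finite set $S$, then

$$\Pr\big[p(r_1, \dots, r_n) = 0\big] \le \frac{d}{|S|},$$

so accepting exactly when a random evaluation is zero gives a one-sided-error test, placing PIT in coRP.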

Similar in spirit to the result you mention, Håstad–Impagliazzo–Levin–Luby '99 showed that the existence of one-way functions implies the existence of pseudorandom generators. While these generators are not strong enough to directly yield P = BPP, the result shows that pseudorandomness follows from minimal cryptographic assumptions. This may be seen as another hint that BPP is not more powerful than P.
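For reference, a generator $G : \{0,1\}^n \to \{0,1\}^{m(n)}$ with $m(n) > n$ is pseudorandom if for every probabilistic polynomial-time distinguisher $D$,

$$\big|\, \Pr[D(G(U_n)) = 1] - \Pr[D(U_{m(n)}) = 1] \,\big| \le \mathrm{negl}(n),$$

where $U_k$ denotes the uniform distribution on $\{0,1\}^k$. The cryptographic generators obtained this way must run faster than the distinguishers they fool, whereas the generators needed for P = BPP (as in Impagliazzo–Wigderson) may use time exponential in the seed length; this is one sense in which the implication is indirect.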

Moritz
8

It's important to note that saying "probabilistic reductions can [probably] be derandomized" is much stronger than P = RP. In fact, one formalization of the notion of derandomizing all randomized reductions is that $P^X = RP^X$ relative to every oracle $X$, which we know is false: e.g., Heller (Relativized polynomial hierarchies extending two levels, Mathematical Systems Theory 17(2):71–84, 1984) gives an oracle relative to which $ZPP = RP = EXP$, which is not equal to $P$ by the Time Hierarchy Theorem.
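Spelled out (with all classes relativized to $X$), Heller's oracle satisfies

$$\mathrm{ZPP}^X = \mathrm{RP}^X = \mathrm{EXP}^X \supsetneq \mathrm{P}^X,$$

where the strict containment $\mathrm{P}^X \subsetneq \mathrm{EXP}^X$ holds relative to every oracle, since the Time Hierarchy Theorem relativizes.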

The above is, of course, talking about derandomizing randomized polynomial-time Turing reductions, rather than the usual polynomial-time many-one reductions. I wouldn't be surprised if Heller's oracle can be adapted to give a set $X$ such that for all $Y$, $Y$ is exponential-time many-one reducible to $X$ iff $Y$ is RP-reducible to $X$, but without going through the construction I couldn't swear to it.

Joshua Grochow
5

Valiant and Vazirani showed in 1986 that there is a randomized reduction from SAT to $USAT_Q$, the promise version of SAT in which only the distinction between uniquely satisfiable and unsatisfiable instances matters; on instances with more than one solution, the required answer is given by the predicate $Q$. If $Q = \bot$ is the constantly false predicate, then $USAT_{\bot}$ is the problem of deciding whether there is precisely one solution.
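One common way to state the Valiant–Vazirani guarantee (the constant varies by presentation): there is a randomized polynomial-time map $\phi \mapsto \psi$ such that

$$\phi \in \mathrm{SAT} \;\Longrightarrow\; \Pr[\psi \text{ has exactly one satisfying assignment}] \ge \frac{1}{8n}, \qquad \phi \notin \mathrm{SAT} \;\Longrightarrow\; \psi \notin \mathrm{SAT},$$

where $n$ is the number of variables of $\phi$.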

A solution to a Boolean formula $\phi$ is a $\{0,1\}$-vector assigning truth values to the free variables of $\phi$ so that $\phi$ is satisfied. A $k$-isolated solution $x$ to $\phi$ is a solution with the additional property that any other solution differs from it in more than $k$ values. (Equivalently, $x$ is a $k$-isolated solution if the Hamming distance from $x$ to any other solution exceeds $k$.)
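In symbols, writing $\mathrm{sol}(\phi)$ for the set of solutions and $d_H$ for Hamming distance:

$$x \text{ is } k\text{-isolated} \iff x \in \mathrm{sol}(\phi) \text{ and } \forall y \in \mathrm{sol}(\phi) \setminus \{x\} : d_H(x, y) > k.$$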

Let $k$-ISOLATED SAT be the problem of deciding whether the input CNF formula has a solution that is $k$-isolated. If $n$ denotes the number of variables in an instance, then $USAT_{\bot}$ and $n$-ISOLATED SAT are precisely the same problem: two assignments to $n$ variables can differ in at most $n$ positions, so an $n$-isolated solution is necessarily the only solution.

For any $\epsilon > 0$, by duplicating each variable polynomially many times, a deterministic reduction can be made from SAT to $n^{1-\epsilon}$-ISOLATED SAT. (Details are available here.) This seems to provide further evidence that the gap between deterministic and randomized reductions is "small".
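A back-of-the-envelope version of the padding argument (my reconstruction, not the linked write-up): replace each variable by $t$ copies constrained to be equal. Distinct solutions of the new formula then differ in at least $t$ coordinates, so every solution is $(t-1)$-isolated, and the new formula has $N = tn$ variables. Taking $t = n^c$ gives

$$t - 1 \;\approx\; N^{\frac{c}{c+1}} \;=\; N^{1 - \frac{1}{c+1}} \;\ge\; N^{1-\epsilon} \quad \text{for } c \ge \tfrac{1}{\epsilon} - 1,$$

so an unsatisfiable formula maps to an unsatisfiable one, while a satisfiable formula maps to one with an $N^{1-\epsilon}$-isolated solution.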

András Salamon
  • I would say this is evidence against P=RP. Analogously, the assumption $P\neq NP$ is supported by the fact that many good approximation algorithms are matched by NP-hardness results - if $P=NP$ it is difficult to explain why the techniques stop exactly at this small or non-existent gap. – Colin McQuillan Nov 22 '12 at 10:56
  • @Colin: No comment. :-) – András Salamon Nov 24 '12 at 17:34