
Teaser

Since the problem is longish, here is a special case that captures its essence.

Problem: Let A be a deterministic algorithm for 3-SAT. Is the problem of completely simulating the algorithm A (on every instance of the problem) PSPACE-hard?

(More precisely: are there reasons to believe this task is PSPACE-hard, does something in this direction follow from standard complexity-theoretic conjectures, and is there hope to prove that this task is X-hard for some complexity class X that is presumed to be strictly above NP?)

Related question: are-pspace-complete-problems-inherently-less-tractable-than-np-complete-problems

EDITED UPDATE: There are various interpretations of "completely simulating A", and there may be different interesting answers depending on the interpretation. (Also, Ryan Williams proposed an interpretation for simulating a nondeterministic algorithm.) For a certain way of associating a decision problem to the computational task "completely simulate A", Joe Fitzsimons found an algorithm A for which this associated decision problem is still in NP. If "completely simulate" refers to being able to output the entire register of the computer at a given step $i$, then for Joe's algorithm it seems that $P^{NP}$ is what is needed. For this version (I think, but am not sure) Ryan's answer sketches a $P^{NP}$-hardness argument. Joe remarked that if you are required to supply the entire registers (which is no longer a decision problem), it is not surprising that you need to step up and the complexity classes are not the same.

Anyway, if we require outputting the state of the registers at a prescribed step $i$, then the answers of Ryan and Joe suggest (but again, I am not sure about it) that $NP^+$ is essentially $P^{NP}$. We can speculate that under this interpretation the operation pushes up one step higher in the polynomial hierarchy, and that $PH^+ = PH$.

In any case, under these interpretations, the answer to my teaser question is NO.

I had a more drastic interpretation of "completely simulating an algorithm A" in mind. (But perhaps Joe's and Ryan's interpretation is more interesting.) My interpretation of "completely simulating algorithm A" is that you output the state of the registers at every step $i$. In particular, if the algorithm is not polynomial, your output is also not polynomial. Under this drastic interpretation I wondered whether we should believe that for every algorithm A, $C_A$ is PSPACE-hard, and what we can prove.

Motivation:

This question was motivated by a lecture by Paul Goldberg (slides, video, paper) describing a paper with Papadimitriou and Savani. They showed that it is PSPACE-complete to find the equilibria that are computed by the Lemke-Howson algorithm. The problem of finding some equilibrium point is only PPAD-complete. This gap is quite amazing, and similar results are already described in Papadimitriou's well-known paper "The Complexity of the Parity Argument and Other Inefficient Proofs of Existence" (1991). (It is known that PPAD-complete problems cannot even be NP-hard, unless terrible things happen, so this is far down in the complexity world compared to PSPACE.)

What the question is about

My question is about similar gaps for even older and more classical computational complexity problems. (Maybe this is already familiar.)

Given a computational problem we can distinguish between three issues:

a) Solve the problem algorithmically

b) Reach the same solution as a specific algorithm A

c) Simulate the entire algorithm A

Of course c) is at least as hard as b), which is at least as hard as a). The results mentioned above show a gap between the computational difficulty of tasks a) and b) for the problem of computing equilibria. We would like to understand the situation (and mainly the gap between a) and c)) for other computational problems.

The question:

The basic form of the question with an example

We start with a computational problem, Problem X.

An example can be:

Problem X: Solve an instance of SAT with n variables.

We also specify:

A: an algorithm that solves Problem X,

and we pose a new problem:

Problem Y: Exactly simulate algorithm A,

and we are interested in the computational difficulty of Problem Y. We wish to understand the class of such problems Y for all algorithms A that solve the original Problem X. In particular, we want to know how easy Problem Y can be (or how hard it must be) if we are allowed to choose the algorithm A at will.

The proposed operation on complexity classes

Start with a complexity class $C$ which is described by some computational task. Given an algorithm A that performs this computational task on every instance, consider a new complexity class $C_A$ which is described by the computational task of completely simulating $A$. Then we can (hopefully) define an "ideal" of complexity classes

$C^+ = \{C_A : A \text{ is an algorithm for the task describing } C\}.$

If we let $P$ describe whatever a digital computer can do in polynomial time (so I don't want to restrict attention, e.g., to decision problems), then $P^+$ is the ideal spanned by $P$ itself.

Finally, My Questions

My questions are:

1) Does the definition make sense (in the wide sense of the word "sense")? Is it well known, or the same as (or similar to) some well-known thing? (My formulation was informal, and in particular when we move from specific problems like SAT to a complexity class like NP, we have to worry about various things that I neglected.)

The next two questions assume that the definition makes sense, or can be salvaged to make sense.

2) Suppose we equip ourselves with all the standard conjectures regarding computational complexity. Can we say what $C^+$ is supposed to be for some familiar complexity classes (e.g. $C=NP$, $C=PSPACE$, ...)? EDIT: Several people pointed out that $PSPACE^+=PSPACE$. So we can ask instead: what is $(P^{NP})^+$? Is $PH^+=PH$?

Can we guess which complexity classes $C$ are such that $C^+$ is the ideal spanned by $C$?

So the question of how easy the computational task of simulating an algorithm A for 3-SAT can be (when we can choose the algorithm to make it as easy as possible) is an interesting special case.

3) Is there hope to actually prove something about this operation?

Of course, if you prove that all complexity classes in $NP^+$ are PSPACE-hard, this will show that $P=NP$ implies $P=PSPACE$, which (I think) would be a huge and highly unexpected result. But if you show that all complexity classes in $NP^+$ are hard for something, say, in the third level of the polynomial hierarchy (e.g. $\Delta_3^{\bf P}$), this would only imply things that we already know, namely things that follow from the fact that $P=NP$ causes PH to collapse.

Glorfindel
Gil Kalai
  • Interesting question! But: "Of course a) is at least as hard as b) which is at least as hard as c)." Shouldn't the order be the other way around? – Bart Jansen Aug 17 '11 at 09:59
  • Is it possible to include a link or maybe a brief description of what it means to 'simulate the entire algorithm A'? Like, what is the difference between 'simulate' and 'run' in this case? Is it possible to simulate an algorithm faster than its running time? – Artem Kaznatcheev Aug 17 '11 at 23:17
  • Dear Artem, by simulating an algorithm on a specific instance I mean describing the entire evolution of the algorithm. (So maybe it's like "running" the algorithm.) You cannot simulate the algorithm faster than its running time. It is not a standard notion (to my knowledge) so I cannot give links or references (but I hope to get links and references). Simulating the algorithm is different from just the computational task of "give the same output as algorithm A", which is related to the motivation and task b) described in the question. – Gil Kalai Aug 18 '11 at 05:48
  • Dear Gil, I am failing to see why we cannot simulate an algorithm $A$ with resources of the same order as $A$ uses. Unless some of the resources are more restricted we can just simulate whatever $A$ does. Let me see if I understand the motivation part correctly: We have a problem $Q$ in class $C$. $A$ is an algorithm possibly outside $C$ solving $Q$. Although finding a solution for $Q$ can be done in $C$, finding one of the solutions that $A$ finds can have complexity outside $C$. Am I understanding the motivation part of the post correctly? – Kaveh Aug 18 '11 at 17:16
  • ps: I don't understand the notion of "algorithm $B$ completely simulates algorithm $A$" clearly enough, but it seems to me that $C^+$ will contain problems which are of arbitrarily high complexity because $A$ can have arbitrarily high complexity. – Kaveh Aug 18 '11 at 17:26
  • Dear Kaveh, "$C^+$ will contain problems which are of arbitrarily high complexity because $A$ can have arbitrarily high complexity." Sure, this is why I refer to $C^+$ as an ideal. Of course, we are interested in understanding how low $C_A$ can be. – Gil Kalai Aug 18 '11 at 19:10
  • Regarding your first remark: take C=NP. Let A be an algorithm, say for 3-SAT. We want a program that simulates A on every instance, where by simulate I mean that your program will tell you exactly, at each computer cycle, the state of the computer. It looks to me that if A is an exhaustive search then C_A can be simulated in PSPACE. I don't see how C_A can be simulated by a nondeterministic polynomial-time computer. Maybe the truth is that for every algorithm A for 3-SAT the task of simulating A is PSPACE-hard. – Gil Kalai Aug 18 '11 at 19:15
  • "Although finding a solution for Q can be done in C, finding one of the solutions that A finds can have complexity outside C. Am I understanding the motivation part of the post correctly?" This is not the question. If you just want to find the solution that an algorithm for 3-SAT (as a decision problem) does, all you need is the power of NP. But to run the entire algorithm (which we believe must take exponential time) is a different matter. – Gil Kalai Aug 18 '11 at 19:20
  • (cont) For decision problems there is no difference between (a) and (b). For search problems and other non-decision problems your understanding of the question is related to the computational task (b) (and to Goldberg et al.'s paper), but I am interested in the computational task (c), which is even more ambitious. – Gil Kalai Aug 18 '11 at 19:21
  • A more general comment in a different direction: In my reading of Goldberg/Papadimitriou/Savani, the primary motivation is in fact a critique of the homotopy method. What is the content of this critique? In particular, they comment that the various fixpoint problems had no known polynomial-time algorithm, and then point towards an explanation for this (unfortunate) situation by the results of Daskalakis/Goldberg/Papadimitriou and Chen/Deng/Teng (Daskalakis's PhD thesis is another excellent source). In short, of course, computing Nash equilibria (even in ideal conditions) is now known to be – Daniel Apon Aug 18 '11 at 22:38
  • [cont] PPAD-complete. They then note that the overwhelming majority of methods known to compute Nash equilibria have, understandably, been shown to require exponential time in the worst case. The one exception (or at least the one that they highlight in their recent paper) is the homotopy method, which seems to have resisted attempts at complexity classification. Thus, in the current paper, they show that implementing the given homotopy algorithms are PSPACE-complete. The critique, then, is that the underlying problem should only require the power of PPAD-completeness, but the homotopy-based – Daniel Apon Aug 18 '11 at 22:41
  • [cont] algorithms require a PSPACE machine. In essence, they give evidence that homotopy method is not an ideally-efficient algorithm for solving various fixpoint problems. -- What does this have to do with the current question? Well, the power of the critique lies in the fact that, given current knowledge, it is more likely that a better algorithm exists than those based in the homotopy technique. (And assuming standard complexity conjectures, it is therefore known that a better algorithm must exist.) This brings me around to the "gap" between PPAD-completeness and – Daniel Apon Aug 18 '11 at 22:43
  • [cont] PSPACE-completeness implicit in the question. I'm not convinced (as Artem and Kaveh suggest), even assuming we can properly "tweak" the given definition of the ideal of a complexity class (which is very interesting in its own right), that simulation of NP algorithms (even under a non-standard definition of "simulation") will require the power of PSPACE in the 'best' case (i.e. "getting as low as possible" in the worst case). So, for example: Could you explain your intuition for why NP simulation would be PSPACE-hard? [[Sorry for the long, long comment.]] – Daniel Apon Aug 18 '11 at 22:46
  • The exhaustive search algorithm $A$ for 3SAT is not (known to be) in polytime, so we would not expect that we can simulate it in polytime, on the other hand, if $A$ is in DTimeSpace(t,s), then $C_A$ will be in DTimeSpace(O(t),O(s)) also.

    About the simulation, since the simulating algorithm returns "the state of the computer", are we assuming that the algorithm $A$ is deterministic? And are the inputs to the simulating algorithm an input for $A$ and a cycle number?

    – Kaveh Aug 19 '11 at 08:10
  • If the cycle number becomes too big it can make the simulation easy, e.g. take an algorithm $A$ and build a new algorithm $B$ that works exactly the same way that $A$ works except that it counts up to $\exp(\exp(n))$ between $A$'s $n$-th and $(n+1)$-th steps. Of course this possibility can be avoided if we care only about the complexity of simulation in the size of the original input, but that might create new problems. ps: Btw, it might look nicer if you use $S_A$ (for the task of completely simulating the algorithm $A$) in place of using class $C_A$. – Kaveh Aug 19 '11 at 08:24
  • Yes, yes, we are assuming that the algorithm A is deterministic! I don't have a clear intuition why we should expect that simulating every deterministic algorithm for 3-SAT is PSPACE-hard. This is the question! I wanted to see what experts think. – Gil Kalai Aug 19 '11 at 09:20
  • I agree that DTimeSpace(t,s) appears to represent "fixed points" for my operation. But there are many complexity classes of a different nature. – Gil Kalai Aug 19 '11 at 09:22
  • Dear Daniel, indeed GPS regard their result as a computational-theoretic critique of homotopy methods for computing Nash equilibria. This adds to PPAD-completeness results about computing Nash equilibria in general. Your interpretation is that this result suggests that better algorithms exist, and in fact you think that a better algorithm must exist. But we can ask: perhaps in some sense no better algorithm exists. (to be continued...) – Gil Kalai Aug 19 '11 at 11:43
  • Suppose that you describe any deterministic algorithm Y to compute Nash equilibria, and I am interested in simulating this algorithm on a computer. Namely, given the instance of the problem, my computational task is to describe the full computational history of the algorithm Y. Let me ask you: is it possible that, while PSPACE is not PPAD, and standard complexity beliefs stand, yet for every Y this task of completely simulating Y is PSPACE-complete? – Gil Kalai Aug 19 '11 at 11:48
  • @Gil: From your recent update it seems perhaps there is some misunderstanding about my answer. I was saying that decision versions of the simulation problem are in NP, not $P^{NP}$. The reason I talked about polynomial time with a single call to an NP oracle was for the case where you are required to output the state of the registers, which is clearly not a decision problem, and hence cannot be computed with the same machinery as decision problems. In this case it is hardly surprising that the classes are not the same, as the algorithm you are simulating is for a decision problem. – Joe Fitzsimons Aug 21 '11 at 03:55
  • Dear Joe, thanks, I see. Is the case where you are required to output the state of the registers the same as what Ryan referred to in his answer regarding the problem being $P^{NP}$-hard, or is it yet a different variation? If it is the same interpretation, then does your result contradict Ryan's statement about the (decision) problem being $P^{NP}$-hard for every algorithm? – Gil Kalai Aug 21 '11 at 07:40
  • Dear Daniel, you wrote: "The overwhelming majority of methods known to compute Nash equilibria have, understandably, been shown to require exponential time in the worst case. The one exception (or at least the one that they highlight in their recent paper) is the homotopy method, which seems to have resisted attempts at complexity classification. Thus, in the current paper, they show that implementing the given homotopy algorithms are PSPACE-complete." What is the relation, for an algorithm, between having worst-case exponential time and requiring a PSPACE machine? Which is "worse"? – Gil Kalai Aug 21 '11 at 14:45
  • @Gil: I don't see it making sense to talk about outputting the state of a register in terms of P, NP or PSPACE. These are complexity classes for decision problems, not functional problems. But the simulation problem you describe is functional. This is why we need to call the NP machine as an oracle: we need a way to transition to a functional setting, so the problem would be in ${FP}^{NP}$. (It may even be in FNP, but I don't see it straight off.) I believe my algorithm works for Ryan's formulation of the decision problem. – Joe Fitzsimons Aug 21 '11 at 15:01
  • @Gil: As regards Ryan's statement about the problem being $P^{NP}$-hard, I believe it is a counterexample, and I believe I know why this is the case. In Ryan's answer he appears to use a polynomial reduction not to a single instance of the simulation, but to several (I'm guessing logarithmically many). This means that if the complexity of simulation is some class $C$, then his reduction shows that $P^C$ is $P^{NP}$-hard, which would be consistent with my answer. I believe he may have overlooked this. That said, Ryan has far more complexity cred than me, so I may be mistaken. – Joe Fitzsimons Aug 21 '11 at 15:06
  • @Gil: Just one last thing: Your notion of exact simulation cannot be done in polynomial space for the algorithm I gave, as it is an exponential time algorithm, and hence a transcript up to a given time is exponential in size. – Joe Fitzsimons Aug 21 '11 at 15:37
  • Dear Joe, regarding your last remark: indeed, a complete simulation in "my" sense for an exhaustive-search algorithm A for SAT (or your algorithm) will require exponential size. On the other hand, I do not see how this fact translates to a statement on the computational hardness of this specific task. (E.g., I have no reason to believe it is PSPACE-hard.) So this confuses me a bit (but I certainly have less cred in complexity than any of you guys). – Gil Kalai Aug 21 '11 at 19:51
  • @Gil: what I was thinking was that any deterministic algorithm which requires superpolynomial time cannot be simulated (in your strong sense) in polynomial space. Hence unless P=NP the problem of generating a full transcript for any algorithm for an NP complete problem is outside of polynomial space computation. Perhaps I have misunderstood your meaning though. – Joe Fitzsimons Aug 22 '11 at 01:04
  • @Gil: I think the answers from myself and Ryan have converged. His lower bound and my upper bound now match at NP$\cup$coNP. Hopefully this answers your question. – Joe Fitzsimons Aug 24 '11 at 17:39
  • Here is a definition that I think might be closer to what you had in mind by simulation: A simulates B iff there is a mapping from inputs of B to inputs of A and a partial onto mapping from configurations of A to configurations of B, and these are easy to compute (e.g. AC^0). This is similar to simulation in state transition systems. – Kaveh Sep 06 '11 at 14:31
  • Dear Kaveh, what do you mean by "configurations"? – Gil Kalai Sep 09 '11 at 15:32

3 Answers


Problem: Let A be a deterministic algorithm for 3-SAT. Is the problem of completely simulating the algorithm A (on every instance of the problem) PSPACE-hard?

I don't understand the statement of this problem. But I think there is a natural way to formalize your more general question which may shed a little light on it. Maybe I am completely missing your point, but hopefully you still find something interesting to read here.

1) Does the definition make sense (in the wide sense of the word "sense")? Is it well known, or the same as (or similar to) some well-known thing? (My formulation was informal and in particular when we move from specific problems like SAT to a complexity class like NP we have to worry about various things that I neglected.)

One way to make sense of the task exactly simulate algorithm $Y$ is the following. Let's fix the model to be one-tape Turing machines for simplicity; what I will say can apply to any typical computational model.

For each algorithm $Y$ and input $x$, we can define its computation history $H_Y(x,i,j)$: Given integers $i$ and $j$ which range from $0$ to the running time of $Y$, $H_Y(x,i,j)$ equals the content of the $j$th cell of the tape of Turing machine $Y$ on input $x$ in time step $i$. (And if the tape head is reading the $j$th cell in the $i$th step, include that too along with the machine's state.) Of course, computation histories come up all the time in complexity theory.

Now, one could argue that any algorithm which can decide the language

$C_Y = \{(x,i,j,\sigma)~|~H_Y(x,i,j)=\sigma\}$

(or simulate the function $H_Y$ on all inputs) is solving the task exactly simulate algorithm $Y$, because it has the ability to print every little part of every possible computation of algorithm $Y$. Certainly, given an oracle for $C_Y$ one could do a step-by-step simulation of $Y$.
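To make this formalization concrete, here is a minimal sketch (my own toy model, not part of Ryan's answer): a deterministic one-tape machine given as a transition table, with membership in $C_Y$ decided by direct step-by-step simulation, exactly as the oracle argument suggests. All names here (`run_history`, `in_C_Y`, the toy machine `delta`) are hypothetical illustrations.

```python
def run_history(delta, start_state, x, max_steps):
    """Run the TM and return its computation history:
    history[i] is the tape content at the start of step i (blank = '_')."""
    tape = dict(enumerate(x))
    state, head = start_state, 0
    history = []
    for _ in range(max_steps + 1):
        width = max(len(x), head + 1)
        history.append(''.join(tape.get(j, '_') for j in range(width)))
        key = (state, tape.get(head, '_'))
        if key not in delta:          # no transition: the machine has halted
            break
        state, sym, move = delta[key]
        tape[head] = sym
        head += 1 if move == 'R' else -1
    return history

def in_C_Y(delta, start_state, x, i, j, sigma):
    """Decide (x, i, j, sigma) in C_Y by simulating i steps of Y."""
    h = run_history(delta, start_state, x, i)
    row = h[min(i, len(h) - 1)]       # a halted machine stays in its final config
    cell = row[j] if j < len(row) else '_'
    return cell == sigma

# toy machine: rewrite every 0 as 1 while moving right, halt on blank
delta = {('s', '0'): ('s', '1', 'R'), ('s', '1'): ('s', '1', 'R')}
print(in_C_Y(delta, 's', '001', 3, 0, '1'))  # True: cell 0 was rewritten at step 0
```

For a polynomial-time $Y$ this direct simulation is itself polynomial time, which is the point of Ryan's remark that $P^+$ is just $P$.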

2) Suppose we equip ourselves with all the standard conjectures regarding computational complexity. Can we say what C+ is supposed to be for some familiar complexity classes. (E.g. C = NP, C = P-space,..)? Can we guess what are the complexity classes C so that C+ is the ideal spanned by C?

This is still an interesting question, under the above proposal. For deterministic time classes, nothing changes. $P^+$ is just $P$: we can decide $C_Y$ in polytime, for every polytime algorithm. Similarly $EXP^+ = EXP$. Also $PSPACE^+$ is still $PSPACE$. For nondeterministic time complexity classes, the situation becomes more interesting. In that case, an algorithm $Y$ can have multiple computation histories, so $H_Y$ is no longer well-defined. However we still want to print some computation history, so our "exact simulation" would have to isolate a specific nondeterministic computation history and then print its values. For an $NP$ algorithm $Y$, one can say that $C_Y \in P^{NP}$: we can use the $NP$ oracle to binary search for the "first" accepting computation history (in lex order), then once we have obtained it, print the relevant bits. For an $NEXP$ algorithm $Y$, one can say $C_Y \in EXP^{NP}$, for similar reasons.
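The binary search Ryan describes is the classic prefix search with an $NP$ oracle. Here is a sketch of the idea for the simplest case, finding the lex-first satisfying assignment of a formula (rather than a full computation history); `sat_oracle` is a hypothetical brute-force stand-in for the $NP$ oracle, and all names are my own:

```python
from itertools import product

def sat_oracle(clauses, n, prefix):
    """NP-oracle stand-in: does some assignment extending `prefix` satisfy
    every clause? (Literal +v / -v means x_v / not x_v.)"""
    for tail in product([False, True], repeat=n - len(prefix)):
        a = list(prefix) + list(tail)
        if all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def lex_first_witness(clauses, n):
    """Pin down the lex-smallest satisfying assignment (False < True)
    with one oracle call per variable, plus one initial call."""
    if not sat_oracle(clauses, n, []):
        return None
    prefix = []
    for _ in range(n):
        # if no satisfying extension starts with False, it must start with True
        prefix.append(False if sat_oracle(clauses, n, prefix + [False]) else True)
    return prefix

# (x1 or x2) and (not x1 or x2): lex-first solution is x1=False, x2=True
print(lex_first_witness([[1, 2], [-1, 2]], 2))  # [False, True]
```

With the oracle replaced by a real $NP$ oracle this is a polynomial-time procedure, which is how one isolates the "first" accepting computation history before printing its bits.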

Can we put $NP^+$ in a smaller class? I don't know. Notice we cannot simply redefine

$C_Y = \{(x,i,j,\sigma) \mid \text{there exists a computation history } H_Y \text{ such that } H_Y(x,i,j)=\sigma\}$

to try to put $C_Y$ in $NP$, because we need the history string to be the same for all quadruples involving $x$, in order to "exactly simulate algorithm $Y$".

Anyway, this issue of not being able to "print a witness" to an $NEXP$ computation without going to $EXP^{NP}$ does arise in some situations, such as circuit complexity. If $NEXP$ has polynomial size circuits, then is it also the case that $C_Y \in P/poly$ for every nondeterministic $2^{n^k}$ time $Y$? Impagliazzo, Kabanets, and Wigderson answered this question affirmatively in 2001. Their proof is extremely interesting, invoking tools from derandomization and diagonalization (why would derandomization be necessary for such a result?) and it turns out to be a very useful theorem for proving circuit lower bounds for $NEXP$ functions.

Is there hope to actually prove something about this operation?

Maybe... that depends on whether you are happy with the above formalization of your question. For a deterministic 3-SAT algorithm $Y$, I think the above exact simulation of $Y$ problem would be $P^{NP(1)}$-hard, where $P^{NP(1)}$ is polynomial time with one query to $NP$. (The annoying syntax of StackExchange requires that I write (1) instead of the alternative. Earlier I said $C_Y$ should be $P^{NP}$-hard, but I am not sure of the details: you may see how to generalize the below.)

We give a polytime many-one reduction from every $L \in P^{NP(1)}$ to $C_Y$. Given such an $L$, let $M$ be a machine recognizing $L$ that does a single $NP$ query. WLOG, that query is a SAT formula. Also WLOG, the SAT query can be "postponed" until the very last step of the computation, in the following sense: there is a polynomial time algorithm $A(x)$ which prints a formula $F$ and bit $b$, such that for all $x$,

$M(x)$ accepts iff $A(x)=(F,b)$ such that ($SAT(F)$ XOR $b$) = 1.

($M$ may reject iff $F$ is satisfiable, or it may accept; the bit $b$ captures this.)

For simplicity, let's say when $Y$ ends its computation, it moves its tape head to cell 1, writes "accept" or "reject" in that cell, and loops forever. Then, asking if $(F,T,1,accept) \in C_Y$ for sufficiently large $T$ will tell us if $F$ is accepted. (In general, we just need that it's efficient to compute the instance $y$ of $C_Y$ which will tell us.) Note this already shows that $C_Y$ is both $NP$-hard and $coNP$-hard; the latter is true because $(F,T,1,reject) \in C_Y$ iff $F$ is not satisfiable.

The final reduction from $L$ to $C_Y$ is: given $x$, run $A(x)$ getting $(F,b)$. If $b=0$ then output $(F,T,1,accept)$, else if $b=1$ output $(F,T,1,reject)$.

For every $x$ we are producing (in polynomial time) a $y$ such that $M(x)$ accepts iff $y \in C_Y$.

(Thanks to Joe for demanding that I be clearer about this part.)

But I don't see (for example) how to get $\Sigma_2 P$-hardness. To reduce $\Sigma_2$-SAT to the problem, it would appear you'd need to write a 3-CNF SAT instance $x$ which simulates your deterministic algorithm $Y$ within it (where $Y$ is solving Tautologies on various subproblems). But as $Y$ has no given time bound, that 3-CNF $x$ could be huge, so you don't necessarily get a polynomial-time reduction. Unless I am missing something.

Ryan Williams
  • Ryan, many thanks for your answer. I am interested in how hard it is to simulate a deterministic algorithm Y for 3-SAT. And the question is whether, no matter what Y is, this is PSPACE-hard. (Your understanding, which covers simulating nondeterministic algorithms as well, is also interesting and perhaps is the correct question, but I only considered simulating deterministic algorithms.) – Gil Kalai Aug 19 '11 at 11:32
  • Yes, I thought the last paragraph of my answer would address this part. – Ryan Williams Aug 19 '11 at 18:44
  • I see. Yes indeed it does. I also suspected it might be provably $P^{NP}$-hard which is interesting (but I am not sure if I understand your proof). Do you expect that $P^{NP}$ is the correct answer? I also suspected that proving something beyond $P^{NP}$ would be difficult. Going back from what we can prove to what we should believe, Ryan, what do you think the answer is? – Gil Kalai Aug 19 '11 at 20:06
  • Well, the complexity of $C_Y$ will differ depending on the algorithm $Y$. Some $C_Y$ may be very difficult and others may be much easier. But I think that for every algorithm $Y$, you probably won't do much better than saying "$C_Y$ is $P^{NP}$-hard". (I don't think that for every $Y$ you can get $\Sigma_2 P$-hardness, for the intuitive reason I gave above.) – Ryan Williams Aug 20 '11 at 00:30
  • Ryan, you say that "there is a polynomial reduction from a $P^{NP}$ complete language ... to $C_Y$", but your reduction seems to use multiple instances of $C_Y$. Surely this shows instead that there is a polynomial reduction from a $P^{NP}$-complete problem to $P^{C_Y}$? – Joe Fitzsimons Aug 21 '11 at 15:42
  • Notice I said "polynomial-time Turing reduction" so it is fine to call $C_Y$ multiple times. Also note $P^{P^{NP}}=P^{NP}$ (if that helps). – Ryan Williams Aug 22 '11 at 17:48
  • @Ryan: Yes, I know you use Cook reductions, which is what worries me. I agree with your statement and proof that $P^{NP}$ has a Cook reduction to $C_Y$. However it is incorrect to infer from this that $P^{NP} \in C_Y$. What you have instead proved is that $NP \in C_Y$, since what you are doing is showing that $P^{NP} \in P^{C_Y}$ (see p2 in www.wisdom.weizmann.ac.il/~oded/PS/CC/l9.ps for example). While this would still hold for $C_Y = P^{NP}$ or harder, it is incorrect to infer it from your proof. I have downvoted because I disbelieve your conclusion, not your proof. – Joe Fitzsimons Aug 23 '11 at 02:36
  • I'll reverse the downvote once you fix the answer, because it is indeed a nice observation, but I want to flag the error. Alternatively, if someone can point out to me why I'm wrong, I'll happily admit to being an idiot, and reverse the vote. – Joe Fitzsimons Aug 23 '11 at 02:37
  • @Ryan, thanks for the update. It clears up much of my concern (I've removed my downvote), though I am still not following the coNP-hardness argument. Certainly you can have a bit that is 0 or 1 depending on whether F is satisfiable, but I don't see a witness for the reject state. Am I missing something? Sorry to keep raising objections, but I am trying to figure out what is causing the discrepancy between our answers. – Joe Fitzsimons Aug 24 '11 at 04:16
  • You have a deterministic algorithm $Y$. It will conclude either that $F$ is sat, or that $F$ is unsat. To determine whether $Y$ will say $F$ is unsat, you can still check whether a certain polynomial-time computable string $y(F)$ is contained in $C_Y$. That is, $F \in UNSAT \iff y(F) \in C_Y$. Hence $C_Y$ is $coNP$-hard. I don't know what you mean by "witness for the reject state". – Ryan Williams Aug 24 '11 at 05:54
  • @Ryan: Thanks for the reply. I've just realised that I had misread your formulation of the decision problem. Indeed, if you have to be able to accept depending on whether the bit at a particular location in the transcript is equal to $\sigma$ (as indeed you do in your formulation) then this removes my objection. I agree that this gives a lower bound of NP$\cup$coNP. Indeed I believe in this case that this is tight, since my algorithm would work given the appropriate oracle. Sorry to have taken up your time with my misunderstanding. – Joe Fitzsimons Aug 24 '11 at 08:24
  • No worries, Joe! – Ryan Williams Aug 24 '11 at 17:09

I initially posted an incorrect answer, so hopefully this is an improvement.

I'm going to start out by considering the 3SAT example. In your comment on Ryan's answer you say

I am interested in how hard it is to simulate a deterministic algorithm Y for 3-SAT. And the question is whether, no matter what Y is, this is PSPACE-hard.

The answer to this is that it is not PSPACE-hard for at least some Y, assuming that NP$\neq$PSPACE, but there exist other algorithms for which it is PSPACE-hard. Showing the latter is trivial: we simply construct an algorithm which interprets the bit string representing the 3SAT formula instead as a TQBF problem, which it then solves before solving the 3SAT instance. Obviously there is nothing special about TQBF in this case, so the algorithm can be made arbitrarily hard to simulate. So we should only care about lower bounds on simulation over all algorithms for a given problem, rather than for a specific algorithm.

With that in mind, we construct the following algorithm to solve 3SAT:

Take a register of $n$ bits (initially all set to 0) to contain a trial solution, together with a register containing the 3SAT instance, a counter register of size $\lceil\log_2 (c + 1)\rceil$ initially set to 1, and two flag bits (call these the fail flag and the accept flag). Here $c$ is the number of clauses. The algorithm then proceeds as follows:

  • If the value of the clause counter $k$ is less than or equal to $c$, and the trial solution is not $111......1$, check whether clause $k$ is satisfied. If it is not, set the fail flag. Increment the counter.
  • If the value of the clause counter $k$ is less than or equal to $c$, and the trial solution is $111......1$, check whether clause $k$ is satisfied. If it is not, set the fail flag. Increment the counter.
  • If $k$ exceeds $c$, and the trial solution is not $111......1$, check if the fail flag is set. If so then increment the trial solution, reset the counter $k$ to 1, and unset the fail flag. If the fail flag was not set, set the accept flag, set the clause counter $k$ to zero, set the trial solution to zero and halt.
  • If $k$ exceeds $c$, and the trial solution is $111......1$, check if the fail flag is set. If the fail flag was not set, set the accept flag. Set the clause counter $k$ to zero, set the trial solution to zero and halt.

When the algorithm halts, the state of the accept flag tells you whether or not the 3SAT formula can be satisfied.
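For concreteness, the sweep above can be sketched in Python (a minimal sketch with my own encoding, not Joe's exact machine: clauses are triples of signed integers, where literal $v$ means variable $|v|-1$, negated when $v<0$):

```python
def brute_force_3sat(n, clauses):
    """Brute-force 3-SAT in the style described above: sweep trial
    assignments 0 .. 2^n - 1; for each, scan the clauses with a
    counter k, setting a fail flag on any violated clause."""
    c = len(clauses)
    trial = 0
    while True:
        fail = False
        for k in range(1, c + 1):          # clause counter k = 1..c
            clause = clauses[k - 1]
            satisfied = any(
                ((trial >> (abs(v) - 1)) & 1) == (1 if v > 0 else 0)
                for v in clause)
            if not satisfied:
                fail = True                 # set the fail flag
        if not fail:
            return True                     # accept flag set; halt
        if trial == (1 << n) - 1:           # trial solution is 111...1
            return False                    # halt without accepting
        trial += 1                          # next trial, reset k

# e.g. (x1 or x2 or x3) and (not x1 or not x2 or x3):
# brute_force_3sat(3, [[1, 2, 3], [-1, -2, 3]]) returns True
```

The point of keeping the sweep this rigid (no early exit within a trial, fixed increments) is exactly what makes the register state at any step easy to predict.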

Now, if I want to compute the state of the algorithm at time $i$ I have an algorithm to compute this in polynomial time with a single call to an NP oracle as follows:

Note that for any $i$, assuming the accept bit has not yet been set, the state of the registers can be calculated in polynomial time, since the value of $k$ and the trial solution register $t$ are simply functions of $i$. Determining whether the fail flag is set can be done in polynomial time, simply by checking whether the current state of the trial solution register satisfies all clauses with index less than or equal to the current value of $k$. The fail flag is set if and only if this is not the case. The accept flag is set to zero.

Similarly, if the accept bit has already been set, the state of the registers can be efficiently computed, since everything is zero except the accept bit, which is set.

So the only hardness comes in determining whether the accept bit is set. This is equivalent to determining whether the given 3SAT instance has a satisfying assignment less than $t$. If it does, then the accept bit must necessarily be set, and if it does not, then the accept bit must necessarily be zero. Clearly this problem is itself NP-complete.

Thus the state of the system at step $i$ can be determined in polynomial time with a single query to an NP oracle.
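This reconstruction can be sketched as follows. The step accounting is my own assumption (one step per clause check plus one transition step, so $c+1$ steps per trial assignment), and `sat_strictly_below` stands in for the single NP-oracle call:

```python
def satisfies(assignment, clause):
    """Does the bit-vector `assignment` satisfy one clause of signed
    literals (literal v = variable |v|-1, negated when v < 0)?"""
    return any(((assignment >> (abs(v) - 1)) & 1) == (1 if v > 0 else 0)
               for v in clause)

def state_at_step(i, clauses, sat_strictly_below):
    """Registers (trial, k, fail, accept) after step i of the sweep,
    assuming c + 1 steps per trial: c clause checks, one transition.
    `sat_strictly_below(t)` is the one NP-oracle query: is some
    assignment strictly below t satisfying?"""
    c = len(clauses)
    trial, pos = divmod(i, c + 1)
    if sat_strictly_below(trial):
        # the accept flag was set earlier; everything else was zeroed
        return {"trial": 0, "k": 0, "fail": False, "accept": True}
    # otherwise trial and k are direct functions of i ...
    k = pos + 1
    # ... and the fail flag records whether a checked clause failed
    fail = any(not satisfies(trial, clauses[j]) for j in range(min(pos, c)))
    return {"trial": trial, "k": k, "fail": fail, "accept": False}
```

In a real instantiation the oracle would be the NP query; for testing it can of course be faked by brute force, which is exactly the point: everything outside that one call is polynomial-time bookkeeping.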

Important update: Initially I had misread Ryan's formulation of exact simulation as a decision problem, and so my claim that the decision version was in NP was incorrect. Given the problem of deciding whether bit $j$ of the state at step $i$ on input $x$ equals $\sigma$, as formulated in Ryan's answer, there are 3 possibilities:

  1. The bit is constant, independent of whether $F$ is satisfiable. As there are only two possible states for the system at this time (both of which can be computed in P), determining whether this is the case, and if so whether the value is equal to $\sigma$, is in P.
  2. The bit is equal to $\sigma$ if and only if $F\in SAT$. This problem is clearly in NP, as a satisfying assignment of $F$ acts as a witness.
  3. The bit is equal to $\sigma$ if and only if $F\in UNSAT$, in which case the problem is in coNP.

Clearly, deciding which of these three cases holds can be done in polynomial time, simply by comparing the value the bit takes if $F\in SAT$ with the value it takes if $F\in UNSAT$. Thus the exact simulation problem is in NP $\cup$ coNP. So, as Ryan's lower bound and my upper bound match, assuming both are correct, I think we have an answer: $C_Y = NP\cup coNP$.

Note that there is nothing special about 3SAT here, and the same could be done for any problem in NP. Further, the same trick can be used for any non-deterministic complexity class, not just NP; the non-deterministic classes seem to be the hard ones to simulate. For deterministic classes you can simply run the best algorithm and halt at step $i$. So, with this in mind, full simulation of at least some deterministic algorithm for a problem is only as hard as solving the problem itself.

Joe Fitzsimons
  • Can't you use the same technique to show that b) reaching the same solution as algorithm A is already PSPACE-hard? Have the algorithm choose between one of two possible solutions depending on the solution of a PSPACE-hard problem encoded into the input. – Peter Shor Aug 23 '11 at 13:06
  • @Peter: Sure, you could make it arbitrarily hard, but I thought the interesting question was the upper bound minimized over A, which turns out to be NP. – Joe Fitzsimons Aug 23 '11 at 13:36
3

Very interesting thought! Here's an extended comment that was too long to post as such:

Regarding the definition in (1) as such, that is:

Start with a complexity class $C$ which is described by some computational task. Given an algorithm A to perform every instance of this computational task, consider a new complexity class $C_A$ which is described by the computational task of completely simulating $A$. Then we can (hopefully) define an "ideal" of complexity classes:

$C^+ = \{C_A : A \text{ an algorithm}\}$.

I believe you're going to quickly run into an issue with undecidability for non-trivial membership in $C^+$. In particular, given the descriptions of two TMs $M$ and $M^{\prime}$, it's well known that deciding whether they accept the same language (i.e. whether $L(M) \stackrel{?}{=} L(M^{\prime})$) is undecidable in general, by reduction from the Halting Problem.

Further, given the description of a Turing Machine $M$, there is always a trivial simulation: just construct a new Turing Machine $M^*$ that calls $M$ as a subroutine (using its description), accepting if $M$ accepts and rejecting if $M$ rejects. This only costs a linear overhead. Specifically, if $M$ runs in $t(n)$ computational steps, then $M^*$ runs in $O(t(n))$ steps (where by "run" I mean "simulates $M$ exactly").
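As a toy illustration of this wrapper construction (the modeling here is entirely mine: machines are Python generators that yield each configuration and finally yield `('halt', accept_bit)`):

```python
def wrap(M):
    """Build M*: run M step for step as a subroutine, observing every
    configuration, and accept iff M accepts -- only constant overhead
    per simulated step, hence O(t(n)) overall."""
    def M_star(x):
        last = None
        for config in M(x):   # replay the entire run of M exactly
            last = config
        _, accept = last      # the final item is ('halt', accept_bit)
        return accept
    return M_star

def toy_machine(x):
    """A toy M: accepts iff the bit list x has even parity."""
    parity = 0
    for i, bit in enumerate(x):
        parity ^= bit
        yield ('scan', i, parity)   # one configuration per step
    yield ('halt', parity == 0)
```

The wrapper sees the whole run of $M$, which is the sense in which the simulation is exact rather than merely input/output-equivalent.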

This suggests to me that if there's any serious traction to be gained here, the precise way you go about making the definition of the ideal of a complexity class is going to be fairly important. In particular, if you intend to show that the ideal of, say, $NP$ is $PSPACE$-hard, you'll have to rule out the notion of a trivial, linear-time simulation of the $NP$ machines in question.

With respect to the homotopy methods described in the talk/paper and GPS's $PSPACE$-completeness result, I believe the "gap" you're witnessing between $PPAD$-completeness and $PSPACE$-hardness is due to the distinction between finding any NE (in the sense of END OF LINE) and finding a unique NE given a specific, initial configuration (in the sense of OTHER END OF LINE).

Daniel Apon
  • Dear Daniel, thanks for your answer. I am not sure I see why undecidability of membership in $C^+$ (or even in $C$ itself) is relevant to the question. The questions are: a) based on all our conjectures and beliefs regarding separations of complexity classes, is it the case that for every algorithm A for 3-SAT the computational task of simulating A (for every instance of 3-SAT) is $\Delta_3^P$-hard? b) Is it possible to actually prove such (or a similar) statement? (Maybe something very basic is wrong with my questions, but I am not sure decidability is the issue.) – Gil Kalai Aug 17 '11 at 17:42
  • Gil, see if this speaks to your question at all: Fix some (arbitrary) algorithm A for 3-SAT. Consider a new algorithm B. Then we want to claim: B simulates A, in the sense of your (b) -- i.e. that B reaches the same solutions as A on all well-defined inputs. Since we can view algorithms as Turing machines, we can take the view that A accepts 3-SAT (by the supposition). To prove the claim, it appears to me that we then need to decide the question "Does B (viewed as a TM) accept 3-SAT as well?", which leads into undecidability issues. – Daniel Apon Aug 17 '11 at 17:51
  • I should point out that it's specifically the phrase "for all algorithms A" in the definition of $C^+$ coupled with the notion of saying/guessing (or, presumably, computing or deciding) the elements of the set that gives me pause, if nothing else. Once we start considering computation on properties of arbitrary algorithms or languages or classes/sets of languages, the Halting problem often seems to creep in. – Daniel Apon Aug 17 '11 at 18:28
  • Dear Daniel, you wrote "Then we want to claim: B simulates A, in the sense of your (b) -- i.e. that B reaches the same solutions as A on all well-defined inputs." This is not what I mean by "B simulates A". For me, B simulates A means that it describes precisely the entire "running" of algorithm A, not just reaching the same "solutions" (or output). – Gil Kalai Aug 18 '11 at 05:53
  • Regarding your second comment: it looks like we can find a reasonable restriction on the algorithms we consider, or formulate the question a little differently, e.g. consider the statement "For every algorithm A to solve 3-SAT it is PSPACE-hard to simulate A." I don't see how the halting problem creeps in (any more than in other questions in computational complexity). – Gil Kalai Aug 18 '11 at 07:20