22

Many important results in computational complexity theory, and in particular "structural" complexity theory, have the interesting property that they can be understood as fundamentally following (as I see it...) from algorithmic results giving an efficient algorithm or communication protocol for some problem. These include the following:

  • IP = PSPACE follows from a space-efficient recursive algorithm simulating interactive protocols, and an efficient interactive protocol for evaluating totally quantified Boolean formulae. In fact, any complexity class equality $A = B$ can be seen as following from two efficient algorithms: an algorithm for problems in $A$ that is efficient with respect to the resource bounds defining $B$, and vice versa.
  • Proving NP-completeness of a problem amounts to finding an efficient algorithm that reduces a known NP-complete problem to it (a concrete sketch of such a reduction follows this list).
  • The (arguably!) crucial ingredient in the Time Hierarchy Theorem is an efficient universal simulation of Turing machines.
  • The recent result of Ryan Williams that NEXP $\not\subseteq$ ACC is based on an efficient algorithm to solve circuit satisfiability for ACC circuits.
  • The PCP Theorem states that efficient gap amplification is possible for constraint satisfaction problems.
  • etc., etc.
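
To make the "reductions are algorithms" reading concrete, here is a toy sketch (mine; the encoding and names are illustrative, not taken from any particular textbook) of the classic polynomial-time reduction from 3-SAT to Independent Set:

```python
# Toy sketch of the textbook 3-SAT -> Independent Set reduction: the formula
# is satisfiable iff the constructed graph has an independent set of size
# equal to the number of clauses. Runs in time quadratic in the formula size.

def sat_to_independent_set(clauses):
    """clauses: list of 3-literal clauses; a literal is (variable_name, is_negated)."""
    # One vertex per literal occurrence, tagged with its clause index.
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]

    edges = set()
    for u in vertices:
        for v in vertices:
            if u >= v:
                continue
            (i, (var_u, neg_u)), (j, (var_v, neg_v)) = u, v
            if i == j:
                # Triangle inside each clause: pick at most one literal per clause.
                edges.add((u, v))
            elif var_u == var_v and neg_u != neg_v:
                # Contradictory occurrences: x and not-x cannot both be picked.
                edges.add((u, v))

    return vertices, edges, len(clauses)

# (x or not-y or z) and (not-x or y or not-z)
clauses = [(('x', False), ('y', True), ('z', False)),
           (('x', True), ('y', False), ('z', True))]
vertices, edges, k = sat_to_independent_set(clauses)
```

The whole content of the NP-completeness proof is that this transformation is itself an efficient algorithm.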

My question (which is possibly hopelessly vague!) is as follows: Are there any important results in structural complexity theory (as distinct from "meta-results" like the relativisation barrier) which are not known to have a natural interpretation in terms of efficient algorithms (or communication protocols)?

sdcvvc
Ashley Montanaro
  • 8
    I sort of hope the answer is "no," because I think complexity is really about understanding the power of algorithms!

    I was going to say PARITY not in $AC^0$ almost qualifies, but now I don't think so. You can view the Switching Lemma as a randomized algorithm that lets you swap two adjacent layers of a circuit without a big size blow-up (and it can even be derandomized: http://eccc.hpi-web.de/report/2012/116/).

    – Joshua Grochow Oct 25 '12 at 14:53
  • 2
    @AshleyMontanaro: Perhaps complexity theory is linked "by definition" to the (time/space) efficiency of algorithms. As soon as you move away from efficiency you find fundamental results like the undecidability of the halting problem, but you're no longer in the "complexity" domain. However, to attempt a partial answer: I think that the logical characterization of complexity classes is an important result that gives a different perspective, one not (directly) tied to algorithms. – Marzio De Biasi Oct 25 '12 at 15:23
  • 3
    In particular, I would have listed the descriptive characterization of NP in terms of existential second-order logic. This is purely about expressive power and is not primarily about algorithms. However, Courcelle's theorem suggests that this distinction is not real. – Suresh Venkat Oct 25 '12 at 16:25
  • 3
    Would you say that the Razborov-Smolensky proof of PARITY not in $AC^0$ contains an algorithmic result at its core? And what about query complexity lower bounds, like the one that says a quantum computer cannot solve the unordered search problem in $o(\sqrt{n})$ queries? – Robin Kothari Oct 25 '12 at 19:28
  • Yes, the Razborov-Smolensky proof seems like a nice counterexample. Query and communication complexity lower bounds are also problematic (which is why I tried to cheat by using the word "structural" :). – Ashley Montanaro Oct 26 '12 at 08:53
  • 2
    see also http://cstheory.stackexchange.com/questions/3229/proving-lower-bounds-by-proving-upper-bounds – sdcvvc Oct 26 '12 at 12:55
  • One can also think of the Razborov-Smolensky proof, and generally any similar result, as algorithmic (in the context of Razborov-Rudich natural proofs). – Kaveh Oct 27 '12 at 05:53
  • I would not include NP-completeness; here is my perspective on this. Relating the difficulty of problems to each other via reductions, and proving that a problem is one of the hardest in its complexity class through completeness, are IMHO just powerful and successful surrogates for our difficulty in proving lower bounds. Think about the impact if only we were able, for instance, to prove a super-polynomial lower bound for an NP-complete problem. – Massimo Cafaro Oct 27 '12 at 09:07

1 Answer

20

For many lower bounds in algebraic complexity, I do not know of a natural interpretation in terms of efficient algorithms. For example:

  • the partial derivatives technique of Nisan and Wigderson (a toy illustration of this measure appears after this list)

  • the rank-of-Hessian technique of Mignon and Ressayre (giving the currently best known lower bound on permanent versus determinant)

  • the degree bound of Strassen (and Baur-Strassen)

  • the connected components technique of Ben-Or.
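
To give a flavor of why these feel non-algorithmic, here is a toy illustration (mine, in Python with sympy; the function name is made up) of the Nisan-Wigderson partial-derivative measure: the dimension of the space spanned by all order-$k$ partial derivatives of a polynomial. Lower bounds follow from bounding this dimension for a circuit class and evaluating it on a hard polynomial; no algorithm for any problem is involved.

```python
# Toy computation of the Nisan-Wigderson partial-derivative measure:
# dim span{ all order-k partial derivatives of f }.
import sympy as sp
from itertools import combinations_with_replacement

def partial_derivative_dim(f, variables, k):
    derivs = []
    for subset in combinations_with_replacement(variables, k):
        d = f
        for v in subset:
            d = sp.diff(d, v)
        d = sp.expand(d)
        if d != 0:
            derivs.append(d)
    if not derivs:
        return 0
    # Express each derivative in the monomial basis and take the rank.
    polys = [sp.Poly(d, *variables) for d in derivs]
    monomials = sorted({m for p in polys for m in p.monoms()})
    matrix = sp.Matrix([[p.as_dict().get(m, 0) for m in monomials] for p in polys])
    return matrix.rank()

x = sp.symbols('x1:5')  # x1, ..., x4
print(partial_derivative_dim(x[0] * x[1] * x[2] * x[3], x, 2))  # 6: all C(4,2) degree-2 monomials
print(partial_derivative_dim(sum(v**4 for v in x), x, 2))       # 4: only the x_i^2 survive
```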

In all of the above results, the proofs really seem to use a property of the functions involved, and that property itself seems unrelated to the existence of any particular algorithm (let alone an efficient one).

For non-algebraic results, here are a couple of thoughts:

  • The standard counting argument for the $n \log n$ lower bound on sorting does not appear to have an interpretation in terms of efficient algorithms. However, there is an adversarial version of this lower bound [1], in which an algorithm, given any decision tree that uses too few comparisons, efficiently constructs a list that the decision tree sorts incorrectly. The adversarial version, while not hard, is significantly more involved than the counting proof. (Note that this is quite a bit stronger than what one gets by applying the standard adversary lower bound technique, e.g. as in these notes, since in [1] the adversary itself is efficient. A brute-force sketch of the adversary idea appears after this list.)

  • I think I've changed my mind about PARITY not in $AC^0$ (even the original proof, let alone the Razborov-Smolensky proof, as pointed out by @RobinKothari). Although the Switching Lemma can be viewed as a randomized (or deterministic) algorithm that lets you swap two adjacent layers of a circuit without a big blow-up in size, I think this really has a different flavor than many results in complexity, and specifically the ones you cite. For example, Williams's proof that NEXP $\not\subseteq$ ACC is based crucially on the existence of a good algorithm for a particular problem. In contrast, if one could prove something like the Switching Lemma nonconstructively, it would be just as good for proving PARITY not in $AC^0$.
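
To illustrate the adversarial version of the sorting bound, here is a brute-force sketch (mine, in Python; it is emphatically not the efficient adversary of [1], since it enumerates all $n!$ rankings): answer each comparison so as to keep at least half of the still-consistent inputs alive. After $t$ comparisons at least $n!/2^t$ rankings survive, so any decision tree of depth below $\log_2 n! = \Omega(n \log n)$ has at least two consistent inputs left and must mis-sort one of them.

```python
# Brute-force sketch of the adversary argument for the n log n sorting bound
# (NOT the efficient adversary of Atallah-Kosaraju [1]).
from itertools import permutations

class SortingAdversary:
    def __init__(self, n):
        # Every possible ranking of the n elements is still consistent.
        self.alive = set(permutations(range(n)))

    def compare(self, i, j):
        """Answer "is element i less than element j?", keeping the larger half alive."""
        yes = {p for p in self.alive if p[i] < p[j]}
        no = self.alive - yes
        if len(yes) >= len(no):
            self.alive = yes
            return True
        self.alive = no
        return False

    def fooling_inputs(self):
        # If two or more rankings survive, the tree's single output order is
        # wrong for at least one of them.
        return list(self.alive)[:2]

adv = SortingAdversary(4)
adv.compare(0, 1); adv.compare(2, 3); adv.compare(0, 2)
print(len(adv.alive))  # >= 4!/2**3 = 3 rankings still alive after 3 comparisons
```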

Because of these last two examples - especially sorting, where the standard proof is nonconstructive - it seems to me that the question may not just be about natural interpretations in terms of efficient algorithms, but also somehow about the constructiveness / effectiveness of the proofs of various complexity results (depending on what the OP had in mind). That is, the standard sorting lower bound is not constructive or algorithmic, but there is a constructive, algorithmic proof of the same result.

[1] Atallah, M. J. and Kosaraju, S. R. An adversary-based lower bound for sorting. Inform. Proc. Lett. 13(2):55-57, 1981.

Joshua Grochow