
I am looking for one (or more) examples of a parametric class of algorithms $P_t$ for approximately solving a class $\cal A$ of algorithmic questions with the following properties:

1) Solving the problems in $\cal A$ exactly or even approximately is computationally quite hard as a function of the input size $n$. (What I have especially in mind is that it is P-complete, or as hard as solving linear equations, etc. but examples where approximating $\cal A$ is in another CC class are also welcome.)

2) For a fixed $t$ the algorithms in $P_t$ are computationally easy. (Here I am thinking about bounded-depth circuits of various kinds, maybe also $NC^1$ or $NC$.)

3) The relation between $t$ and $n$ is such that $P_t$ is a practically good algorithm (at least in some interesting regime for the input) for approximating solutions to problems in $\cal A$. Perhaps it even comes close to our best available approximation algorithms for the class $\cal A$ in some interesting regime.

So another way to put it: are there practically good algorithms for approximately solving linear equations, computing determinants, linear programming, etc., which are asymptotically in a very low computational complexity class, say bounded-depth computation?

The algorithmic question can be a decision problem, an optimization problem, a sampling problem, or a problem of some other kind. (My motivating example came from a sampling problem.)

Gil Kalai
  • Sounds similar to analogues of the complexity class APX and of the idea of a PTAS, but at a "lower" level, e.g. "PTAS is to NP-hard as [your question] is to P-hard". Is that accurate? – usul Nov 03 '14 at 19:10
  • It is similar but the crucial difference is that I am interested in cases where approximation is also asymptotically hard and that the "approximation scheme" manifests a practical advantage in some regime. We could ask my question where the algorithmic task is NP-hard: there we want a low complexity level approximation scheme which is practically good (compared to other P-algorithms) in some interesting regime of inputs. – Gil Kalai Nov 04 '14 at 05:49
  • Probably you wanted to write AC0 and not NC0. – domotorp Nov 09 '14 at 12:05
  • I wanted to write NC^1 – Gil Kalai Nov 09 '14 at 13:00
  • 2
    Recent analysis of the simplex method for linear programming has shown that, though exponential in the worst case, it is actually polylogarithmic(!) if you make small random perturbations of the constraints. Would this fit what you are looking for? See, for example, http://www.math.cmu.edu/~af1p/Teaching/ATIRS/Papers/SMOOTH/simplex.pdf – Nick Alger Nov 10 '14 at 08:41
  • offhand/ left field idea, possibly some connection, there is some newer research motivated by noise/ errors in hardware that attempts to show that even noisy circuits compute approximately correct answers... it has not been tied in a lot with complexity theory yet... – vzn Nov 10 '14 at 16:33
  • Dear Nick, this is of some relevance but there are some big differences. 1) The results you mention are asymptotic; they indicate that a certain variant of an algorithm works better asymptotically on smoothed cases than on worst cases. I am looking for cases where asymptotically the algorithm performs badly but in a certain interesting regime of inputs it is competitive or better. 2) The complexity of the simplex algorithm is governed by the need to solve linear equations, so even if the number of pivot steps is small it is not close to bounded-depth computing or the other low classes I ask about. – Gil Kalai Nov 12 '14 at 10:09
  • 2
    Here's an optimization example... maybe. MAX-LIN: given a large (inconsistent) system of linear equations over GF(2), find a solution that maximizes the number of equations satisfied. This problem cannot be approximated with a factor larger than 1/2 unless P=NP, hence even approximately solving is very difficult. Nevertheless, it seems a 1/2-approximation (which is apparently the best possible) can be computed in TC0: a random assignment works, so we can choose poly(n) assignments (hard-coded) and evaluate the linear system on each one, outputting the best. Maybe I'm missing your point. – Ryan Williams Nov 15 '14 at 07:17
  • Dear Ryan, this is a nice example and it is relevant to the question. A (hypothetical) variation which would be precisely to my point could be this: an algorithm for solving MAX-LIN to a factor of 0.6 which is of low-level complexity (TC0, AC0, ...) and yet performs practically better than other P-algorithms. So (unlike the 0.5 case) I am asking for a problem which is asymptotically pretty hard, and yet practically, for some interesting values of the input, the best way to solve it is via a very low-level algorithm. (Or an analog device that performs a low-level algorithm.) – Gil Kalai Nov 15 '14 at 15:48
  • 1
    This may or may not be relevant: http://arxiv.org/abs/1412.2470 shows how to compute the determinant of a bounded tree width matrix in Logspace by crucially using the version of Courcelle's theorem in Elberfeld et al. (http://eccc.hpi-web.de/report/2010/062/). I suspect that this can be made into #NC^1 by giving a strong representation of the bounded tree width matrix and even TC^0 if the matrix is bounded tree depth matrix and using the version of Courcelle's theorem from http://eccc.hpi-web.de/report/2011/128/ – SamiD Dec 12 '14 at 19:42
  • What about Weisfeiler-Leman for Graph Isomorphism? Here "approximation" would not be in the sense of approximating an optimization problem but rather of "only gets the right answer on a subset of instances". Const-dim WL can be done in TC^0, but even vertex refinement (the 2D case) already gets random cases in expected linear time, and works well in practice (it's the core of practical algorithms). Not sure if this kind of "approximation" is what you had in mind though. – Joshua Grochow Feb 12 '24 at 16:41
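The random-assignment 1/2-approximation for MAX-LIN over GF(2) mentioned in the comments can be sketched concretely. The function name and input encoding below are illustrative only: each equation is a pair (list of variable indices, right-hand side bit), and since a uniformly random assignment satisfies each equation with probability 1/2, taking the best of several random assignments satisfies about half the equations in expectation (the derandomized TC0 version would hard-code the assignments instead of sampling them):

```python
import random

def max_lin_half_approx(equations, n, trials=100, seed=0):
    """Illustrative sketch: 1/2-approximation for MAX-LIN over GF(2).

    equations: list of (indices, b) meaning sum of x[i] for i in indices == b (mod 2).
    n: number of variables. Returns (best assignment found, #equations it satisfies).
    """
    rng = random.Random(seed)

    def satisfied(x):
        # Count equations satisfied by assignment x over GF(2).
        return sum(1 for idx, b in equations if sum(x[i] for i in idx) % 2 == b)

    best_x, best = None, -1
    for _ in range(trials):
        # Draw a uniformly random 0/1 assignment and keep the best seen so far.
        x = [rng.randrange(2) for _ in range(n)]
        s = satisfied(x)
        if s > best:
            best_x, best = x, s
    return best_x, best
```

For an inconsistent system that pairs each equation with its negation, every assignment satisfies exactly half the equations, so the sketch attains the 1/2 factor exactly on such inputs.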
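The vertex-refinement (1-dimensional Weisfeiler-Leman) heuristic from the last comment can likewise be sketched; the function names here are illustrative, not from any specific library. Colors are iteratively refined by the multiset of neighbor colors; different stable color histograms certify non-isomorphism, while equal histograms are inconclusive:

```python
def color_refinement(adj):
    """Illustrative sketch of vertex refinement (1-dimensional WL).

    adj: adjacency dict {vertex: set of neighbors}. Returns the sorted
    stable color histogram, an isomorphism-invariant of the graph.
    """
    colors = {v: 0 for v in adj}
    for _ in range(len(adj)):
        # Signature of v: its color plus the sorted multiset of neighbor colors.
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # Canonically renumber the signatures to get the refined coloring.
        mapping = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: mapping[sig[v]] for v in adj}
        if new == colors:  # coloring is stable
            break
        colors = new
    return sorted(colors.values())

def wl_distinguishes(adj1, adj2):
    """True if refinement certifies non-isomorphism; False is inconclusive."""
    return color_refinement(adj1) != color_refinement(adj2)
```

As the comment notes, this works remarkably well on random instances, but it is only a partial test: for example, a 6-cycle and two disjoint triangles are both 2-regular, so refinement cannot tell them apart.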
