
Consider a weighted graph $G(V,E)$, where the weights are real numbers (both positive and negative allowed). Assume that $G$ has $\mathcal{O}(n\log n)$ edges.

How fast can we compute MAX-CUT on this graph? Can we compute (or approximate within $\epsilon$) MAX-CUT faster on sparse graphs than on dense ones?

Dimitris
  • Actually it's DENSE graphs on which MAX CUT is easy (i.e., a PTAS exists). See for example http://www.cs.brown.edu/~ws/papers/maxcut.pdf – Suresh Venkat Dec 07 '11 at 05:22
  • That's an interesting pointer. However, from a very quick diagonal pass over the paper, it seems that they don't consider weights; the authors say that "Previous algorithms for MaxCut have been extended to weighted instances when the weights define a metric. We conjecture that such instances can also be solved by an extension of our greedy algorithms." – Dimitris Dec 07 '11 at 05:38
  • I see. There's older work on certain kinds of dense weighted MAX CUT: www.icsi.berkeley.edu/cgi-bin/pubs/publication.pl?ID=1121 – Suresh Venkat Dec 07 '11 at 05:48

3 Answers


There cannot be significantly better algorithms for graphs with $O(n\log n)$ edges than there are for general graphs. The reason is the Benczúr–Karger cut sparsifier: it takes an arbitrary graph and outputs a graph with only $O(\frac{n\log n}{\epsilon^2})$ edges such that the value of every cut in the sparsified graph is within a $(1\pm\epsilon)$ factor of its value in the original graph. Therefore, a PTAS for sparse graphs would yield a PTAS for general graphs: simply run the cut sparsifier first, then approximate max-cut in the resulting sparse graph. Since it is NP-hard to approximate max-cut in general graphs to within a factor of 16/17, the same hardness holds for sparse graphs.

Note that there is no PTAS for max-cut even when all edge weights are non-negative. Things are even worse when negative edge weights are allowed, as in your question. See this question for the negative edge weight case: Max-cut with negative weight edges

Aaron Roth

Unless $\epsilon$ is very large, no approximation algorithms faster than the exact ones seem to be known. For sparse graphs, the fastest known exact algorithm appears to run in time $O\big((n+m)\min\{2^{m/5}, 2^{(m-n)/2}\}\big)$ for a graph with $n$ vertices and $m$ edges:

Alexander D. Scott and Gregory B. Sorkin. Faster Algorithms for MAX CUT and MAX CSP, with Polynomial Expected Time for Sparse Instances. In Proc. APPROX/RANDOM 2003, LNCS 2764, pp. 382–395, Springer, 2003. Link

The title refers to the fact that if the graph is a random graph (with edge probability $c/n$ for some $c \le 1$) then there is a polynomial expected time algorithm.
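For small instances, the exact problem can be illustrated with a brute-force baseline (my own sketch, not the Scott–Sorkin algorithm) that enumerates all $2^{n-1}$ bipartitions in $O(2^{n-1} m)$ time and, unlike the random-assignment heuristic, handles negative weights correctly:

```python
from itertools import product

def max_cut_bruteforce(n, edges):
    """Exact MAX-CUT by enumerating all 2^(n-1) bipartitions.

    n: number of vertices, labeled 0..n-1.
    edges: list of (u, v, w) triples with arbitrary real weights w,
    so negative weights are handled correctly.
    """
    best = float("-inf")
    # Fix vertex 0 on side 0: complementary assignments give the same cut.
    for bits in product((0, 1), repeat=n - 1):
        side = (0,) + bits
        best = max(best, sum(w for u, v, w in edges if side[u] != side[v]))
    return best
```

Note that with a negative edge the optimum can be the empty cut, which is why algorithms for the non-negative case do not transfer directly.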

Vincenzo

As far as the approximation portion of your question goes: you can compute a 2-approximation to MAX-CUT in $O(n+m)$ time using a simple randomized algorithm. For each vertex of the graph, flip a fair coin to decide which of the two sides to place it on, and output the resulting cut. Since each edge crosses the cut with probability 1/2, the expected value of the cut is $|E|/2$, while OPT is at most $|E|$.
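A minimal Python sketch of this random-assignment algorithm (names are my own; the 2-approximation guarantee assumes non-negative weights):

```python
import random

def random_cut(n, edges):
    """2-approximate MAX-CUT by random vertex assignment.

    n: number of vertices, labeled 0..n-1.
    edges: list of (u, v, w) triples with non-negative weights w.
    Each edge crosses the cut with probability 1/2, so the expected
    cut weight is half the total weight, hence at least OPT/2.
    """
    side = [random.randint(0, 1) for _ in range(n)]  # one coin flip per vertex
    cut_weight = sum(w for u, v, w in edges if side[u] != side[v])
    return side, cut_weight
```

Each run is a single linear pass over the edges, so repeating it a few times and keeping the best cut is still $O(n+m)$ per trial.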

Nicholas Mancuso
  • I guess the questioner would like to know whether there is a better approximation than Goemans–Williamson (.878...) when the graph has O(n log n) edges. – Mahdi Cheraghchi Dec 07 '11 at 04:44
  • thanks for the answer. As MCH mentions, I'm mainly interested in an $\epsilon$ approximation. – Dimitris Dec 07 '11 at 05:33
  • I notice that the questioner is interested in graphs that have negative edge weights -- note that this simple algorithm does not work with negative edge weights. – Aaron Roth Dec 08 '11 at 03:15
  • (And neither does the Goemans-Williamson algorithm...) – Aaron Roth Dec 08 '11 at 03:35