
Let $G=(V,E,w)$ be a graph in which each edge $e\in E$ has an associated weight $w_e$. The max-$k$-cut problem asks to partition $V$ into $k$ subsets $P_1,\dots,P_k$ so as to maximize $\sum_{1\leq r<s\leq k}\sum_{i\in P_r,j\in P_s}w_{ij}$.

If $k=2$, or if we assume non-negative weights, there are $O(1)$-approximation algorithms. I am wondering whether we can get an $O(1)$ or logarithmic approximation for general $k$ and possibly negative weights.

Moreover, I am more interested in the case where the objective is to maximize $\left|\sum_{1\leq r<s\leq k}\sum_{i\in P_r,j\in P_s}w_{ij}\right|$.

  • See "Approximating the Cut-Norm via Grothendieck's Inequality" for $k=2$, and "Improved approximation algorithms for MAX k-CUT and MAX BISECTION" for general $k$ with non-negative weights. – Daogao Liu Aug 03 '21 at 07:44
  • Hasn't this question already been asked and answered, in the negative, for $k=2$? See https://cstheory.stackexchange.com/questions/2312/max-cut-with-negative-weight-edges?rq=1 – Neal Young Aug 03 '21 at 13:52
  • Yes, it has been answered for k=2 – Daogao Liu Aug 03 '21 at 14:56
  • But isn't the answer there (by Peter Shor) that it's not possible for $k=2$? So it's also not possible for general $k$? So the answer to your question is no? – Neal Young Aug 03 '21 at 16:00
  • The answer depends on whether we maximize the sum or its absolute value. In the former case, we cannot even determine the sign of the optimal value in polynomial time; in the later case, we can get a constant factor approximation. – Yury Aug 03 '21 at 21:28
  • @Yury, what's the idea behind the constant-factor approximation for maximizing the absolute value of the cut weight when $k=2$? – Neal Young Aug 03 '21 at 21:59

1 Answer


There is no approximation algorithm for the problem of maximizing $\sum_{(i,j)\text{ is cut}} w_{ij}$, since it's even NP-hard to determine whether the optimal value is positive. However, there is a constant-factor approximation algorithm for the problem of maximizing $\left|\sum_{(i,j)\text{ is cut}} w_{ij}\right|$. I will briefly describe how this algorithm works.

I. For a fixed instance of the problem, let $OPT_k$ be the optimal value when we partition $V$ into $k$ parts $P_1, \dots, P_k$; we allow some parts to be empty. We now show that $$OPT_k/2 \leq OPT_2 \leq OPT_k.$$

  • Let $(P_1, P_2)$ be an optimal solution with 2 parts. Pad this partition with $k-2$ empty parts. The value of the obtained solution $(P_1, P_2, \varnothing, \dots, \varnothing)$ is $OPT_2$. Thus, $OPT_k \geq OPT_2$.
  • Now consider an optimal solution $(P_1, \dots, P_k)$ with $k$ parts. Randomly divide the parts into two groups. Let $P_1'$ be the union of the parts in the first group and $P_2'$ the union of the parts in the second. It's easy to see that $${\mathbb E}\left[\sum_{(i,j)\text{ is cut by }(P_1', P_2')} w_{ij}\right] = \frac12 \sum_{(i,j)\text{ is cut by }(P_1,\dots, P_k)} w_{ij},$$ since each cut edge stays cut with probability $1/2$. Therefore, by Jensen's inequality, $${\mathbb E}\left[\left|\sum_{(i,j)\text{ is cut by }(P_1', P_2')} w_{ij}\right|\right] \geq \left|\frac12 \sum_{(i,j)\text{ is cut by }(P_1,\dots, P_k)} w_{ij}\right|.$$ We conclude that $OPT_2 \geq OPT_k/2$.

Thus, it is sufficient to design an approximation algorithm for $OPT_2$.
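The sandwich bound of part I is easy to check by brute force on a small instance. The following sketch (my own illustration, not part of the answer) enumerates all partitions of a random graph with both positive and negative weights:

```python
import itertools
import random

def signed_cut(w, labels):
    """Signed total weight of edges whose endpoints lie in different parts."""
    n = len(labels)
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if labels[i] != labels[j])

def opt(w, n, k):
    """Brute-force maximum of |signed cut weight| over all partitions into <= k parts."""
    return max(abs(signed_cut(w, labels))
               for labels in itertools.product(range(k), repeat=n))

random.seed(0)
n, k = 6, 3
# A random instance with positive and negative weights (only entries i < j are used).
w = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

opt2, optk = opt(w, n, 2), opt(w, n, k)
assert optk / 2 <= opt2 <= optk  # OPT_k/2 <= OPT_2 <= OPT_k
```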

II. Recall that the Grothendieck inequality states that $$\max_{x,y\in\{-1,1\}^n} \sum_{i,j}w_{ij} x_i y_j \geq \frac{1}{K} \max_{\text{unit vectors }u_1,\dots,u_n,v_1,\dots, v_n} \sum_{i,j}w_{ij} \langle u_i, v_j\rangle$$ where $K$ is an absolute constant. Using semidefinite programming and the Grothendieck inequality, we can get a constant factor approximation for the following Quadratic Programming problem [Alon and Naor 2004].

$$\max_{x,y\in\{-1,1\}^n} \sum_{i,j}w_{ij} x_i y_j$$

The same algorithm gives a constant factor approximation for $\max_{z\in\{-1,1\}^n} \left|\sum_{i,j}w_{ij} z_i z_j\right|$ (we first find $x$ and $y$ and then output the better of the two solutions $z=x$ and $z=y$).
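To illustrate the objects involved (with brute force standing in for the SDP-based Alon–Naor algorithm), the following sketch compares the quadratic and bilinear maxima on a small random matrix. It checks only the easy relations — the bilinear maximum dominates the quadratic one, and "the better of $z=x$ and $z=y$" is a feasible quadratic solution — not the constant-factor analysis:

```python
import itertools
import random

random.seed(3)
n = 5
w = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]

def quad(z):
    # Quadratic form sum_{i,j} w_ij z_i z_j.
    return sum(w[i][j] * z[i] * z[j] for i in range(n) for j in range(n))

def bilin(x, y):
    # Bilinear form sum_{i,j} w_ij x_i y_j.
    return sum(w[i][j] * x[i] * y[j] for i in range(n) for j in range(n))

cube = list(itertools.product((1, -1), repeat=n))
Q = max(abs(quad(z)) for z in cube)                      # quadratic maximum
x_best, y_best = max(((x, y) for x in cube for y in cube),
                     key=lambda p: abs(bilin(*p)))
B = abs(bilin(x_best, y_best))                           # bilinear maximum

# Taking x = y = argmax_z shows Q <= B; the "better of z = x, z = y"
# rule outputs a feasible solution to the quadratic problem.
better = max(abs(quad(x_best)), abs(quad(y_best)))
assert Q <= B and better <= Q
```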

III. Let $W = \sum_{i<j}w_{ij}$ and $$QP= \max_{z\in\{-1,1\}^n} \left|\sum_{i<j}w_{ij} z_i z_j\right|=\max_{z\in\{-1,1\}^n} \left|W -2\sum_{\substack{i<j\\z_iz_j=-1}}w_{ij}\right|.$$ Note that there is one-to-one correspondence between solutions to the QP and the partitioning problem; namely, a solution $z_1,\dots,z_n$ defines the following partition $P_1 = \{i:z_i=1\}$ and $P_2=\{i:z_i = -1\}$.

If a solution $(P_1, P_2)$ has value $x$ (so the signed weight of its cut is $\pm x$), then the value of the corresponding QP solution is either $|W-2x|$ or $|W+2x|$. It follows that $|2\,OPT_2 - QP| \leq |W|$.
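The identity behind step III — the QP objective equals $|W - 2\cdot(\text{signed cut weight})|$ — and the resulting bound $|2\,OPT_2 - QP| \leq |W|$ can be verified exhaustively on a small instance. This is a sanity-check sketch, not part of the algorithm:

```python
import itertools
import random

random.seed(1)
n = 6
w = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w[i][j] = random.randint(-4, 4)

W = sum(w[i][j] for i in range(n) for j in range(i + 1, n))

def signed_cut(z):
    # Signed weight of edges cut by the partition P1 = {i : z_i = 1}, P2 = {i : z_i = -1}.
    return sum(w[i][j] for i in range(n) for j in range(i + 1, n)
               if z[i] != z[j])

best_qp = best_cut = 0
for z in itertools.product((1, -1), repeat=n):
    c = signed_cut(z)
    qp = abs(sum(w[i][j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n)))
    # Uncut pairs contribute +w_ij, cut pairs -w_ij, so the QP value is |W - 2c|.
    assert qp == abs(W - 2 * c)
    best_qp = max(best_qp, qp)      # QP
    best_cut = max(best_cut, abs(c))  # OPT_2

assert abs(2 * best_cut - best_qp) <= abs(W)  # |2 OPT_2 - QP| <= |W|
```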

IV. Now we solve the QP approximately, obtain a solution $z_1',\dots, z_n'$, and output the better of the following two solutions:

  • partition $(P_1', P_2')$ defined by the QP solution $z_1', \dots, z_n'$; the value of this solution is $\Omega(OPT_2) - O(|W|)$.
  • a random partition; the expected value of this solution is at least $|W|/2$.

It's easy to see that this algorithm gives a constant factor approximation for the partitioning problem: if $|W| \leq \varepsilon\, OPT_2$ for a suitable constant $\varepsilon$, the first solution has value $\Omega(OPT_2)$; otherwise, the second one does.
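Putting the pieces together, here is a minimal end-to-end sketch with two caveats: exact enumeration stands in for the Alon–Naor SDP step, and the random partition is replaced by a simple greedy partition guaranteeing $|W|/2$ (a standard derandomization, added here so the final factor-3 check is deterministic; it is not part of the answer above):

```python
import itertools
import random

def abs_cut(w, z):
    """|signed cut weight| of the 2-partition encoded by z in {1,-1}^n."""
    n = len(z)
    return abs(sum(w[i][j] for i in range(n) for j in range(i + 1, n)
                   if z[i] != z[j]))

def half_W_partition(w, n, maximize):
    """Greedy 2-partition with signed cut >= W/2 (maximize=True) or <= W/2
    (maximize=False); either way its |cut| is at least |W|/2."""
    z = [1]
    for v in range(1, n):
        a = sum(w[u][v] for u in range(v) if z[u] == 1)   # weight from v to side +1
        b = sum(w[u][v] for u in range(v) if z[u] == -1)  # weight from v to side -1
        # Placing v on side -1 cuts the edges to side +1 (weight a), and vice versa.
        if maximize:
            z.append(-1 if a >= b else 1)
        else:
            z.append(-1 if a < b else 1)
    return z

def approx_abs_max_cut(w, n):
    W = sum(w[i][j] for i in range(n) for j in range(i + 1, n))
    # Step (a): exact QP maximizer stands in for the Alon--Naor SDP rounding.
    z_qp = max(itertools.product((1, -1), repeat=n),
               key=lambda z: abs(sum(w[i][j] * z[i] * z[j]
                                     for i in range(n) for j in range(i + 1, n))))
    # Step (b): a partition with |cut| >= |W|/2 (derandomized random partition).
    z_half = half_W_partition(w, n, maximize=(W >= 0))
    return max(abs_cut(w, list(z_qp)), abs_cut(w, z_half))

random.seed(2)
n = 7
w = [[random.randint(-4, 4) for _ in range(n)] for _ in range(n)]
alg = approx_abs_max_cut(w, n)
opt2 = max(abs_cut(w, list(z)) for z in itertools.product((1, -1), repeat=n))
# With the exact QP step, alg >= max(OPT_2 - |W|, |W|/2) >= OPT_2 / 3.
assert opt2 / 3 <= alg <= opt2
```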

Yury