
We know that linear programs (LPs) can be solved exactly in polynomial time using the ellipsoid method or an interior-point method such as Karmarkar's algorithm. Some LPs with a super-polynomial (even exponential) number of variables or constraints can also be solved in polynomial time, provided we can design a polynomial-time separation oracle for them.
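To fix notation, this is the standard setup: the LP is

$$\max_{x \in \mathbb{R}^n} \; c^\top x \quad \text{subject to} \quad a_i^\top x \le b_i \quad (i \in I),$$

where the index set $I$ may be exponentially large. A separation oracle, given a candidate point $\hat{x}$, either certifies that $\hat{x}$ satisfies all constraints or returns some $i \in I$ with $a_i^\top \hat{x} > b_i$; the ellipsoid method accesses the feasible region only through such an oracle, so its running time does not depend on $|I|$ directly.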

What about semidefinite programs (SDPs)? Which classes of SDPs can be solved exactly in polynomial time? When an SDP cannot be solved exactly, can we always design an FPTAS/PTAS for it? What are the technical conditions under which this can be done? Can we solve an SDP with an exponential number of variables or constraints in polynomial time, if we can design a polynomial-time separation oracle for it?
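Throughout, by an SDP I mean a program in the standard form

$$\min_{X} \; \langle C, X \rangle \quad \text{subject to} \quad \langle A_i, X \rangle = b_i \;\; (i = 1, \dots, m), \quad X \succeq 0,$$

where $\langle A, B \rangle = \operatorname{tr}(A^\top B)$ and $X \succeq 0$ means that $X$ is symmetric positive semidefinite.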

Can we solve the SDPs that arise in combinatorial optimization problems (MAX-CUT, graph coloring) efficiently? If we can solve them only to within a $1+\epsilon$ factor, won't that affect constant-factor approximation algorithms (like the 0.878 factor of the Goemans-Williamson MAX-CUT algorithm)?
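For reference, the SDP in the Goemans-Williamson algorithm for MAX-CUT on a weighted graph $G = (V, E)$ is

$$\max \; \frac{1}{2} \sum_{(i,j) \in E} w_{ij} \, (1 - X_{ij}) \quad \text{subject to} \quad X_{ii} = 1 \;\; \forall i \in V, \quad X \succeq 0,$$

and the $0.878$ guarantee comes from random-hyperplane rounding of a (near-)optimal solution $X$.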

Any good reference on this would be highly appreciated.

Arindam Pal
  • In fact the method works for convex programming in general. – Suresh Venkat Nov 30 '12 at 07:07
  • There are even libraries for SDP solving available. – adrianN Nov 30 '12 at 11:38
  • There are at least two reasons you can't solve a general SDP in polynomial time. (1) There exist SDPs whose solutions require exponentially many bits to write down. (2) SDPs can encode the sum-of-square-roots problem, which is not known to be solvable in polynomial time. – Robin Kothari Nov 30 '12 at 15:36
  • @RobinKothari For SDPs, "solvable in polynomial time" is usually replaced by "gets within an additive $\epsilon$ of OPT in time polynomial in $1/\epsilon$", IIRC. P.S. How does an SDP encode sum-of-square-roots? – Suresh Venkat Nov 30 '12 at 17:15
  • @SureshVenkat: Say we have a $2 \times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$. Impose that it is positive semidefinite and that $d = 1$. This means $b = c$ and $a \ge b^2$, so $b$ is upper bounded by the square root of $a$. Now we can maximize the sum of several such $b$'s; the optimum value will be the sum of the square roots of the respective $a$'s (written out as an SDP after the comments). – Robin Kothari Nov 30 '12 at 18:40
  • If we can solve only within a $1+\epsilon$ factor, will it not have an effect on constant factor approximation algorithms (like 0.878 for Goemans-Williamson MAX-CUT algorithm)? – Arindam Pal Dec 01 '12 at 02:57
  • It's not multiplicative but additive. Also, see http://en.wikipedia.org/wiki/Semidefinite_programming#Algorithms – Suresh Venkat Dec 01 '12 at 03:44
  • Does the separation oracle approach work with interior point methods, or is this possible only for the ellipsoid algorithm? – G. Bach Dec 10 '15 at 14:56
  • Question: when we say there is a polynomial-time algorithm for solving an SDP, does the algorithm find the optimal solution exactly or approximately? Answer: it depends entirely on your problem. If your SDP problem is convex, the solution is exact for that problem. – Tong Wai Nov 12 '19 at 02:15
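In matrix form, the sum-of-square-roots encoding from Robin Kothari's comment above is the SDP

$$\max \; \sum_{k=1}^{m} b_k \quad \text{subject to} \quad \begin{pmatrix} a_k & b_k \\ b_k & 1 \end{pmatrix} \succeq 0 \quad (k = 1, \dots, m),$$

whose optimum value is $\sum_{k=1}^{m} \sqrt{a_k}$, because a $2 \times 2$ matrix of this form is positive semidefinite exactly when $a_k \ge b_k^2$.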

1 Answer


The ellipsoid method and interior-point methods can be extended to solve SDPs as well. You can refer to any standard text on SDPs for the details. Here's one:
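For a concrete sense of what solving such an SDP looks like in practice, here is a minimal sketch, assuming the CVXPY modeling library and a small made-up instance (the unweighted 5-cycle); the graph, the numerical solver CVXPY picks, and the rounding step are illustrative choices, not something prescribed by the texts.

```python
import numpy as np
import cvxpy as cp

# Illustrative instance: the unweighted 5-cycle (any small graph would do).
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

# Goemans-Williamson relaxation: X is the Gram matrix of unit vectors v_i,
# so X must be positive semidefinite with unit diagonal.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [X[i, i] == 1 for i in range(n)]

# Relaxed cut value: (1/2) * sum over edges of (1 - <v_i, v_j>).
objective = cp.Maximize(0.5 * sum(1 - X[i, j] for i, j in edges))

prob = cp.Problem(objective, constraints)
prob.solve()  # hands the problem to whichever numerical SDP solver is installed
print("SDP value (upper bound on the maximum cut):", prob.value)

# Random-hyperplane rounding: factor X ~= V V^T and cut with a random hyperplane.
w, U = np.linalg.eigh(X.value)
V = U * np.sqrt(np.clip(w, 0, None))  # rows are the (approximate) vectors v_i
signs = np.sign(V @ np.random.randn(n))
cut = sum(1 for i, j in edges if signs[i] != signs[j])
print("Cut found by rounding:", cut)
```

On the 5-cycle the SDP optimum (about 4.52) is strictly larger than the true maximum cut of 4, which is exactly the kind of integrality gap the 0.878 factor in the analysis accounts for.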

Jagadish
  • Nice reference too, thanks! I was wondering: when we say there is a polynomial-time algorithm for solving an SDP, does the algorithm find the optimal solution exactly or approximately? – Tim Mar 15 '13 at 21:40