
(For people who don't know what RLT is): I am maximizing an indefinite quadratic function over the standard simplex, i.e., the standard quadratic optimization problem. A well-known approach is to relax the problem by linearizing the objective: instead of $\max \sum_{i} \sum_{j} P_{ij} x_i x_j$ for some symmetric $P$, we maximize the linear function $\max \sum_{i} \sum_{j} P_{ij} X_{ij}$ over a symmetric matrix variable $X$ (the intent is $X = xx^\top$, but since we cannot impose that equality, dropping it yields a relaxation). We also take the constraints $a_i^\top x - b_i \leq 0$, multiply them pairwise, add the constraints $(a_i^\top x - b_i) (a_j^\top x - b_j) \geq 0$, and replace every $x_i x_j$ term with $X_{ij}$. This is the RLT relaxation. If we also add the SDP constraint $X - xx^\top \succeq 0$, the relaxation is tightened further. Let's call this method the RLT/SDP relaxation.
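Spelled out for the simplex feasible set $\{x : x \geq 0,\ \sum_i x_i = 1\}$, the RLT products give: multiplying $x_i \geq 0$ by $x_j \geq 0$ yields $X_{ij} \geq 0$; multiplying $\sum_i x_i = 1$ by $x_j$ yields $\sum_i X_{ij} = x_j$; and by a Schur complement, $X - xx^\top \succeq 0$ is equivalent to
$$\begin{pmatrix} X & x \\ x^\top & 1 \end{pmatrix} \succeq 0,$$
which is exactly the system imposed in the code.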

(Question for the people who already know RLT): I ran numerical experiments where I generate a symmetric matrix $P$ randomly and then apply the RLT/SDP relaxation. The results are crazy-good: almost all of the time the upper bound equals the global optimum (and plugging the optimized $x$ back in gives a matching lower bound). I am now thinking, every beautiful thing has an ugly truth as well. Can you please let me know an adversarial way to design a $P$ such that the RLT/SDP relaxation is not that good?

(Matlab Code Below -- uses YALMIP, CPLEX, and MOSEK):

P = rand(20); P = P - tril(P,-1) + triu(P,1)'; %generate random symmetric matrix (change 20 to any number)
n = size(P,1);
%CPLEX GLOBAL OPTIM BELOW
x = sdpvar(n,1);
obj = x'*P*x;
constraints = [x >= 0, sum(x) == 1];
sol = optimize(constraints, -obj, sdpsettings('solver', 'cplex', 'cplex.optimalitytarget',3));
value(obj) %global optimum value (no semicolon, so the value is displayed)
%RLT RELAXATION BELOW!
X = sdpvar(n); %matrix variable
x = sdpvar(n,1); %original variable
obj2 = P(:)'*X(:); %RLT objective
constraints2 = [x >= 0, sum(x) == 1, sum(X,2) == x, X(:) >= 0, [X, x; x', 1] >= 0]; %RLT/SDP system: sum(X,2)==x comes from multiplying sum(x)==1 by x; the symmetric block at the end is a PSD constraint in YALMIP
sol2 = optimize(constraints2, -obj2, sdpsettings('solver', 'mosek')); %optimize
value(obj2) %upper bound from the relaxation -- wow!
asked by independentvariable
    I don't have a mathematical argument for it, but computational experience is that randomly-generated instances are often easy to solve, in most contexts. In your case, I would try with instances from QPLIB. – mtanneau Aug 27 '20 at 20:49
  • Nice, I agree! But these files don't store the matrix $P$ and it will be hard for me to reproduce these examples. – independentvariable Aug 27 '20 at 21:16
  • 1
    You can download them in .qplib format, which has the Q matrix in COO format. Shouldn't be too hard to parse. Or in .lp format, which you can feed to CPLEX directly. In fact, instance QPLIB_0018 has exactly the form you are considering. – mtanneau Aug 27 '20 at 22:08
  • Yes, I parsed, thank you! I am trying now. – independentvariable Aug 27 '20 at 23:19
  • Well, RLT/SDP is still always globally optimal. I need adversarially designed cases. – independentvariable Aug 27 '20 at 23:41
  • 1
    Try replacing sum(x) ==1 with 1 >= x. You'll see things change big time. – Mark L. Stone Aug 28 '20 at 00:37
  • And I mean BIIIIIIIIIIG time. – Mark L. Stone Aug 28 '20 at 00:46
  • Do you mean sum(x)<=1 as a constraint? – independentvariable Aug 28 '20 at 06:48
  • No, I mean what I wrote. constraints = [1 >= x >= 0]; Try it, BTW, there is nothing special about the choice of "1"; it just makes the problem bounded in the absence of the sum(x) == 1 constraint. – Mark L. Stone Aug 28 '20 at 13:27
    Thanks so much for this suggestion. But I think by adding auxiliary redundant constraints we can do worse and worse. I am looking for a theoretical answer — more like a structure for the indefinite quadratic matrix such that the RLT/SDP approximation is weak. – independentvariable Aug 28 '20 at 13:47
  • 1
    I made a modification to your example which makes the SDP very non-optimal. I leave the theoretical explanation, if any, to someone else. I played around some with alternate P matrices, but that didn't seem to do much without modifying the constraints. – Mark L. Stone Aug 29 '20 at 03:24
  • @MarkL.Stone Thanks so much for trying this :) I observe that if we add a concave term to the objective function, then this may worsen the quality of relaxation as the second term will not have the matrix variable in it. – independentvariable Aug 30 '20 at 22:24
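A minimal sketch of the modification Mark L. Stone suggests (box constraints $0 \leq x \leq 1$ instead of the simplex), assuming the same YALMIP/CPLEX/MOSEK setup as the code above; this is an illustration of the idea, not necessarily the exact experiment he ran. The extra RLT products come from pairing $x_i \geq 0$ with $1 - x_j \geq 0$:

```matlab
% Box-constrained variant: replace sum(x) == 1 with 0 <= x <= 1
P = rand(20); P = P - tril(P,-1) + triu(P,1)'; %same random symmetric P as above
n = size(P,1);
%CPLEX global solve
x = sdpvar(n,1);
obj = x'*P*x;
optimize([0 <= x, x <= 1], -obj, sdpsettings('solver', 'cplex', 'cplex.optimalitytarget', 3));
value(obj) %global optimum
%RLT/SDP relaxation of the box constraints
X = sdpvar(n); %matrix variable
x = sdpvar(n,1); %original variable
obj2 = P(:)'*X(:);
U = repmat(x,1,n) - X;                            %from x_i*(1 - x_j) >= 0
W = ones(n) - repmat(x,1,n) - repmat(x',n,1) + X; %from (1 - x_i)*(1 - x_j) >= 0
%note the (:) indexing: it forces elementwise >= 0, since YALMIP reads a
%symmetric matrix inequality as a PSD constraint
constraints2 = [0 <= x, x <= 1, X(:) >= 0, U(:) >= 0, W(:) >= 0, [X, x; x', 1] >= 0];
optimize(constraints2, -obj2, sdpsettings('solver', 'mosek'));
value(obj2) %upper bound; compare with the global optimum above
```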

0 Answers