
I have a quadratic program that looks like this:

For a fixed vector $b$ and matrices $A_1, \dots, A_n$, find column vectors $x_1, \dots, x_n$ that minimize $\sum_{i=1}^n x_i^T \mathbf{1}\mathbf{1}^T x_i$ subject to $\sum_{i=1}^n A_i x_i = b$ and $x_i \in \mathcal{X}_i$, where each $\mathcal{X}_i$ is a convex set.

I've been able to solve the problem in two ways: with proximal gradient methods and with ADMM (both first-order methods), but both converge very slowly and I have to choose a "correct" learning rate.

I was wondering if there is a way to use second order methods on this problem or a practical method for speeding up the convergence.

AspiringMat

3 Answers


Depending on the convex sets $\mathcal{X}_i$, it may be possible to pose the problem as a conic optimization problem. In the Mosek modeling cookbook you can see what can be formulated in conic form. The resulting conic optimization problem can be solved using an interior-point optimizer, e.g. Mosek. This approach exploits the convexity.
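For this particular objective the conic reformulation is straightforward (a sketch, leaving the sets $\mathcal{X}_i$ abstract; $\mathcal{Q}_r^3 = \{(t, s, y) : 2ts \ge y^2,\ t, s \ge 0\}$ denotes the rotated second-order cone). Since $x_i^T \mathbf{1}\mathbf{1}^T x_i = (\mathbf{1}^T x_i)^2$, one can introduce epigraph variables $t_i$:

$$\begin{aligned}
\min_{x, t}\ & \sum_{i=1}^n t_i\\
\text{s.t.}\ & \left(t_i, \tfrac{1}{2}, \mathbf{1}^T x_i\right) \in \mathcal{Q}_r^3, \qquad i = 1, \dots, n,\\
& \sum_{i=1}^n A_i x_i = b, \qquad x_i \in \mathcal{X}_i,
\end{aligned}$$

where the cone membership with second entry $\tfrac{1}{2}$ enforces exactly $t_i \ge (\mathbf{1}^T x_i)^2$.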

ErlingMOSEK

I see multiple approaches.

  • An interior-point NLP solver like Ipopt
  • A sparse SQP solver like WORHP
  • Variable elimination: for every equality constraint you can eliminate one variable; you should then be left with an unconstrained problem
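The variable-elimination idea can be sketched with a null-space method (a minimal NumPy illustration under assumed small random sizes, ignoring the set constraints $x_i \in \mathcal{X}_i$): stack the constraint into $[A_1 \cdots A_n]\,x = b$, take a particular solution plus a null-space basis, and the problem in the reduced variable is unconstrained least squares.

```python
import numpy as np

# Hypothetical sizes: n = 2 blocks, each x_i in R^3, m = 2 equality constraints.
rng = np.random.default_rng(1)
n, d, m = 2, 3, 2
A = [rng.standard_normal((m, d)) for _ in range(n)]
b = rng.standard_normal(m)

Astack = np.hstack(A)                             # [A_1 ... A_n], shape (m, n*d)
x_p, *_ = np.linalg.lstsq(Astack, b, rcond=None)  # particular solution of Ax = b

# Null-space basis of Astack from the SVD: rows of Vt beyond the numerical rank.
_, s, Vt = np.linalg.svd(Astack)
rank = int(np.sum(s > 1e-10 * s[0]))
Z = Vt[rank:].T                                   # columns span {z : Astack z = 0}

# The objective is ||S x||^2 where S x = (1^T x_1, ..., 1^T x_n).
S = np.kron(np.eye(n), np.ones((1, d)))
# Substituting x = x_p + Z y gives unconstrained least squares in y.
y, *_ = np.linalg.lstsq(S @ Z, -S @ x_p, rcond=None)
x = x_p + Z @ y

print(np.allclose(Astack @ x, b))                 # constraint still satisfied
```

Every feasible point has the form $x_p + Zy$, so minimizing over $y$ is equivalent to the original equality-constrained problem.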
worldsmithhelper
  • I like the IP solver approach, but I suspect a general purpose solver like Ipopt will have worse performance than one specific for convex problems (cvxpy, for example, will correctly exploit the fact that the cone here is a product of cones before passing to various solvers). Another alternative is OSQP, which uses ADMM. – Robert Bassett May 03 '22 at 18:58

Don't use an optimization solver. Use a linear system solver.

The Lagrangian of your problem is $$L(x; \lambda) = \sum_{i=1}^n x_{i}^{T} \mathbf{1} \mathbf{1}^{T} x_{i} - \lambda^{T}\Big(\sum_{i=1}^n A_{i} x_{i} - b\Big)$$

With respect to each primal variable $x_i$ you have stationarity when $$2 \mathbf{1} \mathbf{1}^{T} x_{i} - A_{i}^{T} \lambda = 0, \qquad i = 1, \dots, n.$$ Combine these with your constraint $\sum_i A_{i} x_{i} = b$ to get the following linear system

$$\left(\begin{array}{cccc} 2 \mathbf{1} \mathbf{1}^{T} & & & -A_{1}^{T}\\ & \ddots & & \vdots\\ & & 2 \mathbf{1}\mathbf{1}^{T} & -A_{n}^{T}\\ A_{1} & \cdots & A_{n} & \mathbf{0} \end{array}\right) \left(\begin{array}{c} x_{1} \\ \vdots \\ x_{n} \\ \lambda \end{array}\right) = \left(\begin{array}{c} 0\\ \vdots\\ 0\\ b\end{array} \right).$$

Solve it using your language of choice to get your solution.
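A minimal NumPy sketch of assembling and solving this system (hypothetical small random sizes, and ignoring the set constraints $x_i \in \mathcal{X}_i$). Note that $\mathbf{1}\mathbf{1}^T$ has rank one, so the KKT matrix is singular and a least-squares solver is safer than a direct solve:

```python
import numpy as np

# Hypothetical sizes: n = 2 blocks, each x_i in R^3, m = 2 equality constraints.
rng = np.random.default_rng(0)
n, d, m = 2, 3, 2
A = [rng.standard_normal((m, d)) for _ in range(n)]
b = rng.standard_normal(m)

ones = np.ones((d, d))  # the rank-one matrix 1 1^T

# Stationarity rows: block-diagonal 2*11^T with -A_i^T in the lambda column.
top = np.hstack([np.kron(np.eye(n), 2 * ones),
                 np.vstack([-Ai.T for Ai in A])])
# Constraint row: [A_1 ... A_n  0].
bottom = np.hstack(A + [np.zeros((m, m))])

K = np.vstack([top, bottom])
rhs = np.concatenate([np.zeros(n * d), b])

# K is singular (1 1^T has rank one), so use least squares, not solve().
sol, *_ = np.linalg.lstsq(K, rhs, rcond=None)
x = [sol[i * d:(i + 1) * d] for i in range(n)]

print(np.allclose(sum(Ai @ xi for Ai, xi in zip(A, x)), b))  # prints True
```

Because the problem is a convex quadratic over an affine set, an exact KKT solution exists, so the least-squares residual is zero up to rounding and the recovered $x_i$ satisfy the constraint.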

Robert Bassett