
I'm trying to repeatedly solve a "simple" (= small) optimization problem, with only minor changes to the objective function between solves. It is therefore important to keep the "time per solve" as low as possible. The problem is basically a QP with $Q := \operatorname{diag}(\alpha, \dots, \alpha)$ positive definite and linear constraints, i.e. of the following form:

$$ \begin{align} \min&\quad c'x + \alpha\cdot x'x \\ \text{s.t.}&\quad Ax \geq 0 \\&\quad x \geq 0 \end{align} $$

with decision variable $x \in \mathbb{R}^n$ and data $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m\times n}$, $\alpha \in \mathbb{R}_+$; $A$ stays fixed across solves, while $c$ and $\alpha$ change from one solve to the next.

Is there any way to pre-calculate a "somewhat" analytical solution that is (as) fast (as possible) to re-solve with changing values for $c$ and $\alpha$?

Remarks: $n$ and $m$ are fairly small, and memory as well as pre-calculation time are (mostly) not relevant.
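For concreteness, here is a minimal sketch of the plain re-solve that any faster scheme has to beat, with SciPy's generic SLSQP solver standing in for the actual IP solver and made-up instance data:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up instance data standing in for the real A, c, alpha (m = n = 10).
rng = np.random.default_rng(0)
n, m = 10, 10
A = rng.standard_normal((m, n))
c = rng.standard_normal(n)
alpha = 0.5

def solve_qp(c, alpha):
    """Plain re-solve of  min c'x + alpha*x'x  s.t.  Ax >= 0, x >= 0."""
    cons = {"type": "ineq", "fun": lambda x: A @ x, "jac": lambda x: A}
    res = minimize(lambda x: c @ x + alpha * (x @ x),
                   x0=np.ones(n),
                   jac=lambda x: c + 2.0 * alpha * x,
                   bounds=[(0.0, None)] * n,
                   constraints=[cons],
                   method="SLSQP")
    return res.x

x_star = solve_qp(c, alpha)
```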


To clarify what I mean by "somewhat": if the term $\alpha\cdot x'x$ did not exist, one possible way would be to precompute the polytope defined by $Ax \geq 0$, $x \geq 0$ and extract its vertices ($y_i$). The optimal solution $x^*$ must then lie at one of those vertices, so we can calculate $c'y_i$ for every $i$ and select $x^* := y_j$ for a $j$ that satisfies $c'y_j \leq c'y_i\ \forall i$. This essentially boils down to one matrix-vector multiplication and one argmin, which is really fast.
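In code, the fast re-solve step for that linear case would look roughly like this (NumPy sketch; the vertex matrix `Y` is assumed to come from an offline vertex enumeration, and the toy data is made up):

```python
import numpy as np

def resolve_lp(c, Y):
    """Fast re-solve of the linear case: Y holds the precomputed polytope
    vertices, one per row; one matrix-vector product plus one argmin."""
    vals = Y @ c                  # c'y_i for every vertex y_i
    return Y[np.argmin(vals)]

# Toy example with made-up vertices of the unit square as the "polytope".
Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
c = np.array([-1.0, 2.0])
x_star = resolve_lp(c, Y)         # -> array([1., 0.])
```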

Initial thought: since the set of feasible solutions is bounded and $Q$ is positive definite, a minimum exists and can be found by looking at the KKT points (any KKT point is also a global minimum, since the objective function is convex). Finding those KKT points means solving a bunch of equation systems, but those systems involve the changing $c$ and $\alpha$, which prevents truly pre-calculating everything...
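One way to make this initial thought concrete (only a sketch, and only viable because $m+n$ is tiny): fix a candidate active set, i.e. a subset of the rows of $A$ and of the bounds $x_i \geq 0$ held at equality, and stack those rows into $B$. Stationarity $c + 2\alpha x = B'\nu$ together with $Bx = 0$ gives $x = -\tfrac{1}{2\alpha} P c$ and $\nu = (BB')^{-1}B\,c$ with $P = I - B'(BB')^{-1}B$, so the matrices involved depend on $A$ alone and can be precomputed per active set. Re-solving for new $c$ and $\alpha$ then means scanning the candidate sets, doing two matrix-vector products each, and returning the first candidate that is primal and dual feasible (by convexity, that candidate is the global optimum). The NumPy sketch below assumes exactly this setup; function names and tolerances are made up:

```python
import numpy as np
from itertools import combinations

def precompute_active_sets(A):
    """Offline step: for every candidate active set S (a subset of the rows of
    A plus the bounds x_i >= 0, at most n of them), precompute
        P_S = I - B'(BB')^{-1}B   (projector onto the nullspace of B)
        M_S = (BB')^{-1}B         (maps c to the active multipliers)
    where B stacks the active constraint rows. Degenerate sets are skipped."""
    m, n = A.shape
    rows = np.vstack([A, np.eye(n)])            # normals of all m+n constraints
    tables = []
    for size in range(n + 1):
        for S in combinations(range(m + n), size):
            B = rows[list(S)]
            if size == 0:
                tables.append((S, np.eye(n), np.zeros((0, n))))
                continue
            G = B @ B.T
            if np.linalg.matrix_rank(G) < size:  # B not full row rank: skip
                continue
            Ginv = np.linalg.inv(G)
            tables.append((S, np.eye(n) - B.T @ Ginv @ B, Ginv @ B))
    return tables

def resolve_qp(c, alpha, A, tables, tol=1e-9):
    """Online step: candidate KKT point x = -P c / (2*alpha), multipliers
    nu = M c; return the first candidate that is primal and dual feasible."""
    for S, P, M in tables:
        x = -(P @ c) / (2.0 * alpha)
        nu = M @ c
        if np.all(nu >= -tol) and np.all(A @ x >= -tol) and np.all(x >= -tol):
            return x
    return None
```

The number of candidate active sets grows combinatorially, so the scan dominates the online cost; a natural refinement would be to try the previously optimal active set first, in the spirit of a warm start.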

kchnkrml
  • Are you expecting the changes to the objective coefficients to be "small", meaning there is reasonable hope that the next optimum might be "near" the previous one, or will the objective changes from one iteration to the next be whimsical? – prubin Sep 25 '22 at 18:09
  • @prubin I cannot guarantee that the changes are small and/or the optimum of the next iteration will be "close", but nevertheless this will probably be the case for a lot of iterations (especially for later ones). A "warm-start" is therefore possible. Or are you suggesting some sort of Taylor approximation around the last known solution? – kchnkrml Sep 25 '22 at 18:36
  • Any chance you can get $A$ to be all positive or change it to $Ax=0$? Typically the analytic solutions require equality constraints. – Brannon Sep 26 '22 at 14:59
  • I was in fact thinking of the possibility of a warm start (using an interior point algorithm) ... but I don't know if any interior point solvers have warm start options. Some sort of gradient projection method might also work somewhat well as a heuristic. – prubin Sep 26 '22 at 16:10
  • @Brannon $A$ all positive is unfortunately not possible. $Ax = 0$ would be no problem if I could keep the non-negativity constraints (then I could just insert slacks), but that's probably not what you were thinking!? :( – kchnkrml Sep 26 '22 at 20:33
  • @prubin I fear that would still be comparatively slow (as in: compared to just solving it with a commercial interior point solver, the speed-up won't be that high)? I'm looking for something in the range of < 1 µs per solve with $m,n \approx 10$. – kchnkrml Sep 26 '22 at 20:58
  • You could be right about the relative speed up of a warm start (it's an empirical question). Good luck finding any exact method that will hit your target speed range. Are you interested in heuristics, or do you need a true optimum? – prubin Sep 26 '22 at 21:06
  • @prubin A heuristic could be viable, since there is the possibility of only doing an exact solve every $n$ iterations (currently I'm solving these with an IP solver with high tolerances, so they are not exact either). Is there any specific approach you have in mind? (I've never really dealt with any heuristic approaches, so I'm kind of lost there) – kchnkrml Sep 26 '22 at 21:31
  • As I mentioned, one possibility would be to do a few iterations of some version of gradient descent (maybe using a penalty or barrier function), starting from the previous solution, and hope for the best. – prubin Sep 26 '22 at 22:06
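For what it's worth, a minimal sketch of such a warm-started gradient scheme, with the $Ax \geq 0$ constraints handled by a quadratic penalty and $x \geq 0$ by clipping (penalty weight, step size and iteration count are arbitrary placeholders):

```python
import numpy as np

def warm_start_heuristic(c, alpha, A, x_prev, rho=1e3, step=1e-2, iters=50):
    """Heuristic re-solve: a few gradient steps on the penalized objective
    c'x + alpha*x'x + rho*||min(Ax, 0)||^2, started from the previous
    optimum, with x >= 0 enforced by clipping after each step."""
    x = np.maximum(x_prev.copy(), 0.0)
    for _ in range(iters):
        viol = np.minimum(A @ x, 0.0)                 # violated parts of Ax >= 0
        grad = c + 2.0 * alpha * x + 2.0 * rho * (A.T @ viol)
        x = np.maximum(x - step * grad, 0.0)          # project onto x >= 0
    return x

# e.g.: x_new = warm_start_heuristic(c_new, alpha_new, A, x_prev=x_old)
```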

0 Answers