
Consider the problem \begin{align*} &\min f(x)\\ & \ \text{s.t.} \ \ \ Ax = b \end{align*}

In this expository paper, Boyd claims (top of page $8$) that if:

  1. $\lambda^*$ is a dual optimal solution
  2. There is no duality gap
  3. $L(x, \lambda^*)$ has a unique minimizer $x^*$ (which can happen, for example, if $f$ is strongly convex)

then $x^*$ is primal optimal.

I've played around with the Lagrangian for some time, but I have not been able to show that $x^*$ is primal feasible, nor that $f(x^*)$ is the primal optimal value.

I am also wondering: if $L(x, \lambda^*)$ does not have a unique minimizer, must a primal optimal solution $x^*$ be among them?

Thank you in advance for any help.
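For concreteness, here is a small numerical sketch of the claim (my own illustration, with made-up data, not from Boyd's paper) on an equality-constrained strongly convex quadratic, where the Lagrangian minimizer can be written in closed form:

```python
import numpy as np

# Illustrative problem (data chosen arbitrarily):
#     min  0.5 x'Qx - c'x   s.t.  Ax = b,   with Q positive definite,
# so f is strongly convex and L(x, lam) = 0.5 x'Qx - c'x + lam'(Ax - b)
# has a unique minimizer for every lam.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

Qinv = np.linalg.inv(Q)

# Unique Lagrangian minimizer for fixed lam:  x(lam) = Q^{-1}(c - A'lam).
# The dual function is differentiable with gradient A x(lam) - b, so the
# dual optimum lam* satisfies A x(lam*) = b, a linear system in lam:
lam_star = np.linalg.solve(A @ Qinv @ A.T, A @ Qinv @ c - b)

# Recover the primal point as the minimizer of L(., lam*):
x_star = Qinv @ (c - A.T @ lam_star)

print(A @ x_star - b)                    # feasibility residual, ~0
print(Q @ x_star - c + A.T @ lam_star)   # stationarity residual, ~0
```

In this quadratic case the recovered $x^*$ is primal feasible and satisfies the KKT stationarity condition, matching the claim in the paper.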

user56202
  • Hint: "no duality gap" (i.e., the optimal values of the primal and dual problems are the same) is a crucial condition for the correctness of the statements you question. – Mark L. Stone May 18 '21 at 19:34
  • Related question: link(https://or.stackexchange.com/questions/1065/recovering-primal-optimal-solutions-from-dual-sub-gradient-ascent-using-ergodic) – batwing May 19 '21 at 13:59
  • @MarkL.Stone Thanks for encouraging me to solve this on my own. Does strong duality usually mean (A) there is no duality gap, or (B) there is no duality gap and the primal and dual optimal values are attained? In the second case, it's not too difficult. Let $\overline{x}$ be a primal optimal solution and $p^*, d^*$ the primal/dual optimal values. Then $$L(\overline{x}, \lambda^*) = f(\overline{x}) = p^* = d^* = \inf_x L(x, \lambda^*),$$ so $\overline{x}$ is a minimizer of $L(x, \lambda^*)$, and by uniqueness we must have $\overline{x} = x^*.$ – user56202 May 19 '21 at 14:07
  • @batwing Thank you I will check it out. – user56202 May 19 '21 at 14:08
  • Re: strong duality. I have heard the term "strong duality" used in the larger setting of (A). That is, some people would say that strong duality holds as long as the primal optimum equals the dual optimum, even if neither is attained. For more background on when the latter occurs, I can recommend the Mosek modelling cookbook: https://docs.mosek.com/modeling-cookbook/duality.html#weak-and-strong-duality – mtanneau May 21 '21 at 20:18
  • @mtanneau Thanks for your comment. In your experience, do most people use A or B? Also, if you did read the document, do you think the author is using B? – user56202 May 21 '21 at 20:26
  • In their textbook, Boyd and Vandenberghe define strong duality as (A) "no duality gap." Have you worked out what might happen in the case where the primal optimal value is not attained? – Brian Borchers May 23 '21 at 18:24
  • @BrianBorchers I have tried, but I was not able to prove that $x^*$ is primal optimal. – user56202 May 24 '21 at 14:16
  • Do you even know that $x^{*}$ exists in the case that the primal optimal value is not attained? – Brian Borchers May 24 '21 at 15:29
  • @BrianBorchers My intuition is that $x^*$ doesn't *have to* exist if the primal optimal value is not attained (or even if it is attained). But regardless, the claim is "**if** $x^*$ exists, ...". Could it be that since the primal problem is formulated using $\min$ instead of $\inf$, the authors are implicitly telling us that the primal optimal value is attained? – user56202 May 24 '21 at 17:22

1 Answer


A proof that $x^*$ is optimal can be obtained from a second-order Taylor expansion around $x^*$ (assuming $f$ is twice differentiable). For some point $\xi$ on the segment between $x^*$ and $x^*+p$, $$f(x^*+p) = f(x^*) + p^\top \nabla f(x^*) + \frac12 p^\top \nabla^2 f(\xi)\, p.$$

At an unconstrained minimizer $\nabla f(x^*) = 0$, and since $f$ is strongly convex the Hessian $\nabla^2 f(\xi)$ is positive definite (all eigenvalues positive). The quadratic term is therefore strictly positive for every $p \ne 0$, so $f(x^*+p) > f(x^*)$, which proves that $x^*$ is the global minimizer of $f$.

As to your question of how to prove that $x^*$ is feasible: I think you can prove it by contradiction. Assume $Ax^* \ne b$; then the KKT conditions (primal feasibility in particular) cannot hold at $x^*$ under that assumption.
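To see concretely how feasibility hinges on $\lambda^*$ being dual optimal, here is a small self-contained numerical check (my own illustration, with made-up data) on a strongly convex quadratic: the minimizer of $L(\cdot, \lambda)$ is feasible exactly when the dual gradient $Ax(\lambda) - b$ vanishes, i.e. at $\lambda = \lambda^*$.

```python
import numpy as np

# Illustrative strongly convex problem (data made up):
#     min  0.5 x'Qx - c'x   s.t.  Ax = b,   Q positive definite.
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
Qinv = np.linalg.inv(Q)

def lagrangian_minimizer(lam):
    """Unique minimizer of L(x, lam) = 0.5 x'Qx - c'x + lam'(Ax - b)."""
    return Qinv @ (c - A.T @ lam)

def feasibility_residual(lam):
    """A x(lam) - b; this is also the gradient of the dual function."""
    return A @ lagrangian_minimizer(lam) - b

# Dual optimal multiplier: the residual is affine in lam, so solve directly.
lam_star = np.linalg.solve(A @ Qinv @ A.T, A @ Qinv @ c - b)

print(feasibility_residual(lam_star))        # ~0: feasible at lam*
print(feasibility_residual(lam_star + 0.5))  # nonzero: infeasible off lam*
```

Perturbing $\lambda$ away from $\lambda^*$ makes the Lagrangian minimizer infeasible, which is consistent with the contradiction argument above.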

mathcomp guy
  • Thank you for your answer. I have not read it in detail yet, but $f$ being differentiable is not part of the assumptions. – user56202 May 21 '21 at 16:15
  • If the problem is a MIP, the proof still holds via the LP relaxation of $f()$. If the polyhedron of the LP relaxation, which contains the convex hull of the IP, is infeasible, then the MIP's convex hull is empty and the MIP itself is infeasible. In general you are guaranteed to achieve optimality on a MIP through branch and bound, which is guaranteed to find the global optimum solution. – mathcomp guy May 21 '21 at 16:20
  • @user56202 Differentiability is indeed one of the assumptions, otherwise these formulas are not defined, e.g. if the problem is a MIP or even if its functions are non-smooth (such as abs). – Nikos Kazazakis May 24 '21 at 09:36