
Could you please teach me when an optimization model with fractional terms in the objective function can be linearized or solved optimally?

I only know that if the objective function has a single fractional term with a linear numerator and denominator, it can be linearized. But I wonder whether it can still be linearized or de-fractioned when (i) there is a summation of fractional terms, or (ii) the numerator and/or denominator are quadratic.

RobPratt
  • Check this out: https://en.m.wikipedia.org/wiki/Linear-fractional_programming – Sutanu Majumdar Feb 21 '23 at 23:37
  • 2
    Sometimes such problems can be converted to second-order cone programming. See, for example, https://or.stackexchange.com/questions/8957/solving-maximization-problem-with-linear-fractional-sum – RobPratt Feb 21 '23 at 23:55
  • A convex quadratic divided by a positive linear (affine) denominator is convex and can be formulated with an SOCP constraint; and that includes a sum of such terms. That can be input using quad_over_lin in CVX or CVXPY. If the quadratic is concave, that can be handled by taking its negative and using quad_over_lin. – Mark L. Stone Feb 22 '23 at 01:30
  • @RobPratt Thank you so much. Do you know some valid references or any review papers categorizing variation of fractional models and the proven solution techniques. For example, I found these two: https://pubsonline.informs.org/doi/abs/10.1287/mnsc.13.7.492; https://onlinelibrary.wiley.com/doi/10.1002/nav.3800090303; – Reza Farahani Feb 22 '23 at 14:46
  • I have had great success using Dinkelbach's algorithm on some large MINLPs. A reference is https://www.sciencedirect.com/science/article/abs/pii/S0098135409001367. – Erwin Kalvelagen Apr 06 '23 at 18:39
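Dinkelbach's algorithm, mentioned in the last comment, replaces the fractional program $\min f(x)/g(x)$ with a sequence of parametric subproblems $\min f(x) - \lambda g(x)$, updating $\lambda$ to the current ratio until $F(\lambda) = 0$. A minimal sketch on a made-up linear-fractional toy instance (all coefficients are invented for illustration; for linear $f$, $g$ each subproblem is an LP, solved here with `scipy.optimize.linprog`):

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance (made up): minimize (c@x + a) / (d@x + b) over 0 <= x <= 1,
# with d@x + b > 0 everywhere on the feasible set.
c, a = np.array([2.0, 1.0]), 1.0
d, b = np.array([1.0, 1.0]), 2.0
bounds = [(0, 1), (0, 1)]

lam = 0.0  # initial guess for the optimal ratio
for _ in range(50):
    # Dinkelbach subproblem: minimize f(x) - lam * g(x); linear here, so an LP.
    # (Constant terms a - lam*b do not affect the argmin.)
    res = linprog(c - lam * d, bounds=bounds, method="highs")
    x = res.x
    f_val = c @ x + a
    g_val = d @ x + b
    if abs(f_val - lam * g_val) < 1e-10:  # F(lam) = 0  =>  lam is optimal
        break
    lam = f_val / g_val  # update the ratio estimate

print(lam)  # optimal value of the fractional objective; 0.5 for this instance
```

For this toy problem the optimum sits at $x = (0, 0)$ with ratio $1/2$, and the loop converges in two iterations. The same loop structure applies with MILP or convex subproblems, which is what makes the method attractive for the large MINLPs mentioned above.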

2 Answers

2

To supplement the answer by Nikos Kazazakis, I will add that it is always good modeling practice to squeeze as much convexity out of your model as possible, so as to reduce the degree of nonconvexity. Hence, for a rational objective contribution $\frac{f(x)}{g(x)}$ with polynomial numerator $f(x)$ and denominator $g(x)$, I suggest you start by reducing it using polynomial long division.

As an example, given $f(x) = ax^2 + bx + c$ and $g(x) = gx + h$, you find that $$ \frac{f(x)}{g(x)} = rx + s + \frac{q}{gx+h}, $$ for some constants $r$, $s$, and $q$. In this simple case, the complexity has been reduced from quadratic-over-linear to constant-over-linear, which is trivially representable using second-order cone programming on the convex part of the variable domain; see the modeling cookbook maintained by MOSEK for more details.
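The long-division step itself is mechanical; here is a small numeric sketch with made-up coefficients ($f(x) = 2x^2 + 3x + 5$, $g(x) = x + 1$) using `numpy.polydiv`:

```python
import numpy as np

# Made-up coefficients for illustration, highest degree first:
# f(x) = 2x^2 + 3x + 5,  g(x) = x + 1
f = np.array([2.0, 3.0, 5.0])
g = np.array([1.0, 1.0])

quotient, remainder = np.polydiv(f, g)
print(quotient, remainder)  # quotient = 2x + 1, remainder = 4
# i.e.  f(x)/g(x) = 2x + 1 + 4/(x + 1),  so r = 2, s = 1, q = 4.

# Numerical check of the identity f(x)/g(x) = (rx + s) + q/g(x) at x = 3
x = 3.0
lhs = np.polyval(f, x) / np.polyval(g, x)
rhs = np.polyval(quotient, x) + np.polyval(remainder, x) / np.polyval(g, x)
assert abs(lhs - rhs) < 1e-9
```

The linear part $rx + s$ folds directly into the linear objective, and only the remainder term $q/(gx+h)$ needs a conic (or other nonlinear) treatment.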

0

There are various ways to formulate this depending on context. For example, in the general case any program of the form $\min \frac{f(x)}{g(x)}$ can be reformulated as:

$\min w\\\text{s.t.}\\w=\frac{f(x)}{g(x)}$

This can then be written as:

$\min w \\\text{s.t.}\\g(x)w=f(x)\\|g(x)|\geq \epsilon$

This can then be relaxed and solved with branch and bound. If we have more fractional terms, we simply add more auxiliary $w_i$ variables and constraints to match. In this example, if the range of $g(x)$ initially includes 0, we have to give something up by adding a finite tolerance $\epsilon$, so this cannot be an exact reformulation.
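As a minimal sketch of the lifted form, here is a made-up toy instance, $\min (x^2+1)/(x+2)$ on $[0,3]$, solved locally via the constraint $w\,g(x) = f(x)$ with `scipy.optimize.minimize` (SLSQP). On this instance $g(x) = x + 2 \geq 2$ on the feasible set, so the $|g(x)| \geq \epsilon$ guard is automatically satisfied:

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (made up): minimize f(x)/g(x) = (x^2 + 1)/(x + 2) on [0, 3].
# Lifted variables z = [x, w]; we minimize w subject to w * g(x) = f(x).
def objective(z):
    return z[1]

def ratio_constraint(z):  # w * g(x) - f(x) = 0
    x, w = z
    return w * (x + 2.0) - (x**2 + 1.0)

res = minimize(objective, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0.0, 3.0), (None, None)],
               constraints=[{"type": "eq", "fun": ratio_constraint}])
print(res.x[1])  # optimal ratio, analytically 2*sqrt(5) - 4 for this instance
```

Note that SLSQP only gives a local solution; the bilinear constraint $w\,g(x) = f(x)$ is nonconvex in general, which is why the branch-and-bound relaxation above is needed for a global guarantee.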

This is typically good enough, because it is bad modelling to allow a model to take on meaningful values close to singularities in the first place.

Global optimization solvers will do all of this automatically.

If one is feeling cheeky, they might even decide the bilinear constraint alone is fine and skip the absolute-value constraint, if that happens to work out in hindsight, even though that's not mathematically correct. For instance, we can solve without the $|g(x)|\geq\epsilon$ constraint and check whether the solution actually has $g(x)=0$. If it doesn't, sweet, we're done.

SecretAgentMan
Nikos Kazazakis