
I solve a nonlinear optimization problem of the form

\begin{align} &\max x_0 \text{ such that } \\ &\left[ \sum_{j=0}^N \left(\alpha_j x_0^{j} \prod_{k=3}^j x_k \right)\right]^2 + \left[ \sum_{j=1}^N \left(\beta_j x_0^{j} \prod_{k=3}^j x_k \right)\right]^2 \overset{!}{\leq} 1 \end{align}

where I know relatively tight bounds on the optimization variables $\boldsymbol x$, somewhere between $0.1\%$ and $30\%$ depending on the component. These stem from solutions for a different value of $N$: I observe that when increasing the dimensionality $N \to N+1$, the values change only marginally (this is where the bounds come from). However, the optimizer I am currently using (IpOpt) takes quite some time even for relatively small allowed deviations.
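For concreteness, here is a minimal sketch of such a model, assuming Pyomo as the modeling layer for IpOpt; the coefficients, the previous solution, and the bound width below are placeholder assumptions, not the actual data:

```python
import pyomo.environ as pyo

# --- placeholder data (assumptions, not the actual problem data) ---
N = 10
alpha = {j: 0.1 / (j + 1) for j in range(N + 1)}
beta = {j: 0.1 / (j + 2) for j in range(1, N + 1)}
x_prev = {i: 0.5 for i in range(N + 1)}   # solution for the previous N
rel = 0.05                                 # e.g. 5% allowed deviation

m = pyo.ConcreteModel()
m.I = pyo.RangeSet(0, N)
# warm start at the previous solution, with tight component-wise bounds
m.x = pyo.Var(m.I,
              initialize=lambda m, i: x_prev[i],
              bounds=lambda m, i: (x_prev[i] * (1 - rel),
                                   x_prev[i] * (1 + rel)))

def xpow(m, j):
    # x_0^j, with x_0^0 = 1 handled explicitly
    return m.x[0] ** j if j > 0 else 1.0

def prod_term(m, j):
    # the long product x_3 * ... * x_j (empty product = 1 for j < 3)
    p = 1.0
    for k in range(3, j + 1):
        p = p * m.x[k]
    return p

m.obj = pyo.Objective(expr=m.x[0], sense=pyo.maximize)
m.con = pyo.Constraint(expr=(
    sum(alpha[j] * xpow(m, j) * prod_term(m, j) for j in range(N + 1)) ** 2
    + sum(beta[j] * xpow(m, j) * prod_term(m, j) for j in range(1, N + 1)) ** 2
    <= 1))

pyo.SolverFactory('ipopt').solve(m, tee=True)
```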

Thus my question: is there a well-known way to treat such long products of optimization variables, or are these problems just notoriously difficult?

I tried fixing some of the variables, but the constraint is very sensitive and even small deviations matter (this is probably also why the problem is hard). With variables fixed, the optimization was often declared infeasible.
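In the hypothetical Pyomo sketch above, fixing a subset would be e.g.:

```python
# fix an arbitrary example subset at the previous solution; the solver
# then treats these components as constants
for k in (5, 6, 7):
    m.x[k].fix(x_prev[k])
```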

Dan Doe

1 Answer


You are repeating the same expressions over and over again. It may be worthwhile to see if it is better to define: $$y_j = \begin{cases}y_{j-1}\cdot x_j & \text{for $j\gt 3$}\\ x_j & \text{for $j=3$}\end{cases} $$

and then use $y_j$ in this constraint. No guarantee it is better; could well be much worse as we add variables and equations.
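In terms of the hypothetical Pyomo sketch from the question, this reformulation could look as follows; each $y_j$ carries the running product and is linked by one extra equality constraint:

```python
m.con.deactivate()  # the reformulated constraint below replaces it

m.J = pyo.RangeSet(3, N)
# y[j] accumulates x_3 * ... * x_j; initialized from the previous solution
m.y = pyo.Var(m.J, initialize=lambda m, j: 0.5 ** (j - 2))

def link_rule(m, j):
    # y[3] = x[3], and y[j] = y[j-1] * x[j] for j > 3
    if j == 3:
        return m.y[3] == m.x[3]
    return m.y[j] == m.y[j - 1] * m.x[j]
m.link = pyo.Constraint(m.J, rule=link_rule)

def yprod(m, j):
    # y[j] stands in for the product; the empty product (j < 3) is 1
    return m.y[j] if j >= 3 else 1.0

m.con2 = pyo.Constraint(expr=(
    sum(alpha[j] * xpow(m, j) * yprod(m, j) for j in range(N + 1)) ** 2
    + sum(beta[j] * xpow(m, j) * yprod(m, j) for j in range(1, N + 1)) ** 2
    <= 1))
```

Each product now appears once instead of being re-expanded in every term, at the price of $N-2$ extra variables and equality constraints.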

Erwin Kalvelagen
    Good point, this is actually where I come from. In that case, however, the later $y_j$ become extremely small and one essentially runs out of precision. In the presented form, all optimization variables are of roughly the same magnitude and the problem is numerically stable, though more expensive. – Dan Doe Jun 29 '22 at 12:34
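One standard remedy for that underflow (a general scaling idea, not something proposed in the thread) is to scale each $y_j$ by its nominal value from the previous solution, so the solver works with $O(1)$ quantities; a sketch reusing the hypothetical names from above:

```python
# s[j]: nominal value of the running product, taken from x_prev
s, p = {}, 1.0
for j in range(3, N + 1):
    p *= x_prev[j]
    s[j] = p

# z[j] ~ y[j] / s[j] stays near 1 even when y[j] underflows toward zero
m.z = pyo.Var(m.J, initialize=1.0)

def scaled_link_rule(m, j):
    # z[j] = z[j-1] * (x[j] / x_prev[j]); every factor is close to 1
    if j == 3:
        return m.z[3] == m.x[3] / x_prev[3]
    return m.z[j] == m.z[j - 1] * (m.x[j] / x_prev[j])
m.scaled_link = pyo.Constraint(m.J, rule=scaled_link_rule)

# use s[j] * m.z[j] wherever y[j] appeared in the constraint
```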