The range that the optimal value needs to avoid is modeled as an infeasible region by defining it through constraints, but how can I simultaneously optimize so that the optimal value is as far from this range as possible?
-
I am not sure what you want to achieve. If you want an objective function value, say $z^*$, as far from a range $[a,b]$ as possible, you should first minimize your objective and measure the distance $z^{\min}=a-z^*$, and then maximize the same objective and measure $z^{\max}=z^*-b$. Then pick the best one. – Sune Feb 23 '22 at 16:18
-
What is your definition of "optimal value"? Are you just trying to get as far as possible from the forbidden range, without any concern for direction (meaning you do not care whether you are below or above the forbidden range)? – prubin Feb 23 '22 at 17:19
-
Yeah, staying away from the range in either direction is fine, I think. Are there any standard techniques that suit this better? The range is known and given. – almostKapil Feb 23 '22 at 19:13
-
@Sune could you elaborate more on "first minimize $z^*$ and measure the distance, then maximize $z^*$ and measure $z^{\max}$"? Is it a generally well-known method, and what is it called? – almostKapil Feb 23 '22 at 22:11
-
@prubin the optimal value is any value that is not in the range we would like to avoid. A simulation gives us a range of output values, and we do not want them to fall in the forbidden range; in fact, we want them to be as far away from it as possible. The idea is to change the parameters of the simulation so that no output lies in the forbidden range. – almostKapil Feb 23 '22 at 22:13
2 Answers
Expanding on @Sune's approach, you could do it in one step by introducing a binary variable $y \in \{0,1\}$ that takes value $1$ if and only if $f(x) < a$.
Your objective function becomes: $$ \max \; |f(x)-ay - b(1-y) | $$
And you activate $y$ with big-M constraints such as: \begin{align*} b + \epsilon - My \le f(x) \le a - \epsilon + M(1-y) \end{align*}
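As a quick numeric sanity check of this formulation (not part of the original answer), here is a brute-force sketch with an illustrative $f(x)=x$ over $x \in [0,10]$ and assumed values $[a,b]=[2,5]$, $M=100$, $\epsilon=10^{-3}$; a real model would hand the same constraints to a MILP solver after linearizing the absolute value:

```python
# Assumed data: f(x) = x, x in [0, 10], forbidden interval [a, b] = [2, 5].
a, b, M, eps = 2.0, 5.0, 100.0, 1e-3

best = None
for y in (0, 1):                  # enumerate the binary variable
    for i in range(10001):        # grid over x in [0, 10], step 0.001
        x = i / 1000.0
        # big-M activation: y = 1 forces x <= a - eps, y = 0 forces x >= b + eps
        if not (b + eps - M * y <= x <= a - eps + M * (1 - y)):
            continue
        # objective |f(x) - a*y - b*(1 - y)|: distance to the near endpoint
        d = abs(x - a * y - b * (1 - y))
        if best is None or d > best[0]:
            best = (d, x, y)

print(best)  # (5.0, 10.0, 0): going above b reaches distance 5, below a only 2
```

Staying below $a$ can achieve at most distance $2$ (at $x=0$), while going above $b$ achieves distance $5$ (at $x=10$), so the model picks $y=0$.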
-
If $y=0$, then $b<f(x)$, and if $y=1$, then $f(x)<a$. So how do you take care of the situation where $f(x) \in[a,b]$ for all feasible $x$? – Sune Feb 24 '22 at 19:34
-
I considered that this is infeasible (« the range that optimal value needs to avoid is taken as infeasible ») – Kuifje Feb 24 '22 at 20:29
-
I will expand on the comment I left on the question:
Suppose you have a function $f(x)$ whose value should be as far as possible from a known and given interval $[a,b]$. I will here define the distance $D(x)$ from $f(x)$ to the interval as $D(x)=\min\{\vert a-f(x)\vert,\vert b-f(x)\vert\}$ if $f(x)\not\in[a,b]$ and $0$ otherwise. So if $f(x)$ is smaller than $a$, then the distance to the interval is the absolute difference between $a$ and $f(x)$. If $f(x)$ is larger than $b$, it is the absolute difference between $b$ and $f(x)$. If $f(x)\in[a,b]$, we simply say that the distance to the interval is zero.
Let's assume that there are some other constraints as well that define the set of feasible solutions. We call the set of feasible solutions $\mathcal{X}$. Then, if you want to find a feasible solution $x^*$ that maximizes $D(x)$, it could either be the case that $f(x)$ can be moved furthest away from the interval below $a$ or above $b$. To check this, you may first minimize the value of $f(x)$ over $\mathcal{X}$ to see how far below $a$ you can go: \begin{equation} x^{\min}\in\arg\min\{f(x):x\in \mathcal{X}\} \end{equation} Next, you check how far above $b$ you can move $f(x)$ (if at all) by solving the optimization problem \begin{equation} x^{\max}\in\arg\max\{f(x):x\in \mathcal{X}\} \end{equation} Then, you simply pick the $x\in\{x^{\min},x^{\max}\}$ that maximizes your distance to the interval.
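The two solves above can be sketched as follows. This is a minimal illustration with assumed data: a monotone $f(x)=3x-1$ over the box $\mathcal{X}=[0,2]$ and interval $[a,b]=[1,4]$; in a real model, each arg-min/arg-max would be a solver call rather than an endpoint comparison:

```python
a, b = 1.0, 4.0          # the known and given interval to stay away from
x_lo, x_hi = 0.0, 2.0    # feasible box X = [0, 2] (assumed)

def f(x):
    return 3 * x - 1     # illustrative objective, monotone on X

# For a monotone f on an interval, the extrema sit at the endpoints;
# in general these two lines are the two optimization solves.
x_min = min((x_lo, x_hi), key=f)   # argmin of f over X
x_max = max((x_lo, x_hi), key=f)   # argmax of f over X

def dist(x):
    """D(x): distance from f(x) to [a, b], zero inside the interval."""
    v = f(x)
    if v < a:
        return a - v
    if v > b:
        return v - b
    return 0.0

# Pick whichever of the two candidates is farther from the interval.
x_star = max((x_min, x_max), key=dist)
print(x_star, f(x_star), dist(x_star))  # 0.0 -1.0 2.0
```

Here $f(x^{\min})=-1$ is distance $2$ below $a$, while $f(x^{\max})=5$ is only distance $1$ above $b$, so the procedure picks $x^{\min}$.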
-
I am confused about why you would define $D(x)=\min(\dots)$ and then later maximize it. Shouldn't it be $\max D(x)$ in either direction, and then just choose the farthest one? – almostKapil Feb 27 '22 at 16:45