
I have a finite probability distribution $p_1, p_2, \ldots, p_n$ (but these are variables of an optimization problem). Moreover, we have monotonicity, $p_1 \geq p_2 \geq \cdots \geq p_n$.

Assume we have already constrained at least one of the $p_i$ values to be zero. I want to keep track of where the zeros start.

An obvious approach is to introduce a new binary variable $b_i$ for all $i=1,\ldots,n$ and say:

\begin{cases} b_i \geq p_i \\ M\cdot p_i \geq b_i \end{cases} and this system ensures $p_i = 0 \iff b_i = 0$ and $p_i > 0 \iff b_i = 1$.

The issue is that I cannot define the big-M, because the smallest non-zero probability can be arbitrarily small. Take $M = 10^8$ for example, and $p_i = 10^{-10}$. Then the second constraint above gives $M \cdot p_i = 10^{-2} < 1$, hence it forces $b_i = 0$. Therefore the system above implicitly constrains each $p_i$ to be either $0$ or at least $10^{-8} = 1/M$, for all $i = 1,\ldots, n$.
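As a quick numeric illustration of this pitfall (plain Python, with the same illustrative values as above):

```python
M = 1e8    # big-M constant
p = 1e-10  # a tiny but strictly positive probability

# Linking constraints: b >= p and M*p >= b, with b binary.
# For b = 1 to be feasible we need M*p >= 1, i.e. p >= 1/M.
print(M * p)       # about 0.01, so the constraint M*p >= b forces b = 0
print(M * p >= 1)  # False: b = 1 is infeasible for this p
print(p >= 1 / M)  # False: p sits below the implicit lower bound 1/M
```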

I don't want this. Is there any alternative to this approach that avoids $M$? I thought monotonicity might help: I can consider the partial sums of the $p_i$'s, and once a partial sum reaches $1$, the zeros have started.
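The partial-sum observation can be checked outside the solver; a minimal sketch in plain Python (the function name and tolerance are illustrative, not part of the model):

```python
def zeros_start(p, tol=1e-9):
    """Given p sorted in nonincreasing order and summing to 1,
    return the first index i (0-based) with p[i] = 0, or len(p)
    if all entries are positive. By monotonicity, once the prefix
    sum reaches 1, every remaining entry must be zero."""
    total = 0.0
    for i, pi in enumerate(p):
        if total >= 1 - tol:  # prefix already sums to 1
            return i
        total += pi
    return len(p)

print(zeros_start([0.5, 0.3, 0.2, 0.0, 0.0]))  # 3
print(zeros_start([1.0, 0.0, 0.0]))            # 1
```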

Note: To overcome this, it sometimes helps to drop the big-M constraint, keep the first one, and minimize the sum of the $b_i$'s. However, I cannot minimize the sum of the $b_i$'s here, because the problem already has another objective that is a function of the $p_i$'s, and I will use the $b_i$'s to add further constraints on $p$.

independentvariable
  • What are you going to do with these $b_i$ variables elsewhere in the model? – RobPratt Mar 23 '22 at 02:13
  • @RobPratt thank you for your question! I will basically say that the very last two positive probabilities must sum to something less than, say, $0.1$.

    For example, if we know that $p_i = 0$ starts when $i = n-1$, I want to say $p_{n-2} + p_{n-3} \leq 0.1$.

    – independentvariable Mar 23 '22 at 02:19

1 Answer


You are going to bump into a limitation of finite-precision floating-point arithmetic. Basically, a small enough probability value will be indistinguishable from rounding error. Assuming you are using a MIP solver, the solver will have a tolerance value $\epsilon > 0$ such that any constraint whose left-hand side evaluates to within $\epsilon$ of the right-hand side is considered satisfied. The same is true for sign restrictions (possibly with a different $\epsilon$). You may be able to set the value of $\epsilon,$ but you may not be allowed to set it to 0 (and in any case setting it to 0 is not advisable).

So you can change your second constraint to $p_i \ge \epsilon \cdot b_i.$ While it still may artificially exclude tiny probabilities, at least you will know that you are making the minimum possible restriction to the feasible region.
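A quick sanity check of the $\epsilon$-linked system in plain Python (the function name and the value of $\epsilon$ are illustrative):

```python
def links_ok(p, b, eps=1e-6):
    """Check the linking constraints b_i >= p_i and p_i >= eps * b_i
    for binary b_i. Together they enforce: b_i = 0 forces p_i = 0
    (via b_i >= p_i >= 0), and b_i = 1 forces p_i >= eps."""
    return all(bi >= pi and pi >= eps * bi for pi, bi in zip(p, b))

print(links_ok([0.7, 0.3, 0.0], [1, 1, 0]))   # True
print(links_ok([0.7, 1e-9, 0.0], [1, 1, 0]))  # False: 1e-9 < eps, so b = 1 is cut off
```

The second call shows the remaining (but now explicit and minimal) restriction: probabilities in $(0, \epsilon)$ are still excluded, just as the answer states.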

prubin