1

Let

  • $P_{t,u}$, $t=1,2,\ldots,T$, $u=1,2,\ldots,U$, be known values,
  • $\alpha$ be a known parameter,
  • $X_{t,u}$ be optimization variables.

I have the following constraint: IF $P_{t,u}\geq\alpha$, THEN $X_{t,u}=1$ and $X_{t',u}=0$ for all $t'\neq t$.

How can I linearize this constraint?

RobPratt
KGM

3 Answers

5

Because $P_{t,u}$ and $\alpha$ are known constants (not decision variables), no linearization is needed. In a modeling language, it would look like this:

con Mycon1 {t in 1..T, u in 1..U: P[t,u] >= alpha}:
   X[t,u] = 1;
con Mycon2 {t in 1..T, u in 1..U, tp in 1..T diff {t}: P[t,u] >= alpha}:
   X[tp,u] = 0;

Some languages support this equivalent form with a single constraint declaration:

con Mycon {t in 1..T, u in 1..U, tp in 1..T: P[t,u] >= alpha}:
   X[tp,u] = (if tp = t then 1 else 0);

Even more compact:

con Mycon {t in 1..T, u in 1..U, tp in 1..T: P[t,u] >= alpha}:
   X[tp,u] = (tp = t);

Some languages also support a FIX statement for equality constraints with one variable:

for {t in 1..T, u in 1..U, tp in 1..T: P[t,u] >= alpha}
   fix X[tp,u] = (tp = t);
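For comparison, here is a hedged sketch of the same data-driven fixing with a Python API such as gurobipy. The values of T, U, alpha, and P below are hypothetical placeholders, introduced only so the sketch is self-contained:

import gurobipy as gp
from gurobipy import GRB

# Hypothetical example data, just to make the sketch runnable.
T, U, alpha = 3, 2, 10.0
P = {(0, 0): 12.0, (1, 0): 3.0, (2, 0): 7.0,
     (0, 1): 1.0, (1, 1): 15.0, (2, 1): 2.0}

model = gp.Model("conditional_fix")
X = model.addVars(range(T), range(U), vtype=GRB.BINARY, name="X")

# Because P and alpha are data, the IF-condition is evaluated while the model is
# built, so only ordinary equality constraints are ever added to the solver.
for u in range(U):
    for t in range(T):
        if P[t, u] >= alpha:
            model.addConstr(X[t, u] == 1)                      # analogue of Mycon1
            model.addConstrs(X[tp, u] == 0                     # analogue of Mycon2
                             for tp in range(T) if tp != t)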

An alternative to declaring binary variables and then forcing them to 0 is to use a sparse index set, as demonstrated here.
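As a hedged sketch of that sparse-index idea, continuing the hypothetical gurobipy names from the sketch above, one could create binary variables only for the pairs the data leaves undecided:

# Columns u already decided by the data (assumes at most one qualifying t per u).
decided = {u: t for t in range(T) for u in range(U) if P[t, u] >= alpha}
free_pairs = [(t, u) for t in range(T) for u in range(U) if u not in decided]

# Binary variables exist only over the sparse index set of undecided pairs.
X_free = model.addVars(free_pairs, vtype=GRB.BINARY, name="X_free")

def x_value(t, u):
    """Return the constant 0 or 1 implied by the data, or the free variable."""
    if u in decided:
        return 1 if t == decided[u] else 0
    return X_free[t, u]

Any later constraint or objective term can then be written with x_value(t, u), so the forced zeros and ones never become explicit variables in the model.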

RobPratt
2

If I understood the question correctly, you don't even need a constraint. You can simply set the lower and upper bounds of your variables $X_{t,u}$ based on the parameters. For instance:

x[t,u] = model.addVar(lb = 1 if P[t,u] >= alpha else 0,
                      ub = 1 if P[t,u] >= alpha else 0,
                      vtype="B")

If there isn't a $t$ for every $u$ that fulfills $P_{t,u} \geq \alpha$, you can define a new set $K := \{(t,u) \mid P_{t,u} \geq \alpha,\; t = 1,\ldots,T,\; u = 1,\ldots,U\}$ and set the bounds according to that set.
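A minimal sketch of one way to implement that set-based variant, assuming a gurobipy-style model object as in the snippet above and the same known data P, alpha, T, U:

# K collects the (t, u) pairs whose data already decides the variable.
K = {(t, u) for t in range(T) for u in range(U) if P[t, u] >= alpha}

x = {}
for t in range(T):
    for u in range(U):
        if (t, u) in K:
            # Both bounds equal 1: the variable is fixed to 1.
            x[t, u] = model.addVar(lb=1, ub=1, vtype="B", name=f"x_{t}_{u}")
        elif any((tp, u) in K for tp in range(T)):
            # Some other t' decides column u, so this variable is fixed to 0.
            x[t, u] = model.addVar(lb=0, ub=0, vtype="B", name=f"x_{t}_{u}")
        else:
            # No t meets the threshold for this u: leave the variable free.
            x[t, u] = model.addVar(vtype="B", name=f"x_{t}_{u}")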

0

It is possible to translate the following constraint into a linear programming formulation:

If $P_{t,u} \geq \alpha$, then $x_{t,u} = 1$ and $x_{t',u} = 0$ for all $t' = 1,2,\ldots,T$ with $t' \neq t$. Let us introduce $T \cdot U$ Boolean variables $x_{t,u}$.

Recall that $P_{t,u} \cdot \alpha^{-1} = P_{t,u} \cdot \frac{1}{\alpha} \geq 1$ if and only if $P_{t,u} \geq \alpha$ (assuming $\alpha > 0$). So the generic constraint

$x_{t,u} \geq P_{t,u} \, \alpha^{-1}$

forces $x_{t,u} = 1$ whenever $P_{t,u} \geq \alpha$, which answers our problem.

Now we want to assign the value zero to every remaining variable; it is sufficient to introduce the following constraint:

$ \sum_{t=1}^T x_{t,u} = 1 $

In general, we introduce the following constraints as the feasible region:

$\left\{ \begin{array}{l} x_{1,1} \geq P_{1,1} \, \alpha^{-1} \\ x_{2,1} \geq P_{2,1} \, \alpha^{-1} \\ \vdots \\ x_{T,1} \geq P_{T,1} \, \alpha^{-1} \\ \sum_{t=1}^T x_{t,1} = 1 \\ \vdots \\ x_{1,U} \geq P_{1,U} \, \alpha^{-1} \\ x_{2,U} \geq P_{2,U} \, \alpha^{-1} \\ \vdots \\ x_{T,U} \geq P_{T,U} \, \alpha^{-1} \\ \sum_{t=1}^T x_{t,U} = 1 \\ x_{t,u} \ \text{Boolean} \\ \end{array} \right.$

marco tognoli
  • If $P_{t,u}>0$ for all $t,u$ and $\alpha>0$, then your inequality constraints $x_{t,u} \ge P_{t,u} \alpha^{-1}>0$ would imply that $x_{t,u}=1$ for all $t,u$. – RobPratt Sep 16 '20 at 20:57
  • @RobPratt, you are right! We can also introduce $x_{t,u} \cdot (P_{t,u} \alpha^{-1} - 1) \geq 0$ in order to force $x_{t,u} = 0$ whenever $P_{t,u} < \alpha$. – marco tognoli Sep 18 '20 at 15:33
  • My point is that your proposed constraints are too strong. – RobPratt Sep 18 '20 at 15:37