
This is a follow-up post regarding this one. I deleted this new post once before, as I was unhappy with the formulation. I have the following basic nurse scheduling MILP, which tries to cover the daily demand $demand_{ts}$.

$$\min \sum_t \sum_s slack_{ts}$$ $$\text{subject to } \sum_i x_{its} + slack_{ts} = demand_{ts} \quad \forall t,s$$ $$\sum_s x_{its} \le 1 \quad \forall i,t$$ $$x_{its} \in \{0,1\} \quad \forall i,t,s$$ $$slack_{ts} \ge 0 \quad \forall t,s$$
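To make the notation concrete, here is a small pure-Python sketch (toy numbers, not part of my SAS model below) that scores a fixed roster: $slack_{ts}$ picks up whatever demand the assigned nurses do not cover.

```python
# Illustrative only: evaluate the basic model's objective for a fixed roster.
# x[i, t, s] = 1 if nurse i works shift s on day t; demand[t, s] is given.
demand = {(1, 1): 2, (1, 2): 1}                    # toy demand values
x = {(1, 1, 1): 1, (2, 1, 1): 1, (1, 1, 2): 0}     # toy roster

def objective(x, demand, nurses=(1, 2)):
    total = 0
    for (t, s), d in demand.items():
        covered = sum(x.get((i, t, s), 0) for i in nurses)
        # In the MILP the demand row is an equality with slack >= 0, which
        # forbids over-coverage; max(..., 0) is a simplification for scoring.
        total += max(d - covered, 0)
    return total

print(objective(x, demand))  # shift (1,1) fully covered, (1,2) uncovered -> 1
```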

After applying Dantzig-Wolfe decomposition, this yields the following master problem $(MP)$:

$$\text{minimize} \sum_t \sum_s \text{s}_{ts}$$ $$\text{subject to} \sum_i \sum_r x_{its}^r \lambda_{ir} + \text{s}_{ts} = d_{ts} \forall t,s $$ $$\sum_r \lambda_{ir} = 1 \forall i $$ $$\lambda_{ir} \in\mathbb{Z}^+ \forall i,r$$ $$\text{s}_{ts} \ge 0 \forall t,s$$

and subproblem $(SP(i))$:

$$\text{minimize } -\sum_{t,s} \pi_{ts} x_{its} - \mu_i $$ $$\text{subject to} \sum_s x_{its} \le 1 \forall t$$ $$x_{its} \in\left\{ 0,1 \right\} \forall t,s$$
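This subproblem can be solved by inspection: the only constraint is "at most one shift per day", so for each day $t$ nurse $i$ works the shift with the largest dual $\pi_{ts}$, and only if that dual is positive. A hedged Python sketch (the function name `solve_subproblem` and the toy duals are my own, for illustration):

```python
# Solve SP(i) in closed form, given duals pi[t, s] and the convexity dual mu_i.
def solve_subproblem(pi, mu_i):
    """pi: dict {(t, s): dual value}; returns (x, reduced cost of the column)."""
    days = {t for (t, s) in pi}
    x = {}
    value = 0.0
    for t in days:
        s_best = max((s for (tt, s) in pi if tt == t), key=lambda s: pi[t, s])
        if pi[t, s_best] > 0:        # working this shift decreases the objective
            x[t, s_best] = 1
            value -= pi[t, s_best]
    return x, value - mu_i           # negative => column improves the master

pi = {(1, 1): 0.5, (1, 2): 1.0, (2, 1): -0.2, (2, 2): 0.0}
x, rc = solve_subproblem(pi, mu_i=0.3)
print(x, rc)   # day 1: shift 2 is worked; day 2: no positive dual, stay home
```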

So far so good. Now, I want to incorporate individual motivation ($motivation_{its}$), which can be seen as the performance during each shift. $motivation_{its}$ is influenced by the daily mood $mood_{it}$; if it is smaller than one, there is more $slack_{ts}$. This motivation should now appear in the demand constraint (instead of $x_{its}$). The new (full) problem looks like this:

$$\text{minimize} \sum_t \sum_s slack_{ts} $$ $$\text{subject to} \sum_{i} motivation_{its} + slack_{ts} = demand_{ts} \quad \forall t,s$$ $$mood_{it} + M\cdot (1-x_{its}) \geq motivation_{its} \geq mood_{it} - M\cdot (1-x_{its})\quad \forall i,t,s$$ $$motivation_{its} \leq x_{its}\quad \forall i,t,s$$ $$mood_{it}=1-\alpha_{it}\cdot \sum_s x_{its}\quad \forall i,t$$ $$\alpha_{it}\sim U(0,1)\quad \forall i,t$$ $$x_{its}\in \{0,1\}\quad \forall i,t,s$$ $$mood_{it}, motivation_{its}\in[0,1]\quad \forall i,t,s$$
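A quick numeric sanity check (illustrative Python, not part of the model files) of what the big-M pair enforces. Since $mood$ and $motivation$ live in $[0,1]$, $M = 1$ is already large enough:

```python
# x = 1  ->  both big-M inequalities collapse to motivation = mood
# x = 0  ->  motivation <= x forces motivation = 0 (with motivation >= 0)
M = 1.0

def feasible(motivation, mood, x):
    return (mood - M * (1 - x) <= motivation <= mood + M * (1 - x)
            and motivation <= x)

assert feasible(0.7, 0.7, 1)        # on shift: motivation pinned to mood
assert not feasible(0.5, 0.7, 1)    # on shift: any other value is cut off
assert feasible(0.0, 0.7, 0)        # off shift: motivation must be 0
assert not feasible(0.3, 0.7, 0)
print("big-M linearization behaves as intended")
```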

Now I have the following question. Can I still include only the demand constraint in the MP and move the other new constraints to the SP(i), or is that not possible because they are "linked"? I am especially unsure about the initialization of the CG, where the SP(i) has not yet been solved, so no values for $mood_{it}$ (and therefore none for $motivation_{its}$) are available. How do I have to adapt my CG model so that only the demand constraint remains in the MP and the rest goes into the SP(i)?

SAS Code:

proc optmodel;
   /* declare sets and parameters */
   set ISET = {1, 2, 3};
   set TSET = {1, 2, 3, 4, 5, 6, 7};
   set SSET = {1, 2, 3};
   num demand {TSET, SSET} = [2 1 0  1 2 0  1 1 1  1 2 0  2 0 1  1 1 1  0 3 0];
   num alpha {ISET, TSET};

   /* generate random values for alpha (seed for reproducibility) */
   call streaminit(123);
   for {i in ISET, t in TSET} alpha[i,t] = rand("Uniform");

   /* declare decision variables */
   var motivation {ISET, TSET, SSET} >= 0 <= 1;
   var slack {TSET, SSET} >= 0;
   var mood {ISET, TSET} >= 0 <= 1;
   var x {ISET, TSET, SSET} binary;

   /* declare objective */
   minimize z = sum {t in TSET, s in SSET} slack[t,s];

   /* declare constraints */
   con SatisfyDemand {t in TSET, s in SSET}:
      sum {i in ISET} motivation[i,t,s] + slack[t,s] = demand[t,s];

   con Indicator {i in ISET, t in TSET, s in SSET}:
      x[i,t,s] = 1 implies motivation[i,t,s] = mood[i,t] suffixes=(block=i);

   con MotivationImpliesX {i in ISET, t in TSET, s in SSET}:
      motivation[i,t,s] <= x[i,t,s] suffixes=(block=i);

   con AlphaMood {i in ISET, t in TSET}:
      alpha[i,t] * sum {s in SSET} x[i,t,s] + mood[i,t] = 1 suffixes=(block=i);

   /* call MILP solver with Dantzig-Wolfe decomposition algorithm */
   solve with milp / decomp;

   /* write output data here */
quit;

nflgreaternba
  • Please clarify whether $\text{mood}_{it}$ is an input parameter or a decision variable. Does $U(0,1)$ mean a standard uniform random variable? – RobPratt Feb 02 '24 at 20:53
  • I just updated my post, as I forgot to add something. Yes, $U(0,1)$ is a standard uniform random variable – nflgreaternba Feb 02 '24 at 21:07
  • Thank you for the update, but I still don't understand whether mood is an input parameter or a decision variable. And is motivation a nonnegative decision variable? And is $x$ still a binary decision variable? The full problem needs to be completely specified before attempting column generation. – RobPratt Feb 02 '24 at 21:16
  • $x$ is still binary and $mood$ is a decision variable – nflgreaternba Feb 02 '24 at 21:53

1 Answer


Your problem is: \begin{align} &\text{minimize} &\sum_t \sum_s \text{slack}_{ts} \\ &\text{subject to} &\sum_i \text{motivation}_{its} + \text{slack}_{ts} &= \text{demand}_{ts} &&\forall t,s \\ &&-M(1-x_{its}) \le \text{motivation}_{its} - \text{mood}_{it} &\le M(1-x_{its}) &&\forall i,t,s \\ &&\text{motivation}_{its} &\le x_{its} && \forall i,t,s \\ &&\alpha_{it} \sum_s x_{its} + \text{mood}_{it} &= 1 &&\forall i,t\\ &&\text{motivation}_{its} &\in[0,1] &&\forall i,t,s \\ &&\text{slack}_{ts} &\ge 0 &&\forall t,s \\ &&\text{mood}_{it} &\in[0,1] &&\forall i,t \\ &&x_{its}&\in \{0,1\} &&\forall i,t,s\\ \end{align}

For the Dantzig-Wolfe reformulation, introduce $\lambda_{ir} \ge 0$ and substitute $\text{motivation}_{its} = \sum_r \text{motivation}_{its}^r \lambda_{ir}$ in the (objective and) complicating constraints to obtain the master problem: \begin{align} &\text{minimize} &\sum_t \sum_s \text{slack}_{ts} \\ &\text{subject to} &\sum_i \sum_r \text{motivation}_{its}^r \lambda_{ir} + \text{slack}_{ts} & = \text{demand}_{ts} &&\forall t,s &&(\text{$\pi_{ts}$ free})\\ &&\sum_r \lambda_{ir} &= 1 &&\forall i &&(\text{$\mu_i$ free})\\ &&\lambda_{ir} &\in\mathbb{Z}^+ &&\forall i,r\\ &&\text{slack}_{ts} &\ge 0 &&\forall t,s \end{align} The reduced cost of $\lambda_{ir}$ is $0-\sum_{t,s} \pi_{ts} \text{motivation}_{its}^r - \mu_i$, so the subproblem for block $i$ is: \begin{align} &\text{minimize} &0-\sum_{t,s} \pi_{ts} \text{motivation}_{its} - \mu_i \\ &\text{subject to} &-M(1-x_{its}) \le \text{motivation}_{its} - \text{mood}_{it} &\le M(1-x_{its}) &&\forall t,s \\ &&\text{motivation}_{its} &\le x_{its} && \forall t,s \\ &&\alpha_{it} \sum_s x_{its} + \text{mood}_{it} &= 1 &&\forall t\\ &&\text{motivation}_{its} &\in[0,1] &&\forall t,s \\ &&\text{mood}_{it} &\in[0,1] &&\forall t \\ &&x_{its}&\in \{0,1\} &&\forall t,s\\ \end{align}
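To illustrate the pricing step, here is a hedged Python sketch (the helper name `price_block` and the toy numbers are my own; this is not the automated SAS approach below). Because the subproblem constraints only couple variables within one day $t$, each day can be priced independently: working $k$ shifts on day $t$ forces $\text{mood} = 1 - \alpha_{it} k$, and each worked shift contributes $\text{motivation} = \text{mood}$. For small $|S|$ the day's shift subsets can simply be enumerated; subtracting $\mu_i$ from the returned value gives the reduced cost.

```python
from itertools import combinations

def price_block(pi, alpha, shifts):
    """pi: {(t, s): dual}; alpha: {t: draw for this nurse}.
    Returns (motivation column, objective value excluding the -mu_i term)."""
    column, value = {}, 0.0
    for t in alpha:                          # days decouple in the subproblem
        best_val, best_sub = 0.0, ()
        for k in range(len(shifts) + 1):
            for sub in combinations(shifts, k):
                mood = 1 - alpha[t] * k
                if mood < 0:                 # alpha*k + mood = 1, mood in [0,1]:
                    continue                 # this many shifts is infeasible
                val = -sum(pi[t, s] * mood for s in sub)
                if val < best_val:
                    best_val, best_sub = val, sub
        for s in best_sub:                   # motivation pinned to mood when x=1
            column[t, s] = 1 - alpha[t] * len(best_sub)
        value += best_val
    return column, value

pi = {(1, 1): 1.0, (1, 2): 1.0}
column, value = price_block(pi, alpha={1: 0.25}, shifts=(1, 2))
print(column, value)   # working both shifts beats one: 2 * 0.5 > 1 * 0.75
```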

When you solve the LP relaxation of the restricted master problem, be sure to impose $\lambda_{ir} \ge 0$ (instead of $\lambda_{ir} \in [0,1]$), as discussed in Variable bounds in column generation.

You can initialize with trivial columns (one per block $i$) that correspond to $\text{motivation}_{its}=x_{its}=0$ and $\text{mood}_{it}=1$. The first master solve will thus yield $\text{slack}_{ts}=\text{demand}_{ts}$ and $\lambda_{ir}=1$.
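The initialization above can be sketched in a few lines of Python (illustrative only; the toy demand is made up). With one all-zero motivation column per block and $\lambda_{i1}=1$, the demand rows can only be satisfied by slack:

```python
demand = {(1, 1): 2, (1, 2): 1, (2, 1): 1}     # toy demand[t, s]
blocks = (1, 2, 3)                             # one subproblem per nurse i
columns = {i: [{}] for i in blocks}            # column r=1: motivation = 0

# With lambda_{i,1} = 1 for every i, the demand rows force slack = demand:
lam = {(i, 1): 1 for i in blocks}
slack = {
    (t, s): d - sum(columns[i][r - 1].get((t, s), 0.0) * lam[i, r]
                    for (i, r) in lam)
    for (t, s), d in demand.items()
}
print(slack, sum(slack.values()))   # slack equals demand; objective = 4
```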


By request, here is SAS code that uses automated Dantzig-Wolfe decomposition:

proc optmodel;
   /* declare sets and parameters */
   set ISET, TSET, SSET;
   num alpha {ISET, TSET};
   num demand {TSET, SSET};

   /* read input data here */

   /* declare decision variables */
   var motivation {ISET, TSET, SSET} >= 0 <= 1;
   var slack {TSET, SSET} >= 0;
   var mood {ISET, TSET} >= 0 <= 1;
   var x {ISET, TSET, SSET} binary;

   /* declare objective */
   minimize z = sum {t in TSET, s in SSET} slack[t,s];

   /* declare constraints */
   con SatisfyDemand {t in TSET, s in SSET}:
      sum {i in ISET} motivation[i,t,s] + slack[t,s] = demand[t,s];

   con Indicator {i in ISET, t in TSET, s in SSET}:
      x[i,t,s] = 1 implies motivation[i,t,s] = mood[i,t] suffixes=(block=i);

   con MotivationImpliesX {i in ISET, t in TSET, s in SSET}:
      motivation[i,t,s] <= x[i,t,s] suffixes=(block=i);

   con AlphaMood {i in ISET, t in TSET}:
      alpha[i,t] * sum {s in SSET} x[i,t,s] + mood[i,t] = 1 suffixes=(block=i);

   /* call MILP solver with Dantzig-Wolfe decomposition algorithm */
   solve with milp / decomp;

   /* write output data here */
quit;

RobPratt
  • Thank you so much. One last question. What do you mean by "$\mu,\pi$ free"? – nflgreaternba Feb 04 '24 at 10:12
  • Like in the previous post, those are dual variables associated with equality constraints, so they are free/unbounded/unrestricted in sign. – RobPratt Feb 04 '24 at 14:49
  • Thanks. As there are $\mid I\mid$ subproblems and they are individual, can I drop the $i$ in every constraint? – nflgreaternba Feb 05 '24 at 16:17
  • Yes, for subproblem $i$ I have dropped the $i$ in the constraint index. You can also drop the $i$ from the subproblem variable indices if you want. But you must keep the $i$ in the parameters $\mu_i$ and $\alpha_{it}$. – RobPratt Feb 05 '24 at 17:11
  • And why is that? Why can I drop $i$ everywhere apart from $\alpha_{it}$ and $\mu_i$? Would that mean that when I implement the problem in a commercial solver, I create a subproblem for every $i\in I$ using a for loop? Would my SP objective after the for loop then be $\min -\sum_{t,s}\pi_{ts}motivation^r_{ts}-\mu_i$ (the $i$ would still be there)? – nflgreaternba Feb 05 '24 at 17:58
  • For subproblem $i$, the index $i$ is fixed. If every variable has an $i$, it's as if no variable has an $i$. Yes, that is the correct objective for subproblem $i$ if you omit the $i$ index in the variables. – RobPratt Feb 05 '24 at 18:39
  • I see. You can't omit the index of $\mu_i$ because it has nothing to do with the SP itself, but rather is the dual of the second ($\lambda$) constraint from the MP, correct? Also, I just read somewhere that the MP is not linear but rather a MIQCP, because of the multiplication $\lambda\cdot motivation$. Is that true, or is the model above still linear? – nflgreaternba Feb 05 '24 at 22:51
  • The model is linear. When motivation is multiplied by $\lambda$, it is not a variable but rather the fixed value of motivation in column $r$. – RobPratt Feb 05 '24 at 23:32
  • Okay great. And don't I have to add $\lambda_{ir}$, multiplied by the column cost, to the objective of the MP as well? Something like $\sum_{t,s}slack_{ts} +\sum_{i,r}\lambda_{ir}$ – nflgreaternba Feb 06 '24 at 15:32
  • You would if motivation appeared in the original objective, but it doesn't. – RobPratt Feb 06 '24 at 16:30
  • And obviously, since I am interested in the resulting schedule, I am interested in $x_{its}$. How would I get the optimal values? After the last iteration of solving the SP, which leads to optimality, do I then extract the values from the last SP iteration? – nflgreaternba Feb 07 '24 at 10:16
  • To recover optimal $x$ values, you can either store them each time you generate a new column or solve the original problem with the motivation variables fixed to their optimal values prescribed by $\lambda$. – RobPratt Feb 07 '24 at 13:30
  • Do I then have to create an extra index for $R$ in the code implementation or is the $^r$ just for illustration purposes? If an implementation is necessary, how does this index work, what is the range? – nflgreaternba Feb 08 '24 at 19:31
  • The index sets for columns are dynamically generated. You increment $r$ for block $i$ each time you find a new column for block $i$. – RobPratt Feb 08 '24 at 20:20
  • Super thanks. Do you happen to know how to implement this in Gurobi/CPLEX? So the model and the column generation? Would be super grateful for your help – nflgreaternba Feb 08 '24 at 23:04
  • I work for SAS, and we have an automated implementation of Dantzig-Wolfe decomposition: https://documentation.sas.com/doc/en/pgmsascdc/v_047/casmopt/casmopt_decomp_overview.htm – RobPratt Feb 08 '24 at 23:24
  • Would you mind helping me implementing it there? And one last general question. As the MP is not an IP, is the relaxation of the MP, which is usually the first step, even necessary? – nflgreaternba Feb 09 '24 at 14:28
  • The master is an IP, and the first step is to solve its LP relaxation via column generation. If the resulting solution is fractional, you would then need to branch ("branch-and-price"). The SAS code I just added to my answer does all of this automatically for you. – RobPratt Feb 09 '24 at 15:12
  • Thank you so much, but isn't $motivation^r_{its}$ continuous? – nflgreaternba Feb 09 '24 at 15:53
  • Yes, corrected. – RobPratt Feb 09 '24 at 16:08
  • Assuming all nurses are the "same", do I need to solve every subproblem or only one? Would I need to modify the model formulation then? – nflgreaternba Feb 10 '24 at 22:28
  • If $\alpha_{it}$ does not depend on $i$, you can solve one subproblem and use a common column pool, but then you would need to compute $-\sum_{t,s} \pi_{ts} \text{motivation}_{ts} - \min_i \mu_i$ to determine whether the reduced cost is negative. – RobPratt Feb 10 '24 at 23:52
  • Thank you again. Would your formulation be the new objective function and why is the new calculation necessary? And would I also need to translate the decision variable to be an integer, as can be seen in this paper (10.1002/nav.20201) with identical nurses or is your current formulation sufficient as $motivation$ is continuous and not binary as in the paper? – nflgreaternba Feb 11 '24 at 12:07
  • This has been a lot of questions for one question. :) If you want to ask about ways to exploit identical subproblems in column generation, please open a new question. – RobPratt Feb 11 '24 at 15:34
  • I still have one question. You suggest that I initialize the RMP with $motivation^{r=1}_{its}=0~\forall i,t,s$. So the restricted master problem in the case of $\mid I \mid = 3$ consists of three columns, right? I also tried your code, but I can't seem to get it working. See OP. Thanks – nflgreaternba Mar 04 '24 at 10:40
  • Yes, one initial column per block. Please open a new question for your SAS code, and I'll reply with the corrected code. – RobPratt Mar 04 '24 at 15:55
  • By the way, your problem can be solved separately for each $t$. Did you omit some linking constraints across $t$? – RobPratt Mar 04 '24 at 16:16
  • Thanks, I will ask shortly. Regarding your second point, what do you mean by that? – nflgreaternba Mar 04 '24 at 19:05
  • All of your variables and constraints have a $t$ index, and so your problem completely decomposes into independent problems, one for each $t$, which you can solve separately even without Dantzig-Wolfe. – RobPratt Mar 04 '24 at 19:10
  • I did not know that, but isn't the demand constraint the linking one? – nflgreaternba Mar 04 '24 at 19:16
  • The demand constraint links across $i$ but involves variables for only one $t$ at a time. – RobPratt Mar 04 '24 at 19:50
  • Ah I see. I also added this constraint to the SP's: $\sum_{u=t}^{u+Max_W}\sum_{s\in S} x_{ius}\geqslant Max_W \quad \forall i\in I,\ t\in \{1,2,\ldots,T-Max_W\}$. Then a decomposition is necessary, correct? – nflgreaternba Mar 04 '24 at 21:00
  • Yes, that kind of constraint links different $t$ values. I suspect you meant $t+$ instead of $u+$ in the upper summation index. – RobPratt Mar 04 '24 at 21:09
  • I see, thanks. So the general approach while implementing would be to extract the optimal values $motivation_{its}^*$ from the SPs and then assign the value to each of the respective $(t,s)$ demand constraints, multiplied with $\lambda_{ir}$ for each $i$ and $r=1+\text{current iteration}$. Correct? And then the sum of all $\lambda$'s for each $i$ must be equal to one. – nflgreaternba Mar 06 '24 at 06:23
  • Yes, the optimal solution values from the subproblems become constraint coefficients in the corresponding master constraints. – RobPratt Mar 06 '24 at 13:30
  • Thank you. And no values from $motivation$ are passed to the objective function, since $motivation$ does not occur there, right? – nflgreaternba Mar 06 '24 at 16:40
  • Correct. For your problem, the master objective function does not change when you generate new columns. – RobPratt Mar 06 '24 at 16:50
  • If I then find no more columns and finally solve the model with $\lambda \in \mathbb{Z}^+_0$, does that also yield the globally optimal solution, or do I still have to do something like branch-and-price? – nflgreaternba Mar 07 '24 at 19:35
  • If you are lucky, price-and-branch will yield an optimal solution, but generally you need branch-and-price: https://or.stackexchange.com/questions/8594/integer-column-generation-without-branch-price – RobPratt Mar 07 '24 at 20:07
  • If I understand this correctly, then it is possible that the solution found by CG differs from the optimal solution of the relaxed LP. But does a price-and-branch approach find the same solution as without decomposition? That is, should the solution be the same if I solve the model without decomposition with a solver and if I solve it with CG? Because in my Gurobi implementation it happens for certain seeds (for the calculation of $\alpha$) that the objective value is slightly lower with the simple implementation than when I solve it with CG (mostly $<1\%$). However, the objective values are often the same. – nflgreaternba Mar 09 '24 at 20:59
  • There is no reason to expect the solutions obtained from different methods to be the same. – RobPratt Mar 10 '24 at 04:32
  • Okay, thanks. To finally understand it: the classical Gurobi implementation solves the problem to optimality, while CG can find the optimal solution but cannot guarantee it. If I want to always find it with CG, then I have to implement a branch-and-price approach, right? – nflgreaternba Mar 10 '24 at 14:49
  • If by CG you mean the price-and-branch heuristic, then yes. Note that both price-and-branch and branch-and-price use column generation, so I recommend not using the name "CG" to refer to one but not the other. – RobPratt Mar 10 '24 at 14:55
  • Is there any way I can check whether the solution found with the price-and-branch algorithm is also the global optimum, without having to solve the compact model? Or how can I calculate the gap to the globally optimal solution? – nflgreaternba Mar 21 '24 at 09:05
  • See https://or.stackexchange.com/q/3198/500 – RobPratt Mar 21 '24 at 12:47