
I have a set of linearized constraints that are modelled using big-Ms. Now, it is, of course, common knowledge that one should make the value of M as small as possible in order to obtain tighter LP relaxations of (e.g.) the MIP we are solving.

I am looking for examples where this tightening of M is really useful from a computational perspective. In my experience, the smallest valid value of M is often so easy to determine (e.g., the cardinality of a set, or the maximum length of the time horizon) that it does not really influence computational performance.
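To make this concrete, here is a hypothetical example of the kind of constraint I have in mind (the notation is made up purely for illustration): with binary assignment variables $x_{ij}$ and an opening variable $y_i$, the linking constraint
$$\sum_{j \in J} x_{ij} \le M \, y_i$$
is valid for any $M \ge |J|$, and the smallest choice $M = |J|$ is immediate from the model.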

Albert Schrotenboer

4 Answers


The bigger the big-M is, the more likely numerical issues are to arise in solvers.

If you have right-hand sides around $10^{10}$ and objective function coefficients in the range of $10^{-2}$, then solvers will have a hard time dealing with such a wide range of values. And big-M's are the usual suspects in such situations.

So the smaller the big-M, the tighter the relaxation and the more numerically stable the matrix is.
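A rough illustration of the numerical side (the tolerance figure is a typical default, not specific to any one solver): with an integrality tolerance around $10^{-5}$, a solver may accept $y = 10^{-5}$ as "integer feasible" with value 0, yet a constraint $x \le M y$ with $M = 10^{10}$ then still lets $x$ take values up to $10^{5}$, so quantities leak through a facility that is nominally closed.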

One option when dealing with such large big-M values is to use indicator constraints. They are a great way to express if-then logic in a MIP. See examples of indicator constraints for Xpress Mosel here.
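As a minimal sketch of the same idea outside Mosel (using gurobipy here; the variable names, bounds, and objective are made up for illustration):

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("indicator_demo")
x = m.addVar(lb=0.0, ub=100.0, name="x")   # continuous quantity
y = m.addVar(vtype=GRB.BINARY, name="y")   # on/off decision

# Big-M version (needs a valid upper bound M on x):
#   m.addConstr(x <= M * y)

# Indicator version: if y == 0 then x must be 0; no M appears at all.
m.addGenConstrIndicator(y, 0, x <= 0.0)

# Toy objective: earn x but pay a fixed cost of 10 to switch y on.
m.setObjective(x - 10 * y, GRB.MAXIMIZE)
m.optimize()
print(x.X, y.X)   # expect x = 100, y = 1
```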

Baykal

I often see people set $M$ to something like $10^{12}$, when the rest of the model is on the order of $10^2$, because they got the message that $M$ should be "a large constant". Reducing $M$ to something several orders of magnitude smaller then does have a noticeable impact on the run time.

My point is: once you know that you should be careful about choosing $M$, it might not make much difference whether you set it to $10^3$ or $10^4$, and it might not be worth investing too much time to get it just right.

But if you are still in the frame of mind that says "set $M$ as large as you want", there can be a huge difference when you bring $M$ down from the stratosphere and make it something reasonable.
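To see why, consider a standard fixed-charge constraint (a generic illustration, not any particular model): $x \le M y$ with $y \in \{0,1\}$. In the LP relaxation, $y$ only needs to satisfy $y \ge x/M$. If $x$ is on the order of $10^2$ and $M = 10^{12}$, the relaxation can set $y \approx 10^{-10}$ and pay essentially none of the fixed cost attached to $y$; with $M = 10^3$ it must pay at least 10% of it.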

LarrySnyder610

The practical study Analysis of Strength and Weaknesses of a MILP Model for Revising Railway Traffic Timetables includes an analysis of the influence of big M constraints. The conclusion is mixed, though: in their model, knowledge of sharp M-values has a notable effect, but sharp values are obviously hard to find in practice.

Marcus Ritt

I've run into issues with this in supply chain problems that choose which facilities to open. In my model, trucks delivering to customers could only come from open facilities, so the big-M had to be at least as large as the total number of trucks that could leave a facility.

For my real-world problem, there were more than 10,000 trucks (my original choice of M).
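A minimal sketch of the kind of linking constraint involved (gurobipy, with made-up facilities, customers, and demands; the real client model is of course much larger):

```python
import gurobipy as gp
from gurobipy import GRB

facilities = ["f1", "f2"]
customers = ["c1", "c2", "c3"]
demand = {"c1": 40, "c2": 30, "c3": 50}   # trucks needed per customer
total_trucks = sum(demand.values())        # 120: smallest generally valid big-M

m = gp.Model("facility_trucks")
open_ = m.addVars(facilities, vtype=GRB.BINARY, name="open")
ship = m.addVars(facilities, customers, vtype=GRB.INTEGER, name="trucks")

# Every customer's demand must be covered by some facility.
m.addConstrs(ship.sum("*", c) >= demand[c] for c in customers)

# Trucks can only leave a facility that is open.
# Anything larger than total_trucks still gives a correct model,
# but a weaker LP relaxation.
M = total_trucks
m.addConstrs(ship.sum(f, "*") <= M * open_[f] for f in facilities)

# Made-up costs: a fixed cost per open facility plus a cost per truck.
m.setObjective(1000 * open_.sum() + ship.sum(), GRB.MINIMIZE)
m.optimize()
```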

Zohar Strinka
  • So, did you notice significant computational differences if you set M larger than its smallest value? If so, I would be interested in the model :) – Albert Schrotenboer Jun 01 '19 at 23:02
  • I just gave it a test (this was work for a client of mine). For that M, it made no difference time-wise whether I used the minimum value or 100 times that value. I tried blowing up all three big-Ms in my model, but saw no difference in computational time for my particular problem instance. – Zohar Strinka Jun 01 '19 at 23:29
  • I will add: choosing an M that was not big enough for some of my client's instances did cause problems. So in the future I'll be erring on the side of a bigger M than I think I need. – Zohar Strinka Jun 01 '19 at 23:30
  • I usually set M at a value 2-3x the largest potential LHS value. That helps avoid computational issues while making sure the constraint does its job. – Ralph Jun 03 '19 at 22:00
  • The lesson for me was mostly a reminder that customer needs may change. The initial M was plenty, but then the data changed :) – Zohar Strinka Jun 03 '19 at 22:07