
I am working on a Lagrangian relaxation for a minimization MIP.

Everything seemed to be working fine until I started running a batch of instances.

Checking the log for one of the instances, I found that the lower bound given by the LR algorithm was greater than my optimal objective.

One of my concerns, beyond a coding error on my part, is the chance that the solver is removing columns or performing other speed-ups that are only valid because of the relaxed constraints.

I saw this kind of problem before when a friend was implementing a branch-and-cut with CPLEX without changing a solver parameter.

Is there any set of parameters I should deactivate, such as presolve, cutting planes, etc.?

PS: I posted a copy of this question on the Gurobi community forum, but thought it would be good to ask here as well, since we are more active here and it could also draw opinions from non-Gurobi users.
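For reference, the property I understand the theory to guarantee: for any multiplier λ ≥ 0, the Lagrangian value L(λ) is a valid lower bound on the optimum of a minimization problem, provided the inner minimization is solved exactly. A toy sanity check (all problem data made up for illustration, solved by brute-force enumeration, not my actual model):

```python
from itertools import product

# Toy minimization MIP (data made up for illustration):
#   min 3*x1 + 2*x2  s.t.  x1 + x2 >= 2,  x in {0,1}^2
c = [3, 2]
a = [1, 1]  # coefficients of the single constraint we relax
b = 2

def obj(x):
    return sum(ci * xi for ci, xi in zip(c, x))

def lhs(x):
    return sum(ai * xi for ai, xi in zip(a, x))

# True optimum by enumeration: only (1,1) is feasible, so z* = 5
z_star = min(obj(x) for x in product([0, 1], repeat=2) if lhs(x) >= b)

def lagrangian(lam):
    """Exact inner minimization of c.x + lam*(b - a.x) over x in {0,1}^2."""
    return min(obj(x) + lam * (b - lhs(x)) for x in product([0, 1], repeat=2))

# Weak duality: L(lam) <= z* for every lam >= 0 (holds only because the
# inner minimization above is exact)
for lam in [0.0, 1.0, 2.5, 10.0]:
    print(lam, lagrangian(lam), z_star)
```

If a check like this fails on a small instance of your own model, the bug is in the subproblem formulation or the inner solve, not in the subgradient updates.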

seimetz
    How did you obtain the supposedly optimal objective value? Did you check the corresponding solution to confirm that it is actually feasible? – prubin Feb 20 '20 at 21:07
  • With the original MIP after some long runtime. Now I'm using this value as an initial upper bound for the subgradient – seimetz Feb 20 '20 at 21:17
    So you are solving the subproblem using a MIP solver? If so you’d have to be very careful to get a true optimal solution (within a very small tolerance) otherwise the bound will not be valid. How far is the bound from the supposedly optimal solution? – LarrySnyder610 Feb 20 '20 at 21:49
  • Which bound? The relaxed model bound is very far... The LR best bound was actually good for smaller instances. For larger ones, as a proof of concept, I was solving with a time limit, and to avoid invalid bounds I was using the Lagrangian problem lower bound just to check the algorithm flow faster – seimetz Feb 20 '20 at 22:32
  • If you terminate the solution of the relaxed MIP prematurely (time limit, memory limit, nonzero relaxation gap limit), the (relaxed) objective value you get may not be a valid lower bound for the original problem. I think that's what Larry was getting at in his comment. – prubin Feb 20 '20 at 23:07
  • You are right. But isn't that true only for the Lagrangian function's upper bound? Because if its optimal value is always lower than the original problem's optimal objective, the Lagrangian function's lower bound would be a valid lower bound for the original problem. Not sure if I'm abusing the subgradient theory with this simplification... – seimetz Feb 20 '20 at 23:37
  • Anyway, it's important to note that even when solving the subproblem to optimality I still face this issue of the bound being too large – seimetz Feb 21 '20 at 01:11
  • Are you relaxing any equality constraints (that lead to free dual multipliers for the dualized constraints)? – ktnr Feb 21 '20 at 07:47
  • Yes! There is one of this type. Any numerical issues with them? – seimetz Feb 21 '20 at 18:10
  • @seimetz, I have the same issue in the problem I am solving, but in my case there is no big-M and the solver parameters could not repair the issue. Since you mentioned "With the original MIP after some long runtime. Now I'm using this value as an initial upper bound for the subgradient", could you please say whether you also set the initial values of the variables this way, or just the upper bound? If not, how do you repair the infeasible solution in each iteration of the SB algorithm? – A.Omidi Feb 26 '22 at 07:35

1 Answer


After the discussion here and a suggestion on my post at Gurobi's community forum, I'll post an answer for the record.

Concerning presolve: I found out that, in order to check whether it is what is producing the strange results, the parameters to change are:

'Presolve': 0,
'Cuts': 0,
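The parameter names above are the actual Gurobi names; keeping them in a dict makes the experiment easy to toggle. The surrounding code below is just a sketch (applying them needs a Gurobi installation, so that part is commented out):

```python
# Parameters to disable for the presolve/cuts check:
# 'Presolve' = 0 turns presolve off entirely; 'Cuts' = 0 disables all cut generation.
params = {
    'Presolve': 0,
    'Cuts': 0,
}

# With a gurobipy Model `model`, one would apply them roughly like this:
# for name, value in params.items():
#     model.setParam(name, value)
```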

It turns out that in my problem the issue was neither bad code in the subgradient method nor the presolve phase.

I found that I had tightened one of the big-M values in the formulation too much, and this is what produced a subproblem with a strange bound.
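To illustrate what "tightened too much" means here (toy numbers for demonstration, not my actual model): if M is smaller than the largest value the linked variable actually needs, the big-M constraint cuts off feasible points, and the subproblem minimum, which is supposed to be a lower bound, gets inflated above the true optimum:

```python
# Toy problem: min y + (10 - x)  s.t.  x <= M*y,  y in {0,1},  x in {0..10}.
# The true model needs x to reach 10, so any M < 10 is "tightened too much".

def solve(M):
    """Brute-force minimum of the toy big-M model for a given M."""
    best = float('inf')
    for y in (0, 1):
        for x in range(0, 11):
            if x <= M * y:  # the big-M linking constraint
                best = min(best, y + (10 - x))
    return best

print(solve(10))  # correct M: optimum is 1 (y=1, x=10)
print(solve(5))   # over-tightened M: minimum jumps to 6, above the true optimum
```

The same mechanism in a Lagrangian subproblem silently turns a valid lower bound into one that can exceed the optimal objective.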

Thanks to everyone who asked for more information and gave suggestions.

seimetz
  • I have the same problem. The lower bound exceeds the best found/optimal solution in the literature. I decomposed the MILP into three subproblems with two Lagrangian multipliers. The method works well for small instances; however, I face the same problem for larger instances. Could you please explain how you figured out that you had over-tightened the big M? I do not have a big M in my formulation, so I am wondering how I can find the bug in my algorithm. Was the time limit an issue for your problem? Thank you in advance. – Aria Jun 26 '20 at 17:03
  • As far as debugging goes, if you fix the integer variables in the Lagrangian subproblems to their values in the best found or optimal solution, does it still produce a bound that exceeds the best known solution value? Is there an issue with alleged infeasibility? If either happens, close investigation of the fixed subproblems may point you to a formulation error or a numerical stability problem. – prubin Jun 27 '20 at 16:19