
Do you have to normalize the objectives when using the weighted-sum approach with multiple objectives?

Actually, I thought that I should. But I have now run several experiments with different optimization models, and the results reveal that there is no benefit in scaling the different objectives (by using the optimal value obtained from solving each single-objective problem). On the contrary, I would even say that in my experiments, scaling the objective functions when using a weighted-sum approach makes the results worse, even when the objectives have different magnitudes.

PeterBe
  • @Rob: Thanks for your answer. Actually, it does not really answer my question. I added additional information from my experience that scaling the objectives makes the optimization even worse (worse results, and when using different weights, not a lot is changing). – PeterBe Mar 09 '23 at 01:25
  • @PeterBe I'd combine the above answer with this answer https://or.stackexchange.com/questions/8307/how-to-transform-a-thermal-range-constraint-into-the-objective-function. Solve the objectives separately, then, using the optimal points as constraints, minimize the total violations. – Sutanu Majumdar Mar 09 '23 at 01:55
  • @Sutanu: Thanks for your comment. Actually, I normalize by using the optimal solution, but I still get bad results. So my impression is that the normalization is not necessary or even helpful. – PeterBe Mar 09 '23 at 03:09
  • https://en.wikipedia.org/wiki/Multi-objective_optimization#No-preference_methods – Rob Mar 09 '23 at 22:52
  • @Rob: Thanks for the link. As Wikipedia is rarely a good scientific source, I'd like to know if you know of a scientific study that investigated this problem with modern solvers like CPLEX or Gurobi. I somehow have the impression that they can cope with the weighted-sum approach without the need to normalize the objectives. As stated in my question, the experience with my models is that normalizing tends to make the results even worse and is definitely not beneficial. – PeterBe Mar 09 '23 at 23:09
  • @Rob: Thanks Rob for your comments. Any comment on my last comment? I'll highly appreciate every further comment from you. – PeterBe Mar 10 '23 at 23:28
  • My answer is "yes", and I've offered a reason; actually doing proper research and linking to several papers is more than I have time available ATM. Frequently copying a well written title (such as yours) into a search engine provides results at ArXiv which I use in my answers; a more qualified answer being "it depends". :) – Rob Mar 11 '23 at 16:31
  • 1
    @Rob: Thanks Rob for your answer. I really appreciate it. I'll go with the last part of the answer when you mentioned "it depends". In my several cases it was even worse when you normalized the goals even if they have different magnitudes. So for me it is definitely not a necessity to normalize the objectives. It can be helpful but it does not have to be (and it can even make the results worse) – PeterBe Mar 12 '23 at 21:56

2 Answers


It depends on what you want to achieve, but I would like to argue, contrary to the answer by @Merve Özer, that you do not need to (explicitly) normalize the objectives.

If you have two objective functions $f_1$ and $f_2$ and you want to bring $f_2$ into the same magnitude as $f_1$, you would find a constant, say $\alpha$, multiply $f_2$ by this constant, and obtain a new normalized objective $\tilde{f}_2(x)=\alpha f_2(x)$. Next, you would create a weighted sum of $f_1$ and $\tilde{f}_2$, with weights $\tilde{w}_1,\tilde{w}_2>0$, and optimize this: \begin{equation} \min \tilde{w}_1f_1(x)+\tilde{w}_2\tilde{f}_2(x) \end{equation} This is simply the same as optimizing the weighted sum of the original objective functions with weights $w_1=\tilde{w}_1$ and $w_2=\alpha \tilde{w}_2$. Hence, you should "just" find good weights for the original objectives.
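This equivalence is easy to check numerically. A minimal sketch, with toy quadratic objectives of very different magnitudes (all functions and values here are illustrative, not from the thread): absorbing the constant $\alpha$ into the weight gives the same minimizer as normalizing first.

```python
# Toy objectives (assumed for illustration): "normalize then weight" is the
# same problem as "weight the original objectives differently".

def f1(x):
    return (x - 1.0) ** 2            # small-magnitude objective

def f2(x):
    return 1000.0 * (x - 3.0) ** 2   # large-magnitude objective

alpha = 1.0 / 1000.0                 # normalization constant for f2
w1, w2 = 0.5, 0.5                    # weights for the normalized problem

candidates = [i / 100.0 for i in range(501)]  # grid over [0, 5]

# minimize the weighted sum of f1 and the normalized objective alpha*f2
x_norm = min(candidates, key=lambda x: w1 * f1(x) + w2 * (alpha * f2(x)))

# minimize the plain weighted sum with the constant absorbed: w2' = alpha*w2
x_plain = min(candidates, key=lambda x: w1 * f1(x) + (alpha * w2) * f2(x))

print(x_norm == x_plain)  # True: both formulations pick the same minimizer
```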

A further thing to observe is that it is sometimes advised to scale both objectives such that the set of non-dominated outcome vectors is contained in a (unit) hypercube. In many cases this will lead to numerical problems, as large values are "squeezed" into a small box.

This answer relies on two assumptions:

  1. You want a Pareto optimal solution to your multi-objective optimisation problem; the weighted-sum approach is only guaranteed to return a supported efficient solution.

  2. It is often very difficult, even after normalizing the objectives, to predict the characteristics of the solution obtained by optimising the weighted-sum scalarisation. For some problems, even small changes to the weights will lead to very different solutions; for others, large changes do not change the solution much.
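The second point can be illustrated with a tiny discrete example (the outcome vectors below are made up for illustration): a small shift in the weights can flip the weighted-sum minimizer between very different non-dominated solutions.

```python
# Two hypothetical non-dominated outcome vectors (f1, f2) per solution.
outcomes = {"A": (0.0, 10.0), "B": (9.9, 0.0)}

def best(w1):
    """Return the solution minimizing w1*f1 + (1 - w1)*f2."""
    w2 = 1.0 - w1
    return min(outcomes, key=lambda k: w1 * outcomes[k][0] + w2 * outcomes[k][1])

print(best(0.50))  # B: 0.5*9.9 = 4.95 beats 0.5*10 = 5.0
print(best(0.51))  # A: shifting the weight by just 0.01 flips the solution
```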

Sune

Yes, you need to normalize; it is not just about the weighting. If you don't, one of the objectives dominates the others. The reason you got worse results may be that you used the weight coefficients improperly, or the problem may arise from something else. Generally, when I use weight coefficients, their sum is equal to 1.
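A minimal sketch of the normalization scheme discussed in the question (all objectives and values below are hypothetical): solve each single-objective problem first, then divide each objective by its individual optimum before forming the weighted sum.

```python
# Hypothetical objectives; the positive offsets keep the individual optima
# nonzero so that dividing by them is well defined.

def f1(x):
    return (x - 1.0) ** 2 + 1.0              # individual optimum: 1 at x = 1

def f2(x):
    return 500.0 * (x - 4.0) ** 2 + 2000.0   # individual optimum: 2000 at x = 4

candidates = [i / 100.0 for i in range(501)]  # grid over [0, 5]

# step 1: ideal values from the single-objective problems
z1 = min(f1(x) for x in candidates)
z2 = min(f2(x) for x in candidates)

# step 2: weighted sum of the objectives scaled to comparable magnitudes
w1, w2 = 0.5, 0.5
x_star = min(candidates, key=lambda x: w1 * f1(x) / z1 + w2 * f2(x) / z2)

print(x_star)  # a compromise between the two individual minimizers x=1 and x=4
```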

  • Thanks Merve for your answer. For me, the weight coefficients also always sum up to 1. Still, the normalization does not lead to any benefit. I tried it on 5 different models and it is always the same. It seems that the normalization leads to worse results, or at least it is not better. So I'd deduce that the normalization is not necessary when using the weighted-sum approach. – PeterBe Mar 09 '23 at 03:05
  • @AirSquid: Thanks Merve for your comments. Any comment on my last comment? I'll highly appreciate every further comment from you. – PeterBe Mar 10 '23 at 04:41