The answer to almost all questions that I see here regarding multiple comparison 'corrections' such as Bonferroni is that the desirability of their application depends on things that are usually not mentioned in the question! That means that any really accurate and balanced answer has to be very long. I will not make this one long enough, but I will point you to my best attempt at a long-form answer: A Reckless Guide to P-values: Local Evidence, Global Errors
What is the nature of your study, and what are your inferential objectives? Is the study a preliminary one that might be thought of as 'hypothesis generating', or is it intended to be a standalone 'definitive' account? You might be more interested in the evidential meaning of the data than in the long-run error rate consequences of your statistical procedures.
The controversy that you mention might well be a consequence of people being unwilling to imagine that not every user of statistical methods shares their particular purposes and circumstances.
Are the null hypotheses of the several tests the same, or related, or independent? Are any of the data shared across tests?
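Why does dependence between tests matter? A small illustrative simulation (parameters such as the number of tests, the correlation, and the repetition count are my own choices, not anything prescribed) can show that when test statistics are correlated, as happens when tests share data, the family-wise error rate under the null is lower than under independence, so an independence-based correction like Bonferroni becomes conservative:

```python
import random
from statistics import NormalDist

random.seed(1)
ZCRIT = NormalDist().inv_cdf(0.975)  # two-sided, per-test alpha = 0.05
M, REPS = 5, 20000                   # 5 null tests, 20000 simulated studies

def fwer(correlated):
    """Estimate the chance of at least one 'significant' result
    among M true-null z-tests."""
    hits = 0
    for _ in range(REPS):
        if correlated:
            # Shared component gives pairwise correlation 0.5
            s = random.gauss(0, 1)
            zs = [(s + random.gauss(0, 1)) / 2 ** 0.5 for _ in range(M)]
        else:
            zs = [random.gauss(0, 1) for _ in range(M)]
        hits += any(abs(z) > ZCRIT for z in zs)
    return hits / REPS

print(f"FWER, independent tests: {fwer(False):.3f}")  # near 1 - 0.95**5 = 0.226
print(f"FWER, correlated tests:  {fwer(True):.3f}")   # noticeably lower
```

The independent case approaches the familiar 1 - (1 - 0.05)^5 ≈ 0.226, while the shared-data case does not, which is one reason a blanket 'correction' chosen without regard to the dependence structure can be poorly calibrated.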
'Corrections' for multiplicity always come at the cost of reduced power. In other words, they trade extra type II errors for extra protection against a category of type I errors. Given your inferential objectives, is that trade-off going to render your designed balance of false positive and false negative errors undesirable? Did you design that balance with the 'correction' in mind? Did you design that balance at all, or are you relying on the arbitrary p < 0.05?
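The power cost of a Bonferroni-style adjustment can be made concrete. This sketch (the effect size, number of tests, and 80%-power starting point are illustrative assumptions, not from any particular study) computes two-sided z-test power at the nominal and Bonferroni-adjusted thresholds:

```python
from statistics import NormalDist

def ztest_power(effect_se, alpha):
    """Power of a two-sided z-test when the true effect is
    `effect_se` standard errors, tested at level `alpha`."""
    nd = NormalDist()
    zcrit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(effect_se - zcrit) + nd.cdf(-effect_se - zcrit)

m = 10         # number of tests in the family (hypothetical)
alpha = 0.05
effect = 2.8   # true effect in SE units; gives ~80% power uncorrected

print(f"power at alpha = {alpha}:      {ztest_power(effect, alpha):.3f}")
print(f"power at alpha/{m} (Bonferroni): {ztest_power(effect, alpha / m):.3f}")
```

With these numbers, a study designed for roughly 80% power per test retains only about 50% power after dividing alpha by ten, which is exactly the kind of shift in the false positive/false negative balance that should be planned for rather than discovered after the fact.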