Rules of thumb for effect sizes have been widely criticized (see this discussion and this discussion for some examples). Since you mention Cohen's $d$, be aware that Cohen himself advised against using the rules of thumb he suggested (for $d$ and other effect sizes). He considered them a last resort, to be used only when you have no other way of judging the importance of a given effect size yourself. Unfortunately, his caveats are routinely ignored. Moreover, I doubt there are many circumstances in which you absolutely cannot judge the importance of an effect size yourself: the context and goals of the study should give you the information you need for that. (And even if you were in a situation where context offered no guidance, it would be more advisable to simply report the value of the effect size without attaching any qualitative label to it.)
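(For reference, "the value" here is just the standardized mean difference. For two groups, Cohen's $d$ is

$$ d = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}, $$

i.e. a difference expressed in standard-deviation units. What one standard deviation amounts to in practice is precisely the kind of context-dependent question a fixed label cannot answer.)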
That context matters becomes evident if, for example, you reason in terms of costs and benefits. If you have an effect size measuring the positive impact of an intervention on a population, you still have to take context into account to judge whether the intervention is worth it. If the intervention has an effect size of 1.5 but also produces huge unintended financial or social costs, it may be much less desirable than another intervention with a smaller positive impact (say, an effect size of 0.1) but very limited costs.
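To make that comparison concrete, here is a minimal sketch. Everything in it is a hypothetical assumption chosen for illustration: the monetary value of a one-standard-deviation improvement (`value_per_sd`), the cost figures, and the linear benefit model itself.

```python
def net_benefit(effect_size, value_per_sd, cost):
    """Crude net benefit: monetized gain from a standardized effect
    minus the total cost of the intervention.
    (Hypothetical linear model, for illustration only.)"""
    return effect_size * value_per_sd - cost

# Hypothetical figures: intervention A has a "large" effect but is
# expensive; intervention B has a small effect but costs almost nothing.
a = net_benefit(effect_size=1.5, value_per_sd=1000, cost=1450)
b = net_benefit(effect_size=0.1, value_per_sd=1000, cost=10)

print(f"A: {a}, B: {b}")  # A: 50.0, B: 90.0 -- the "small" effect wins here
```

The point is not this toy model, of course, but that the decision turns entirely on quantities the qualitative label never sees.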
Using a general rule of thumb in this context does not help at all in judging the relevance of the effect size, and may even prevent you from thinking through and drawing the relevant conclusions from your study. Who cares if the effect size of the first intervention is labelled "large", if the second intervention has a smaller effect size but is the better choice overall? Labels are not really helpful here.