In Borenstein et al.'s (2009) Introduction to Meta-Analysis, page 263, one can read the following:
A power analysis should be performed when the review is being planned, and not after the review has been completed. Researchers sometimes conduct a power analysis after the fact, and report that "Power was low", and therefore the absence of a significant effect is not informative. While this is correct, it is preferable to address the same question by simply reporting the observed effect size with its confidence interval. For example, "The effect size is 0.4 with a confidence interval of -0.10 to +0.90" is much more informative than the statement that "Power was low". The statement of effect size with confidence intervals not only makes it clear that we cannot rule out a clinically important effect, but also gives a range for what this effect might be (here, as low as -0.10 and as high as +0.90).
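To make the quoted numbers concrete, here is my own rough sketch (not anything from the book): assuming the 0.4 estimate and its 95% CI of -0.10 to +0.90 come from an approximately normal estimator, the standard error can be backed out of the interval width and used to compute the power to detect various hypothetical true effects.

```python
from scipy.stats import norm

# Numbers from the quoted passage; assuming a 95% CI from an
# approximately normal effect-size estimate.
estimate = 0.4
ci_low, ci_high = -0.10, 0.90

z_crit = norm.ppf(0.975)                  # two-sided 5% critical value
se = (ci_high - ci_low) / (2 * z_crit)    # standard error implied by the CI width

def power(true_effect, se, alpha=0.05):
    """Two-sided power of a z-test, given a hypothetical true effect and SE."""
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(true_effect / se - z) + norm.cdf(-true_effect / se - z)

for delta in (0.2, 0.4, 0.6):
    print(f"power to detect a true effect of {delta:.1f}: {power(delta, se):.2f}")
```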
I don't quite see the logic behind this. Let's say that the numbers above were based on a sample that yielded 99.9% power to find a statistically significant effect. In that case, if I were a reader trying to make an informed decision about whether the effect was real or not, I would much rather be told that the power was very high but no significant effect was found than just receive the estimated mean and a confidence interval. The same goes if you reverse the situation: if the numbers above were based on a sample that yielded 0.1% power to find a statistically significant effect, I would like to know this.
In the first scenario, knowing the power would let me guess that the effect probably isn't real, and in the second scenario I could conclude that it would probably be worth continuing to look for the effect in the future.
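For the first scenario, this is roughly how I was picturing the numbers (again just my own sketch, assuming a normal estimator and a two-sided test at alpha = 0.05, with power computed for an assumed true effect of 0.4):

```python
from scipy.stats import norm

alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)

def se_for_power(assumed_effect, target_power):
    """Standard error a study would need for a two-sided z-test to reach the
    target power at the assumed true effect (far-tail term ignored)."""
    return assumed_effect / (z_crit + norm.ppf(target_power))

for p in (0.999, 0.80, 0.50):
    se = se_for_power(0.4, p)
    print(f"{p:.1%} power for an effect of 0.4 -> SE ≈ {se:.3f}, "
          f"95% CI half-width ≈ {z_crit * se:.3f}")
```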
What am I missing here?