
What does it mean to have a high p value but a very very low effect size? What can I conclude? Is it necessary to report the effect size when p=0.998?

CaroZ

  • Have you constructed a confidence interval for the effect size? That may be helpful. – wzbillings Oct 31 '23 at 15:05
  • Why the "but" between "high p value" and "very very low effect size"? Wouldn't a high p-value be expected to go with a low effect size? There's no hint of a contradiction that would suggest "but". – Glen_b Oct 31 '23 at 15:13
  • A high $p$-value is telling you the observations are close to those predicted by the null hypothesis, while a low $p$-value would suggest that the observations appear more extreme under that hypothesis. Seeing $99999$ heads when flipping a possibly biased coin $200000$ times ($p=0.9982$) does not guarantee the coin is unbiased, but does suggest that the extent of any bias is likely to be very small. – Henry Oct 31 '23 at 15:25
  • This page asks (and answers) essentially the same question: https://stats.stackexchange.com/questions/330713/justification-for-reporting-non-significant-effect-sizes/627783#627783 – Harvey Motulsky Oct 31 '23 at 18:11
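Henry's coin-flip figure can be checked directly. Below is a minimal sketch (the helper names are mine, not from the thread) that computes the exact two-sided binomial p-value for 99999 heads in 200000 flips of a fair coin, using log-gamma to avoid enormous integers:

```python
import math

def binom_pmf_half(n, k):
    """Exact PMF of Binomial(n, 1/2) at k, computed via log-gamma."""
    log_p = (math.lgamma(n + 1) - math.lgamma(k + 1)
             - math.lgamma(n - k + 1) - n * math.log(2))
    return math.exp(log_p)

def two_sided_p(n, k):
    """P(|X - n/2| >= |k - n/2|) for X ~ Binomial(n, 1/2)."""
    dev = abs(k - n / 2)
    # Complement: sum the PMF over outcomes strictly LESS extreme than k.
    lo = math.floor(n / 2 - dev) + 1
    hi = math.ceil(n / 2 + dev) - 1
    return 1 - sum(binom_pmf_half(n, j) for j in range(lo, hi + 1))

print(two_sided_p(200000, 99999))  # ≈ 0.998, matching the comment's p = 0.9982
```

The observed count is only one flip away from the expected 100000, so almost every possible outcome is "at least as extreme", which is why the p-value is nearly 1.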

1 Answer


There is a strong case for reporting effect sizes alongside p-values, as they convey two different things. The p-value quantifies the strength of evidence against the null hypothesis, whereas the effect size tells you about the magnitude of the effect. You can read this article for example: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3444174/

And refer to this post:

https://www.researchgate.net/post/Do-I-need-to-report-effect-size-when-p-value-shows-not-significant-result

You could report both the effect size and the p-value, noting, for example, that the difference between the two levels of your variable is small and non-significant.
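To illustrate reporting both numbers side by side, here is a minimal sketch with made-up data (the function names and values are hypothetical, and a z-test is used as a stdlib-only stand-in for a t-test): a tiny standardized mean difference goes hand in hand with a large p-value.

```python
import math
import statistics

def cohens_d(a, b):
    """Standardized mean difference using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(pooled_var)

def two_sided_p_normal(a, b):
    """Two-sided p-value from a z-test (normal approximation; a t-test is
    more appropriate for small samples, but needs a t CDF)."""
    na, nb = len(a), len(b)
    se = math.sqrt(statistics.variance(a) / na + statistics.variance(b) / nb)
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical measurements for the two levels of a variable.
group1 = [5.01, 4.98, 5.02, 4.99, 5.00, 5.01]
group2 = [5.00, 5.02, 4.99, 5.01, 4.98, 5.00]

d = cohens_d(group1, group2)
p = two_sided_p_normal(group1, group2)
print(f"Cohen's d = {d:.3f}, p = {p:.3f}")  # small |d|, large p
```

Reporting something like "d = 0.12, p = 0.84" lets the reader see both that the effect is tiny and that the data provide no evidence against the null.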
