Say we run a 3 (A: A1, A2, A3) x 3 (B: B1, B2, B3) repeated-measures ANOVA. A significant p-value for factor A, for example, indicates that there is at least one pair (A1-A2, A1-A3, A2-A3) whose mean difference is statistically significant (let's assume there are no interaction effects). Normally, to determine which specific pair(s) differ significantly, post-hoc tests (multiple comparisons) are used, which include some type of correction (e.g., Bonferroni, Tukey).
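To make the setup concrete, here is a rough sketch in Python of what I mean by this first route, on simulated data (the column names subject/A/B/y, the effect sizes, the number of subjects, and the choice of statsmodels/scipy are all just for illustration, not my actual study):

```python
# Minimal sketch of the "ANOVA + corrected post-hoc" route, on simulated data.
# Everything here (column names, effect sizes, 20 subjects) is made up for
# illustration; it assumes numpy, pandas, scipy and statsmodels are installed.
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
rows = []
for s in range(20):                                  # 20 hypothetical subjects
    offset = rng.normal(0, 1)                        # subject-specific baseline
    for i, a in enumerate(["A1", "A2", "A3"]):
        for b in ["B1", "B2", "B3"]:
            rows.append({"subject": s, "A": a, "B": b,
                         "y": offset + 0.3 * i + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Omnibus 3 x 3 repeated-measures ANOVA (both factors within-subject)
res = AnovaRM(df, depvar="y", subject="subject", within=["A", "B"]).fit()
print(res.anova_table)

# Post-hoc: paired t-tests on the marginal means of A, Bonferroni-corrected
marginal = df.groupby(["subject", "A"])["y"].mean().unstack()
pairs = list(combinations(["A1", "A2", "A3"], 2))
raw_p = [stats.ttest_rel(marginal[a1], marginal[a2]).pvalue for a1, a2 in pairs]
print(dict(zip(pairs, multipletests(raw_p, method="bonferroni")[1])))
```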

An alternative approach is to run the ANOVA first and then follow it up with plain (uncorrected) pairwise t-tests comparing the factor levels. I'm wondering whether these two approaches are equally valid from a statistical point of view.
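Concretely, the only difference in this second route (as I understand it) would be skipping the adjustment step, i.e. continuing the hypothetical sketch above and reporting the raw paired t-test p-values instead of the Bonferroni-adjusted ones:

```python
# Second route, continuing the sketch above: same ANOVA, same paired t-tests,
# but the raw p-values are reported without any multiplicity adjustment.
print(dict(zip(pairs, raw_p)))
```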

And a second, related question: what is the correct interpretation when a main effect is found for a particular factor (or for the interaction), but the post-hoc comparisons are all non-significant?

  • Welcome to CV, Mikel. There are some subtleties here, beginning with the fact that a significant model overall is not equivalent to the existence of any significant differences among pairs. It would likely help to understand that before considering the rest of your question. – whuber Jul 05 '23 at 15:47
  • Hi whuber, I assume then that a significant p value for factor A does not indicate significant mean differences between at least one pair (A1-A2,A1-A3,A2-A3) of the factor levels? I'd really like to know about the subtleties. Thanks! – Mikel Jimenez Jul 09 '23 at 10:44
  • Here are some threads to start with: https://stats.stackexchange.com/questions/24720, https://stats.stackexchange.com/questions/552472, https://stats.stackexchange.com/questions/547107, https://stats.stackexchange.com/questions/192148 (read the comments). Others like these can be found with this site search. – whuber Jul 09 '23 at 17:05

0 Answers