
If I run a linear mixed model with the lme() function and get results like these (comparing the score of 4 treatment groups against a placebo group):

> summary(model <- lme(score ~ treatment * session + score_baseline, data, random = ~ 1 + session | ID, na.action = na.exclude, method = "REML"))    

Linear mixed-effects model fit by REML
 Data: data 
       AIC      BIC    logLik
  8356.717 8468.464 -4155.358

Random effects:
 Formula: ~1 + session | ID
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev    Corr  
(Intercept) 19.711759 (Intr)
session      4.814042 -0.404
Residual    13.233045       

Fixed effects: score ~ treatment * session + score_baseline 
                        Value Std.Error  DF   t-value p-value
(Intercept)          8.646642  6.976798 800  1.239342  0.2421
treatmentA         -13.230278  7.479517 152 -1.768868  0.0699
treatmentB           6.329995  7.532880 152  0.840315  0.5021
treatmentC          -3.114359  7.494865 152 -0.415532  0.7011
treatmentD         -12.449960  7.475326 152 -1.665474  0.0892
session             12.722259  0.878443 800 14.482734  0.0000
score_baseline       1.108290  0.049313 152 22.474487  0.0000
treatmentA:session   4.399646  1.727357 800  2.547039  0.0132
treatmentB:session  -2.038450  1.727357 800 -1.180098  0.2913
treatmentC:session  -0.091602  1.729462 800 -0.052966  0.9821
treatmentD:session   3.072027  1.727357 800  1.778455  0.0712

To determine significance, do I need to adjust the p-values of these comparisons for multiple testing, or can I report them as they are? And if an adjustment is needed, would this count as 4 comparisons (each treatment vs placebo)?
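
In case it helps, this is roughly what I had in mind if an adjustment is needed. It is only a sketch, assuming the comparisons of interest are the four treatment:session interaction terms; the row names are taken from the summary output above, and I could just as well use the treatment main-effect rows instead.

tab <- summary(model)$tTable   # fixed-effects table of the lme fit shown above
rows <- c("treatmentA:session", "treatmentB:session",
          "treatmentC:session", "treatmentD:session")
p_raw  <- tab[rows, "p-value"]
p_holm <- p.adjust(p_raw, method = "holm")        # Holm: valid whenever Bonferroni is, less conservative
p_bonf <- p.adjust(p_raw, method = "bonferroni")
cbind(raw = p_raw, holm = p_holm, bonferroni = p_bonf)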

  • Are those planned contrasts (they appear to be) or is it a post-hoc decision to compare these treatments? – Roland Jul 30 '18 at 11:17
  • The hypothesis is that treatments A and B are better than the placebo treatment. For C and D I did not expect any significant differences. I am not interested in comparing A with B, for example.

    I don't exactly understand what you mean by a "post-hoc decision". Do you mean there was no prior hypothesis and I compared groups at random?

    –  Jul 31 '18 at 09:05
  • https://stats.stackexchange.com/a/342709/11849 – Roland Jul 31 '18 at 09:11
  • Thank you for the link. In this case my approach is a planned comparison then, because I would have tested all 4 groups against placebo anyway, no matter what the data look like. Though I am still unsure how to deal with the resulting p-values. –  Jul 31 '18 at 09:31
  • Personally, I wouldn't do a correction. But is this even really an issue? The treatment effects are mostly non-significant anyway; only one interaction is slightly significant. At most I would draw the conclusion that treatments A and D would be worth an investigation with more power. – Roland Jul 31 '18 at 10:11
  • Well, there might be a case where a correction changes a significant result to a non-significant one. That is why I'd like to know whether there are any general guidelines on when and how to adjust p-values in these models. Couldn't I say that in this case I conducted 4 tests (each group against placebo) and therefore have to account for that (e.g. with a Bonferroni correction), or is my idea of how to interpret the results wrong? (See the sketch after these comments for one way such an adjustment could look.) –  Jul 31 '18 at 12:37
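
A sketch of the alternative raised in the last comment: rather than a plain Bonferroni correction of the four p-values, the treatment-vs-placebo comparisons could be handled jointly with Dunnett-type contrasts via the multcomp package, assuming multcomp is available and that placebo is the reference level of treatment, as in the output above. Note that with the treatment:session interaction in the model these contrasts refer to the treatment main effects (i.e. group differences at session = 0), and mcp() may warn that the default contrasts can be inappropriate when interactions are present.

library(multcomp)   # assumption: multcomp installed; its glht() also accepts nlme::lme fits
dunnett <- glht(model, linfct = mcp(treatment = "Dunnett"))   # each treatment vs the placebo reference level
summary(dunnett)    # single-step adjusted p-values
confint(dunnett)    # simultaneous 95% confidence intervals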

0 Answers