If I fit a linear mixed model with the lme() function (nlme package) and get results like these (comparing the score of 4 treatment groups against a placebo group):
> library(nlme)
> summary(model <- lme(score ~ treatment * session + score_baseline, data, random = ~ 1 + session | ID, na.action = na.exclude, method = "REML"))
Linear mixed-effects model fit by REML
Data: data
       AIC      BIC    logLik
  8356.717 8468.464 -4155.358

Random effects:
 Formula: ~1 + session | ID
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev    Corr
(Intercept) 19.711759 (Intr)
session      4.814042 -0.404
Residual    13.233045

Fixed effects: score ~ treatment * session + score_baseline
                        Value Std.Error  DF   t-value p-value
(Intercept)          8.646642  6.976798 800  1.239342  0.2421
treatmentA         -13.230278  7.479517 152 -1.768868  0.0699
treatmentB           6.329995  7.532880 152  0.840315  0.5021
treatmentC          -3.114359  7.494865 152 -0.415532  0.7011
treatmentD         -12.449960  7.475326 152 -1.665474  0.0892
session             12.722259  0.878443 800 14.482734  0.0000
score_baseline       1.108290  0.049313 152 22.474487  0.0000
treatmentA:session   4.399646  1.727357 800  2.547039  0.0132
treatmentB:session  -2.038450  1.727357 800 -1.180098  0.2913
treatmentC:session  -0.091602  1.729462 800 -0.052966  0.9821
treatmentD:session   3.072027  1.727357 800  1.778455  0.0712

To determine significance, do I need to adjust the p-values of these comparisons for multiple testing, or can I report them as is? If an adjustment is needed, would it be for 4 comparisons in this case (each treatment vs. placebo)?
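If an adjustment is needed, would something along these lines be the way to do it? Just a sketch using stats::p.adjust, with Holm chosen purely as an example; here I pull the four treatment-by-session interaction p-values out of the summary, but the same idea would apply to the treatment main effects.

# sketch: adjust the four treatment-vs-placebo interaction p-values
fixed <- summary(model)$tTable
p_raw <- fixed[grep(":session$", rownames(fixed)), "p-value"]
p_raw                            # unadjusted p-values, as in the table above
p.adjust(p_raw, method = "holm") # Holm-adjusted across the 4 comparisons

Or, since these are all comparisons against a control, would a Dunnett-type approach (e.g. via emmeans or multcomp on the fitted model) be more appropriate than adjusting the p-values from the summary table?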
I don't exactly understand what you mean by "post-hoc decision". Do you mean that there was no prior hypothesis and I am just comparing groups arbitrarily?