
Background

I am running a mixed linear model with four fixed predictors and (most of) their interactions. I am quite new to these models and I am trying to understand how I should interpret my results and whether I am using the correct approach to generate them.

My Model

My model contains two factor variables: (1) Group_Variable with three levels: 'Group_1_H', 'Group_2_L','Group_3_T'; (2) Variable_C with two levels: 'level1_O','Level2_S'.

And it contains two continuous variables: Variable_A and Variable_B. The dependent variable is continuous.

My prediction is that there will be an interaction between Group_Variable and Variable_C, whereby Group_1_H will show a significantly different score on the outcome variable in condition 'level1_O' compared to 'Level2_S', relative to the other two groups.

Here is the model specification:

model <- lmerTest::lmer(Outcome_Variable ~ Group_Variable * Variable_A * Variable_C +
    Group_Variable * Variable_B * Variable_C + (1 | Variable_ID),
    data = crossvalidate)

Here is my output when I use the summary() function:

> summary(model)
Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: Outcome_Variable ~ Group_Variable * Variable_A * Variable_C +      Group_Variable * Variable_B * Variable_C + (1 | Variable_ID)
   Data: crossvalidate

REML criterion at convergence: -2102.4

Scaled residuals: 
    Min      1Q  Median      3Q     Max 
-7.2535 -0.4783  0.0113  0.5460  4.3982 

Random effects:
 Groups      Name        Variance Std.Dev.
 Variable_ID (Intercept) 0.01559  0.1249  
 Residual                0.03115  0.1765  
Number of obs: 3950, groups:  Variable_ID, 88

Fixed effects:
                                                        Estimate Std. Error        df t value Pr(>|t|)    
(Intercept)                                            1.939e-01  5.085e-02 1.181e+03   3.813 0.000144 ***
Group_VariableGroup_2_L                               -9.593e-02  6.816e-02 1.140e+03  -1.407 0.159566    
Group_VariableGroup_3_T                               -5.413e-02  6.571e-02 1.017e+03  -0.824 0.410242    
Variable_A                                            -1.011e-02  1.155e-02 3.851e+03  -0.875 0.381359    
Variable_CLevel2_S                                    -4.811e-02  5.423e-02 3.853e+03  -0.887 0.375013    
Variable_B                                             8.054e-02  8.273e-03 3.862e+03   9.736  < 2e-16 ***
Group_VariableGroup_2_L:Variable_A                     2.542e-02  1.542e-02 3.851e+03   1.648 0.099371 .  
Group_VariableGroup_3_T:Variable_A                     3.246e-02  1.493e-02 3.851e+03   2.175 0.029705 *  
Group_VariableGroup_2_L:Variable_CLevel2_S             4.883e-02  7.282e-02 3.852e+03   0.671 0.502523    
Group_VariableGroup_3_T:Variable_CLevel2_S             2.971e-02  7.060e-02 3.851e+03   0.421 0.673929    
Variable_A:Variable_CLevel2_S                          3.740e-02  1.397e-02 3.850e+03   2.678 0.007441 ** 
Group_VariableGroup_2_L:Variable_B                     1.457e-02  1.119e-02 3.860e+03   1.303 0.192717    
Group_VariableGroup_3_T:Variable_B                     2.173e-02  1.092e-02 3.860e+03   1.990 0.046710 *  
Variable_CLevel2_S:Variable_B                          1.579e-02  1.005e-02 3.856e+03   1.570 0.116423    
Group_VariableGroup_2_L:Variable_A:Variable_CLevel2_S -2.434e-02  1.875e-02 3.850e+03  -1.298 0.194355    
Group_VariableGroup_3_T:Variable_A:Variable_CLevel2_S -4.374e-02  1.836e-02 3.850e+03  -2.382 0.017255 *  
Group_VariableGroup_2_L:Variable_CLevel2_S:Variable_B -1.081e-02  1.363e-02 3.855e+03  -0.793 0.427704    
Group_VariableGroup_3_T:Variable_CLevel2_S:Variable_B  7.997e-03  1.344e-02 3.854e+03   0.595 0.551886    


Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

And here is the output when I use the Anova function from the car package:

> car::Anova(model)
Analysis of Deviance Table (Type II Wald chisquare tests)

Response: Outcome_Variable
                                         Chisq Df Pr(>Chisq)    
Group_Variable                          6.3807  2   0.041157 *  
Variable_A                             31.3062  1  2.204e-08 ***
Variable_C                            110.3747  1  < 2.2e-16 ***
Variable_B                           1553.9462  1  < 2.2e-16 ***
Group_Variable:Variable_A               1.0300  2   0.597500    
Group_Variable:Variable_C              26.7630  2  1.543e-06 ***
Variable_A:Variable_C                   2.8659  1   0.090477 .  
Group_Variable:Variable_B              18.5980  2  9.152e-05 ***
Variable_C:Variable_B                   7.7037  1   0.005511 ** 
Group_Variable:Variable_A:Variable_C    5.6792  2   0.058450 .  
Group_Variable:Variable_C:Variable_B    2.1613  2   0.339370    


Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

And the post hoc pairwise contrasts:

Group_Variable_pairwise Variable_C_pairwise estimate SE df t.ratio p.value
Group_3_T - Group_1_H level1_O - Level2_S 0.0767201 0.0156317 3889.770 4.907988 0.0000029
Group_3_T - Group_2_L level1_O - Level2_S 0.0156633 0.0148981 3889.107 1.051362 0.5444832
Group_1_H - Group_2_L level1_O - Level2_S -0.0610568 0.0163169 3898.048 -3.741947 0.0005430
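For context, contrasts of this shape are typically produced with the emmeans package. A minimal sketch of how the table above might have been generated, assuming the model fitted earlier and that emmeans is installed (the exact call is an assumption on my part, not taken from the original post):

```r
library(emmeans)

# Estimated marginal means for each Group_Variable x Variable_C cell,
# averaged over the continuous covariates Variable_A and Variable_B
emm <- emmeans(model, ~ Group_Variable * Variable_C)

# Pairwise-by-pairwise interaction contrasts: does the
# level1_O - Level2_S difference itself differ between pairs of groups?
contrast(emm, interaction = "pairwise", adjust = "tukey")
```

Each row of the resulting table is a difference of differences, which is why the contrasts are labelled with both a group pair and a Variable_C pair.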

Questions

I have some questions based on this.

  1. Is the Anova() function appropriate here? From what I have read online, people use the anova() function to compare models, to find out whether adding a variable of interest significantly improves model fit (e.g. "Model interpretation in R (anova vs summary output)"). Can I also use car::Anova() just to summarise the single model I have specified and interpret the results?

  2. Why are the p values for the Group_Variable:Variable_C interaction different between the summary and Anova outputs? From what I have read, the Anova gives an omnibus test (i.e. it tests several parameters at once), whereas the summary tests individual coefficients. However, I would still expect a significant result for 'Group_VariableGroup_3_T:Variable_CLevel2_S' in the summary, as this is (I think?) essentially the same comparison as Group_3_T vs Group_1_H in the post hoc comparison table.
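The distinction in question 2 can be illustrated with a toy example in base R (ordinary lm rather than lmer, purely for simplicity; the data and all names here are made up). The omnibus interaction test pools both group dummy-coefficients into a single 2-df test, while summary() tests each dummy contrast against the reference group separately, so the two can disagree:

```r
# Hypothetical toy data: three groups x two conditions, 30 replicates each
set.seed(1)
d <- expand.grid(group = c("G1", "G2", "G3"), cond = c("O", "S"), rep = 1:30)

# Only G1 responds to condition, so the interaction is driven by one group
mu  <- with(d, ifelse(group == "G1" & cond == "S", 0.5, 0))
d$y <- mu + rnorm(nrow(d), sd = 1)

fit  <- lm(y ~ group * cond, data = d)   # full model with interaction
fit0 <- lm(y ~ group + cond, data = d)   # main effects only

summary(fit)$coefficients   # per-coefficient t-tests (each dummy vs reference G1)
anova(fit0, fit)            # single 2-df omnibus test of the whole interaction
```

The omnibus row has 2 degrees of freedom because it tests both group:cond coefficients jointly, whereas each summary() row tests one contrast with the reference level at a time.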

  • Please type your question as text, do not just post a photograph or screenshot (see here). When you retype the question, add the [tag:self-study] tag & read its wiki. Then tell us what you understand thus far, what you've tried & where you're stuck. We'll provide hints to help you get unstuck. Please make these changes as just posting your homework & hoping someone will do it for you is grounds for closing. – kjetil b halvorsen Oct 30 '21 at 13:45
  • Thanks for your feedback to improve my question. I have now replaced the screenshots with code. The self study tag is not appropriate here as this is not a homework question, it is related to my research. – Zcjth84 Nov 01 '21 at 09:37

0 Answers