In a mixed model analysis (lme4 + lmerTest in R), I want to analyse the effect of three predictors, say A, B and C. The model also includes two random effects, Ran1 and Ran2.
I first built a random intercept model with Ran1 and Ran2, but without fixed terms:
mod.0 <- lmer(outcome ~ 1 + (1|Ran1) + (1|Ran2), data = mydata)
The result (fixed part) is the following:
Fixed effects:
            Estimate Std. Error       df t value Pr(>|t|)
(Intercept)  2.92381    0.07787 35.28000   37.55   <2e-16 ***
I then built a random intercept model that adds the fixed terms A, B and C:
mod.1 <- lmer(outcome ~ A + B + C + (1|Ran1) + (1|Ran2), data = mydata)
The result (fixed part) is the following:
Fixed effects:
              Estimate Std. Error        df t value Pr(>|t|)
(Intercept)  3.255e+00  8.476e-02 5.000e+01  38.400  < 2e-16 ***
A           -1.482e-01  2.639e-02 5.671e+04  -5.617 1.95e-08 ***
B            3.495e-01  2.462e-02 5.971e+04  14.195  < 2e-16 ***
C           -2.083e-01  1.873e-02 3.942e+04 -11.124  < 2e-16 ***
I computed $R^2$ for both models with the following function:
r2.mer <- function(m) {
  # R^2 as the squared correlation between the observed response
  # and the model's fitted values
  lmfit <- lm(model.response(model.frame(m)) ~ fitted(m))
  summary(lmfit)$r.squared
}
mod.0 has $R^2 =$ 0.6187513 and mod.1 has $R^2 =$ 0.6251295, so adding the fixed terms barely changes the model $R^2$.
I also used a more detailed $R^2$ computation to compare the two models and to obtain the marginal and conditional $R^2$ (https://github.com/jslefche/rsquared.glmer).
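(As an aside, the marginal/conditional $R^2$ of Nakagawa & Schielzeth can also be obtained from the MuMIn package; a minimal sketch, assuming mod.1 has been fitted as above and MuMIn is installed:)

```r
library(MuMIn)

# r.squaredGLMM() returns the marginal R2 (variance explained by the
# fixed effects alone) and the conditional R2 (fixed + random effects)
r.squaredGLMM(mod.1)
```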
Running the following command:
rsquared.glmm(list(mod.0, mod.1))
gives the following result:
           Class   Family     Link   Marginal Conditional      AIC
1 merModLmerTest gaussian identity 0.00000000   0.5814522 300654.6
2 merModLmerTest gaussian identity 0.00555211   0.5691487 129177.1
This is in line with the previous result: for mod.1, the fixed terms account for only 0.00555 of the total variance (marginal $R^2$).
As I said at the beginning, I am interested in analysing the effects of A, B and C. As you can see, the effects are significant, although the effect sizes (beta values) are small; the significance is largely driven by the large number of observations.
In this case, does it make sense to report that A and C have negative effects (beta = -0.14 and -0.21) and that B has a positive effect (beta = 0.345), even though the $R^2$ contribution of these fixed terms is really small? Do you have a better interpretation of the results?
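(With this many observations, the confidence intervals on the fixed effects should be quite narrow, which is worth reporting alongside the betas; a quick sketch, assuming mod.1 as fitted above. Wald intervals are fast; lme4's default profile intervals are slower but more accurate.)

```r
# 95% confidence intervals for the fixed effects of mod.1;
# parm = "beta_" restricts output to the fixed-effect coefficients
confint(mod.1, parm = "beta_", method = "Wald", level = 0.95)
```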
B seems to have a less-than-tiny effect size at least. I don't see any harm in reporting these effects if they're of interest. With a sample size this tremendous, you can get some pretty sharp confidence intervals on those estimates. That's gotta be worth something, right? – Nick Stauner Apr 02 '14 at 09:46

Remember to fit the models with maximum likelihood (REML=FALSE) when comparing the AIC of the two models. – Andrew M Dec 15 '14 at 23:53
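(Illustrating the last comment's advice: AIC comparisons should be made on models fitted by maximum likelihood rather than REML; a sketch, assuming mod.0 and mod.1 as fitted above:)

```r
# Refit both models with ML (REML = FALSE) before comparing AIC
mod.0.ml <- update(mod.0, REML = FALSE)
mod.1.ml <- update(mod.1, REML = FALSE)
AIC(mod.0.ml, mod.1.ml)

# anova() on merMod objects refits with ML automatically and also
# gives a likelihood-ratio test of the fixed terms
anova(mod.0, mod.1)
```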