I am running a piecewise glmer (logistic) model, and the variance estimates I am getting for the random effects are very large:
Random effects:
 Groups Name        Variance  Std.Dev. Corr       
 id     (Intercept) 5.947e-02  0.2439             
        T1          2.813e+03 53.0372   0.09      
        T2          2.817e+03 53.0762  -0.07 -1.00
Number of obs: 196, groups:  id, 50
I have tried to narrow the issue down, and it only seems to occur when I include one specific covariate (c2). Without this covariate, the random effects are more in line with what I would expect for this model (note also the larger number of observations: c2 appears to contain missing values, so rows are dropped when it is included):
Random effects:
 Groups Name        Variance Std.Dev. Corr       
 id     (Intercept) 0.6838   0.8269              
        T1          0.3065   0.5536   -0.76      
        T2          9.6504   3.1065    0.25 -0.82
Number of obs: 247, groups:  id, 63
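For reference, this is how I obtain that second set of estimates (a minimal sketch; `model` is the full fit defined further down, and `update()` simply drops c2 from the fixed effects):

# Refit without c2 and inspect the random-effect variances:
model_no_c2 <- update(model, . ~ . - c2)
VarCorr(model_no_c2)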
Keeping this covariate in the model is desirable, as it is conceptually important and is used in many similar models in the existing literature.
A similar question touched on this issue, but in my opinion it was not answered satisfactorily: the answers there pertain to the unit of measurement rather than to the reason for such large variance estimates.
My model is:
model <- glmer(y ~ T1 + T2 +
                 # predictors
                 x1 + x2 + x3 +
                 # covariates
                 c1 +
                 c2 + # covariate affecting random effects
                 c3 +
                 c4 +
                 c5 +
                 (T1 + T2 | id),
               data = d, family = binomial(link = "logit"),
               control = glmerControl(optimizer = "nlminbwrap"))
This issue is largely fixed by setting nAGQ = 0, though according to answers here that option is not preferable if there is another way to fix the model. So I would like to understand why this occurs, and why only with the inclusion of one specific covariate.
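For completeness, this is the workaround and the check I have been running (a sketch; `isSingular()` is lme4's diagnostic for degenerate variance-covariance estimates, which the -1.00 correlation between the T1 and T2 random slopes above suggests):

# Refit with the faster but cruder nAGQ = 0 approximation:
model_nagq0 <- update(model, nAGQ = 0)

# lme4's check for a (near-)singular fit of the full model:
isSingular(model, tol = 1e-4)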
A reproducible data frame is below (the values have been scaled, as warnings were thrown otherwise). I have been using lmerTest for the analysis.
d <- read.table("https://pastebin.com/raw.php?i=mz4c7qBp")
colnames(d) <- c("id", "y", "T1", "T2", "x1", "x2", "x3",
                 "c1", "c2", "c3", "c4", "c5")
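In case it is relevant, this is roughly how I sanity-check the scaling and the missingness (a sketch, assuming all columns except id are numeric):

# Are the columns on comparable scales, and does c2 contain missing values
# (which would explain the differing obs counts above, 196 vs 247)?
sapply(d[, -1], sd, na.rm = TRUE)
sum(is.na(d$c2))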
Any help is greatly appreciated!

