I'm fitting a generalized linear mixed model (GLMM) and am comparing the Laplace approximation to adaptive Gauss-Hermite quadrature (GHQ). I have three predictor variables (fixed effects), each a number between $0$ and $1$. My response variable is binary ($0$ or $1$).
The Laplace approximation gives me a very large variance for my random effect (individual animal) with the following:
- Variance of $1822$
- SD of $42.69$
- Number of obs: $231$
- Groups: $195$
I have three fixed effects; the estimates for the Laplace model are numbers like $-6.6$, $3.9$, and $-1.1$. The GHQ model gives a random-effect variance of $38.02 \pm 6.2$, and fixed-effect estimates like $-9.6$, $0.44$, and $-2.77$.
Another important point: under both Laplace and GHQ, some of my models fail to converge. I am currently using the following control settings:
```r
# tolPwrss is the PIRLS tolerance argument; `tol` is not a glmerControl argument
cntrl <- glmerControl(optimizer = "bobyqa", tolPwrss = 1e-4,
                      optCtrl = list(maxfun = 100000))
```
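For context, here is a minimal sketch of how the two fits being compared might be specified in `lme4`; the formula, data frame name (`dat`), and variable names (`pred1`–`pred3`, `animal`) are placeholders for my actual model:

```r
library(lme4)

cntrl <- glmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 1e5))

# Laplace approximation (nAGQ = 1 is the default for glmer)
m_laplace <- glmer(response ~ pred1 + pred2 + pred3 + (1 | animal),
                   data = dat, family = binomial,
                   control = cntrl, nAGQ = 1)

# Adaptive Gauss-Hermite quadrature with 25 nodes
# (nAGQ > 1 requires a single scalar random effect, as here)
m_ghq <- update(m_laplace, nAGQ = 25)

# Compare the estimated random-effect variances
VarCorr(m_laplace)
VarCorr(m_ghq)
```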
Is it 'bad' to have such a large variance for the individual random effect, as in the Laplace model? Does the fact that my predictor variables are bounded between $0$ and $1$ make a large result like this even 'worse'?
*Comment:* "`GLMMadaptive` (as suggested in an answer) is also a good idea." – Ben Bolker Sep 20 '23 at 23:54