
I have a question regarding a mixed model I am using: In a study, participants were presented with 40 different news article headlines and indicated for each headline whether they would share it or not (Yes coded as 1, No coded as 0). There are two binary within-subjects factors, “Accuracy” (true vs. false) and “Strategy” (attacks outgroup vs. praises ingroup). Further, there is a binary between-subjects factor, “Condition” (threat vs. neutral).

I wanted to run a generalized mixed model with crossed random effects for participants (id) and headlines (Headline) that includes sharing decision as a dependent variable and Accuracy, Strategy and Condition as independent variables. I have the following issue with that:

When I try to use a multilevel logistic regression with the following command, I am running into convergence issues:

mreg_P3_g <- glmer(
   Sharing_P3 ~ (1 | id) + (1 | Headline) + Strategy * Accuracy * Condition,
   data = df,
   family = binomial
)

I receive the following warning message:

Warning message:
In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge with max|grad| = 0.00577471 (tol = 0.002, component 1)

I have looked at some posts with similar issues but haven't found this exact warning message discussed so far. It would be great if I could get some help with interpreting the warning message and the limitations it imposes, and perhaps some advice on how to handle the issue (I am far from an expert).

Thank you so much in advance!

  • increase maxIterations –  Aug 04 '22 at 13:24
  • Possible duplicate: https://stats.stackexchange.com/questions/489580/model-not-singular-but-doesnt-converge-what-could-be-the-reason-lme4-in-r – mkt Aug 04 '22 at 13:27
  • You could plot the results based on various optimizers. If the results are consistent across optimizers, they would be more trustworthy. A function and a code-through is available at: https://pablobernabeu.github.io/2021/a-new-function-to-plot-convergence-diagnostics-from-lme4-allfit/ – Pablo Bernabeu Jun 24 '23 at 11:12

1 Answer


It's covered in this FAQ:

Most of the current advice about troubleshooting lme4 convergence problems can be found in the help page ?convergence. That page explains that the convergence tests in the current version of lme4 (1.1-11, February 2016) generate lots of false positives. We are considering raising the gradient warning threshold to 0.01 in future releases of lme4. In addition to the general troubleshooting tips above:

  • double-check the Hessian calculation with the more expensive Richardson extrapolation method (see examples)
  • restart the fit from the apparent optimum, or from a point perturbed slightly away from the optimum (getME(model,c("theta","beta")) should retrieve the parameters in a form suitable to be used as the start parameter)
  • a common error is to specify an offset to a log-link model as a raw searching-effort value, i.e. offset(effort) rather than offset(log(effort)). While the intention is to fit a model where counts $\propto$ effort, specifying offset(effort) leads to a model where counts $\propto$ exp(effort) instead; exp(effort) is often a huge (and model-destabilizing) number.
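To make the second tip concrete, here is a minimal R sketch (assuming the warning-producing model object `mreg_P3_g` from the question is still in the workspace) that restarts the optimization from the apparent optimum. Note that for `glmer` the `start` list needs components named `theta` and `fixef`:

```r
library(lme4)

## Assumes mreg_P3_g is the non-converged fit from the question.
## Extract the parameters at the apparent optimum ...
ss <- getME(mreg_P3_g, c("theta", "fixef"))

## ... and restart from there, also allowing more function evaluations.
mreg_P3_g2 <- update(
  mreg_P3_g,
  start   = ss,
  control = glmerControl(optCtrl = list(maxfun = 2e4))
)
```

If the restarted fit converges cleanly to essentially the same estimates, the original warning was most likely a false positive.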

More generally, the folk theorem of computational statistics applies: when you have computational problems, there is often a problem with your model, so first check whether your model really makes sense for the data.
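As one of the comments suggests, another practical check is to refit the model with every available optimizer via lme4's `allFit()` and compare the results; if the estimates agree to several decimal places across optimizers, the warning is very likely a false positive. A sketch, again assuming the `mreg_P3_g` object from the question:

```r
library(lme4)

## Refit mreg_P3_g with all available optimizers.
aa <- allFit(mreg_P3_g)
ss <- summary(aa)

ss$msgs    # convergence messages per optimizer (NULL = clean fit)
ss$fixef   # fixed-effect estimates, one row per optimizer -- compare these
ss$llik    # log-likelihoods per optimizer
```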

Tim