
I am conducting a GLMM with a random slope effect and would like to know whether this random slope is significant. To do this, I did two things:

  • First, I compared the full model containing the random slope against reduced models using likelihood ratio tests. The reduced models are one that drops the Tree.ID random effect entirely and one that includes Tree.ID only as a random intercept.

Here is my code:

library(glmmTMB)

# m1: full model, random slope of type within Tree.ID
m1 <- glmmTMB(fitness ~ Year + Plant*type + (1|Site.M) + (1|Site.F) + (type|Tree.ID),
              weights = No.OV, family = binomial, data = OV.sum1)

# m2: no random effect for Tree.ID
m2 <- glmmTMB(fitness ~ Year + Plant*type + (1|Site.M) + (1|Site.F),
              weights = No.OV, family = binomial, data = OV.sum1)

# m3: Tree.ID as a random intercept only
m3 <- glmmTMB(fitness ~ Year + Plant*type + (1|Site.M) + (1|Site.F) + (1|Tree.ID),
              weights = No.OV, family = binomial, data = OV.sum1)

# likelihood ratio tests between the nested models
anova(m2, m3, m1)

  • Second, I also calculated the marginal and conditional R2 for each model using the tab_model() function (a minimal sketch below).
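As a rough sketch of that step (assuming the models above are already fit; tab_model() comes from the sjPlot package):

library(sjPlot)
# side-by-side model summaries; the table footer reports marginal and conditional R2
tab_model(m1, m3)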

The anova result shows that model 1 is significantly better than models 2 and 3:

        AIC    BIC   logLik deviance   Chisq Chi Df Pr(>Chisq)
m2   6643.3 6676.8  -3312.7   6625.3
m3   4786.4 4823.6  -2383.2   4766.4 1858.89      1  < 2.2e-16 ***
m1   4350.9 4406.7  -2160.5   4320.9  445.53      5  < 2.2e-16 ***

However, when comparing the variance explained by each model, model 3 has the higher conditional R2:

          Marginal R2   Conditional R2
model 3:   0.053            0.279
model 1:   0.045            0.251

I couldn't understand this and am wondering whether model 1, with the random slope, is still the best model. Could anyone help with this?

Fanfoué
    The last table shows that model 3 has higher "marginal $R^2$" than model 1, but lower "conditional $R^2$," with the latter including the variance explained by the random effect. Please say more about your concerns about choosing model 1 as "the best model." – EdM Aug 16 '20 at 13:29
  • Sorry, I mixed up the table results; see the new edits. So model 3 has a higher conditional $R^2$ than model 1, but model 1 has the lowest AIC and BIC. That's why I am confused and not sure how to argue that model 1 is the best model. – Linyi Zhang Aug 17 '20 at 14:08

1 Answer


I would be reluctant to put too much weight on the marginal and conditional $R^2$ values, particularly with a logistic regression. See this answer for the reasons why. The chi-square test based on deviance would seem to be the most reliable choice.
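For illustration, a minimal sketch of that deviance-based comparison for the random slope, assuming the m1 and m3 fits from the question are available; it should match what anova(m3, m1) reports:

# chi-square statistic: twice the difference in log-likelihood (i.e. the drop in deviance)
lrt_stat <- as.numeric(2 * (logLik(m1) - logLik(m3)))
# degrees of freedom: difference in the number of estimated parameters
lrt_df <- attr(logLik(m1), "df") - attr(logLik(m3), "df")
# p-value for the likelihood ratio test
pchisq(lrt_stat, df = lrt_df, lower.tail = FALSE)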

EdM
  • Is the gist that $R^2$ deals with square loss while logistic regression aims to minimize log loss? – Dave Mar 17 '22 at 19:41
  • @Dave I think the bigger problem is just what $R^2$ means for a model fit by (RE)ML. Or perhaps put more precisely, the multiple things it might mean. It isn't a square loss per se for such models. It's an attempt to find a measure for such models that bears a similar relationship to likelihood as the square loss in an OLS model captured by $R^2$ bears to likelihood. The page to which I link, and the links from that page, provide much more expert commentary than I can. – EdM Mar 17 '22 at 19:53