
I'm running a likelihood ratio test to check whether a condition has a significant effect on the outcome for subjects in an experiment, and I'm using lmer (from lme4) in R to do this. So I run something like

> library(lme4)
> mod <- lmer(outcome ~ 1 + (1 | subject_id), data = df)
> mod2 <- lmer(outcome ~ 1 + (1 | subject_id) + condition, data = df)
> anova(mod, mod2)

Which results in something like

     npar     AIC     BIC  logLik deviance  Chisq Df Pr(>Chisq)
mod     3 -6020.6 -6002.0  3013.3  -6026.6                     
mod2    4 -6019.5 -5994.7  3013.7  -6027.5 0.9067  1     0.3410
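For context on where Pr(>Chisq) comes from: anova() takes twice the gain in log-likelihood as the test statistic and reports the area to its right under a χ² distribution (in R, pchisq(0.9067, df = 1, lower.tail = FALSE)). A minimal sketch of that arithmetic, using the values from the table above and the closed-form 1-df upper tail (written in Python rather than R purely for illustration):

```python
from math import erfc, sqrt

# Log-likelihoods from the anova() table above
ll_null = 3013.3   # mod:  intercept-only model
ll_alt  = 3013.7   # mod2: model including the condition term

# LRT statistic: twice the gain in log-likelihood
# (~0.8 here; the table's 0.9067 is computed before rounding)
chisq = 2 * (ll_alt - ll_null)

def chi2_sf_1df(x):
    # Upper-tail (right-hand) area of a chi-squared distribution
    # with 1 df: P(X > x) = erfc(sqrt(x / 2))
    return erfc(sqrt(x / 2))

p = chi2_sf_1df(0.9067)   # ~0.341, matching Pr(>Chisq)
```

Only the upper tail of the χ² distribution contributes to the p-value, which is the crux of the one-sided vs. two-sided question.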

My question is whether the P-value I obtain this way is one-sided or two-sided.

  • Welcome to stats SE; although your question is not necessarily a duplicate, I think you will find your answer here https://stats.stackexchange.com/questions/22347/is-chi-squared-always-a-one-sided-test and here https://stats.stackexchange.com/questions/67543/why-do-we-use-a-one-tailed-test-f-test-in-analysis-of-variance-anova?noredirect=1&lq=1 – rep_ho Mar 01 '23 at 11:55

0 Answers