I've recently started using LMMs (linear mixed models), which may give me better insight into my DV. However, I got contradictory results regarding whether one of my variables is significant or not.
The Us and Hed variables are continuous, and App and Category are both multicategorical (ordinal) variables (set as factors in R).
My lmer call:
library(lme4)
xxlmer <- lmer(Us ~ App + Hed + (1|Category), data = dataset)
Now I've noticed that lmer doesn't show p-values, and I've read that this is deliberate, for good reasons. However, I would like to calculate them for use in my thesis.
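For reference, a minimal check that the default summary reports t-values but no p-values:

summary(xxlmer)
# The fixed-effects table shows Estimate, Std. Error and t value,
# but no p-value column; lme4 omits it deliberately.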
First I found the following code, which translates the t-values to p-values (and for which I'm very grateful):
coefs <- data.frame(coef(summary(xxlmer)))
# use the normal distribution to approximate p-values from the t-values
coefs$p.z <- 2 * (1 - pnorm(abs(coefs$t.value)))
coefs
This successfully gives me p-values, which look like this:
Estimate Std..Error t.value p.z
(Intercept) 2.9044048 0.49348777 5.8854646 3.969374e-09
App1 0.1600932 0.21344810 0.7500335 4.532345e-01
App3 0.3825582 0.20096127 1.9036414 5.695690e-02
Hed 0.3417938 0.09047678 3.7776961 1.582858e-04
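As an illustration of why the normal approximation can be optimistic, here is a hedged sketch of the same conversion using a t distribution instead; the df value below is a purely hypothetical placeholder, since choosing the degrees of freedom is exactly what is contested for mixed models:

# Hedged sketch: same conversion via a t distribution.
# df.guess is a hypothetical placeholder, for illustration only.
df.guess <- 100
coefs$p.t <- 2 * pt(abs(coefs$t.value), df = df.guess, lower.tail = FALSE)
coefs

Because the t distribution has heavier tails than the normal, these p-values are always at least as large (more conservative) as the normal-approximation ones, and more so for small df.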
However, when I subsequently pass 'xxlmer' to stargazer, it also reports p-values. These are much more conservative, rendering most coefficients insignificant (not even one star, which corresponds to p < 0.1). I know there is a debate about calculating and using p-values, and that they were deliberately left out of lmer's output, but I had always assumed the difference wouldn't be this big.
Therefore, my question is: which of these outputs can I trust, and which should I therefore use?
Comments:

…xxlmer is, but you can get p-values from lmer if you install the lmerTest package. For simple designs these p-values are likely to be rather reliable. Your manual computation uses a z-test instead of a t-test, which is unjustified unless you have a huge sample size. Don't do that. – amoeba Oct 23 '17 at 20:50

…rstan::stan_lmer with the same formula, use some conservative priors, and you get real samples from the posterior distributions for each of the coefficients. You can translate those to credible intervals, 89% highest density intervals, whatever you wish, and be a Bayesian! – Gijs Oct 23 '17 at 20:57

…confint(modelName, method='boot') and directly get bootstrap confidence intervals. – usεr11852 Oct 23 '17 at 23:36

…help("pvalues"). Not being a purist, and generally mainly producing p-values to satisfy coauthors and reviewers, I'm usually happy with using the lmerTest package, which masks the lmer function and its summary and anova methods with a version that provides p-values. – Roland Oct 24 '17 at 06:05

…library(lmerTest) before fitting the model. – Roland Oct 24 '17 at 06:10

lmer itself does not yield any p-values. As I said, you need to install the lmerTest package, load it, and call lmer from that package. It adds p-values to the output. See the third answer in the thread you linked to. +1 to @Roland's comments. – amoeba Oct 24 '17 at 06:41

rstan::stan_lmer refers to the function stan_lmer that's part of the rstanarm package. The package can be found here: https://cran.r-project.org/web/packages/rstanarm/index.html – Gijs Oct 24 '17 at 08:32

@Gijs I'm not sure yet what the advantage is of the method you explained, but I've put it on my list to go and look up :) – Mischa Oct 24 '17 at 19:49

Did you try lmerTest? If so, did the p-values agree with whatever you got from stargazer? I don't have any experience with, or knowledge of, stargazer; that is why I am curious. – amoeba Oct 25 '17 at 08:31
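Following the suggestions in the comments above, a minimal sketch of the lmerTest route (plus the bootstrap confidence intervals usεr11852 mentions); xxlmer2 is just a hypothetical name for the refitted model:

library(lmerTest)  # masks lme4::lmer with a version that provides p-values
xxlmer2 <- lmer(Us ~ App + Hed + (1|Category), data = dataset)
summary(xxlmer2)   # fixed-effects table now includes Satterthwaite df and p-values

# bootstrap confidence intervals, per usεr11852's comment:
confint(xxlmer2, method = "boot")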