I was under the impression that the function lmer() in the lme4 package didn't produce p-values (see lmer, p-values and all that).
I've been using MCMC-generated p-values instead, as per this question: Significant effect in lme4 mixed model and this question: Can't find p-values in the output from lmer() in the lm4 package in R.
Recently I tried a package called memisc and its getSummary.mer() function to get the fixed effects of my model into a CSV file. As if by magic, a column called p appears which matches my MCMC p-values extremely closely (and doesn't suffer the processing time that comes with using pvals.fnc()).
I've tentatively had a look at the code in getSummary.mer and have spotted the line that generates the p-value:
p <- (1 - pnorm(abs(smry@coefs[, 3]))) * 2
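For what it's worth, the line above can be reproduced directly from an lmer fit. A minimal sketch (using lme4's built-in sleepstudy data purely for illustration; note that smry@coefs is the older lme4 S4 slot, and in current lme4 the same table is obtained with coef(summary(fit))):

```r
library(lme4)

# Example fit on lme4's bundled sleepstudy data (illustrative only)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# Column 3 of the coefficient table holds the t values,
# i.e. the same column getSummary.mer() reads from smry@coefs
tvals <- coef(summary(fit))[, 3]

# Two-sided p-values from treating each t as a standard normal (Wald) z
p <- 2 * (1 - pnorm(abs(tvals)))
p
```

This is exactly the computation in getSummary.mer(): the t statistic is treated as if it were a z statistic, so no degrees of freedom are needed.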
Does this mean p-values can be generated directly from lmer's output rather than by running pvals.fnc()? I realise this will no doubt start the 'p-value fetishism' debate, but I'm interested to know. I've not heard memisc mentioned before when it comes to lmer.
To be more succinct: What is the benefit (if any) of using MCMC p-values over those generated by getSummary.mer()?
… getSummary.mer function. The reported p-values should only be used as a quick check. If I recall, I actually only included the p-values to make it work within the framework provided by memisc. But this should really be provided with an appropriate warning to the user, and I will contact the package maintainer to see about getting this added. My advice is to follow that provided by Doug Bates: MCMC is the safe bet (assuming others don't have better options). – Jason Morgan Sep 03 '13 at 00:55

mcmcsamp() is not available because of a number of issues (one can check the Status of mcmcsamp section in glmm.wikidot.com/faq for more details). I feel that at the moment, (parametric?) bootstrapping is probably a viable, and not too hard to implement, alternative; the bootMer() function can be of service. – usεr11852 Sep 03 '13 at 03:10

The p-values from memisc are the p-values from treating the observed test statistics as Wald statistics (treating the t as a Wald z in this case). Such a test relies on the "large sample" assumption and so is more and more trustworthy as your sample sizes grow larger. The MCMC-based value, to my knowledge, does not rely on such an assumption. So anyway, reading a little about Wald tests and alternatives to them could help to shed further light on your question. – Jake Westfall Sep 03 '13 at 06:47
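Following up on the bootMer() suggestion in the comments, here is a minimal sketch of a parametric bootstrap of the fixed effects (the sleepstudy data and nsim = 200 are arbitrary choices for illustration; in practice you would want more simulations):

```r
library(lme4)

# Example fit on lme4's bundled sleepstudy data (illustrative only)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# Parametric bootstrap: simulate from the fitted model, refit,
# and collect the fixed-effect estimates each time
bb <- bootMer(fit, FUN = fixef, nsim = 200, type = "parametric")

# Percentile confidence intervals from the bootstrap replicates
ci <- apply(bb$t, 2, quantile, probs = c(0.025, 0.975), na.rm = TRUE)
ci
```

An interval excluding zero plays the same role as a small p-value, without the large-sample normality assumption behind the Wald calculation (though the bootstrap has its own assumptions, e.g. that the fitted model generating the simulations is correct).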