I'm running some regression models in R using brms and lme4.
When I run a Bayesian model:
priors.mod.1 <- c(set_prior("normal(0, 1)", class = "Intercept"),
                  set_prior("normal(0, 1)", class = "b"),
                  set_prior("cauchy(0, 1)", class = "sd"),
                  set_prior("cauchy(0, 1)", class = "sigma"),
                  set_prior("dirichlet(1, 1)", class = "simo", coef = "motest.time.num1"),
                  set_prior("dirichlet(1, 1)", class = "simo", coef = "motest.time.num:training.dum1"),
                  set_prior("dirichlet(1, 1)", class = "simo", coef = "motest.time.num:Condition.dum1"),
                  set_prior("dirichlet(1, 1)", class = "simo", coef = "motest.time.num:training.dum:Condition.dum1"),
                  set_prior("lkj(2)", class = "cor"))
maximal.mod.ln <- brm(RT ~ mo(test.time.num) * training.dum * Condition.dum +
                        (mo(test.time.num) * training.dum | Ppt.No),
                      family = gaussian(),
                      chains = 4,
                      iter = 2000,
                      warmup = 1000,
                      seed = 1234,
                      prior = priors.mod.1,
                      save_pars = save_pars(all = TRUE),
                      data = dat)
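For context, here is a prior-predictive check I could run to see what these priors imply for RT on its raw millisecond scale before the data are involved (a sketch reusing the formula and priors above; setting sample_prior = "only" makes brm sample from the priors instead of the posterior):

```r
# Sketch: sample from the priors alone (no likelihood) to see what
# they imply for RT on its raw millisecond scale.
prior.only <- brm(RT ~ mo(test.time.num) * training.dum * Condition.dum +
                    (mo(test.time.num) * training.dum | Ppt.No),
                  family = gaussian(),
                  chains = 4, iter = 2000, warmup = 1000, seed = 1234,
                  prior = priors.mod.1,
                  sample_prior = "only",   # ignore the data entirely
                  data = dat)
pp_check(prior.only)  # prior predictive draws vs the observed RTs
```

If the prior predictive draws sit nowhere near the observed RTs, that would suggest the priors are on the wrong scale for a millisecond outcome.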
My population level estimates look like this:
However, when I run a frequentist model:
mod <- lmer(RT ~ test.time.num * training.dum * Condition.dum +
              (test.time.num * training.dum | Ppt.No),
            data = dat)
they look like this:
Does anybody know why the estimates are so different (my random-effect estimates are also wildly different in scale)? My outcome variable (RT) is measured in milliseconds, and the output from the frequentist model looks as if the coefficients are in the outcome variable's units. I have tried a variety of prior distributions, but the Bayesian results seem robust.
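To show what I mean about scale, here is a sketch of a rescaling I could try so that the normal(0, 1) and cauchy(0, 1) priors are weakly informative rather than extremely tight for a millisecond outcome (RT.sec and RT.z are names I'm inventing for illustration):

```r
# Sketch: rescale RT so that normal(0, 1) / cauchy(0, 1) priors are
# weakly informative rather than very tight on a millisecond scale.
dat$RT.sec <- dat$RT / 1000              # milliseconds -> seconds
dat$RT.z   <- as.numeric(scale(dat$RT))  # or z-score the outcome

# e.g. refit the frequentist model on the z-scored outcome for comparison
mod.z <- lmer(RT.z ~ test.time.num * training.dum * Condition.dum +
                (test.time.num * training.dum | Ppt.No),
              data = dat)
```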
It looks as if the Bayesian coefficient estimates are standardised, and if I standardise my frequentist model coefficients, I get some similar-looking values:
But from what I've found online, I don't think standardisation is a default for brms models. Perhaps it is because I am modelling one of my variables as a monotonic effect in the Bayesian model? Could that cause my coefficient estimates to be standardised?
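For reference, the standardisation I applied is the usual beta_std = beta * sd(x) / sd(y). A minimal base-R sketch with toy data (x, y, and the simple lm() fit are illustrative, not my actual model):

```r
# Minimal sketch of by-hand coefficient standardisation:
# beta_std = beta * sd(x) / sd(y). Toy data, not my real RT data.
set.seed(1)
x <- rnorm(100, mean = 500, sd = 50)   # predictor on a "raw" scale
y <- 2 * x + rnorm(100, sd = 100)      # outcome on a "raw" scale
fit <- lm(y ~ x)

b.raw <- unname(coef(fit)["x"])
b.std <- b.raw * sd(x) / sd(y)

# Refitting on z-scored variables gives the same standardised slope
fit.z <- lm(scale(y) ~ scale(x))
isTRUE(all.equal(b.std, unname(coef(fit.z)[2])))
```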
Any help or advice would be most appreciated!



(1) iter = 2000 and warmup = 1000 is quite a short Markov chain. What happens if you set iter = 2 * 10^4 and warmup = 10^4? (2) Also, what is your sample size? It could just be the case that the prior is much stronger than the likelihood (because of small n?), so the posterior stays fairly close to the prior. – jcken Mar 30 '22 at 09:17