11

In this post, Andrew Gelman says:

Bayesian inference can make strong claims, and, without the safety valve of model checking, many of these claims will be ridiculous. To put it another way, particular Bayesian inferences are often clearly wrong, and I want a mechanism for identifying and dealing with these problems. I certainly don’t want to return to the circa-1990 status quo in Bayesian statistics, in which it was considered virtually illegal to check your model’s fit to data.

What exactly is Andrew Gelman referring to? What rationale would Bayesians give for considering model checking "illegal"? Isn't this view dogmatic and shortsighted, or are there scholars who still advocate it?

alberto
  • 3,056
  • I am certain that this very Gelman quotation has been discussed before on this site – off-hand I can't remember where, though perhaps the site search function can track it down. EDIT: try "Why is a Bayesian not allowed to look at the residuals?", where this exact quote is discussed in the comments, which is why it didn't show up in a site search. – Silverfish Jan 30 '15 at 19:04
  • (I also think Michael Chernick posted a first-hand anecdote about this somewhere, but this time I really can't remember where!) – Silverfish Jan 30 '15 at 19:10
  • :) I just found that very same link on Google. Still, I don't think I get the point. I'd like to know how this 1990s Bayesian worked. E.g., what do you do if you do not check your model? You get your posterior and assume it's the truth? – alberto Jan 30 '15 at 19:21
  • Maybe just e-mail Andrew Gelman..? – Tim Feb 01 '15 at 19:36
  • First, I don't dare, since I don't know how flooded his e-mail is, and the question is not life-or-death but rather intellectual curiosity (though it might help me get a broader perspective on model checking). And second, I thought someone else might be interested in the answer. – alberto Feb 02 '15 at 13:17
  • Maybe this dogma stems from times when statisticians were regarded as either Bayesian or frequentist (and maybe it still persists a bit, which is what spawns these types of questions). The terms Bayesian and frequentist were attached to a person rather than to the analysis/problem at hand, so the type of statistical analysis was regarded as a (subjective) decision depending on whether you are a frequentist or a Bayesian. I believe it is nonsense to adhere to this, and that one should be able to mix it up. – Sextus Empiricus Jun 09 '20 at 09:03
  • But of course it is also bad to fiddle with your analysis a posteriori based on checks of the fit (which is true for both Bayesian and frequentist analyses). Maybe the status quo stemmed from that as well. Or maybe it is a combination. – Sextus Empiricus Jun 09 '20 at 09:05

2 Answers

4

When I learned Bayesian statistics at that time, the alternative to model checking was model expansion/averaging. That is, a Bayesian was thought to (ideally) place prior probabilities on models, as well as on parameters within models. Incorporating data would give you posteriors over models, and models that were well-supported by the data would have higher posterior probability.
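
For concreteness, here is a minimal sketch of that idea under assumptions chosen purely for illustration: two candidate models for n coin flips with k heads, one fixing θ = 0.5 and one placing a uniform Beta(1, 1) prior on θ, both of which happen to have closed-form marginal likelihoods. The data values, model names, and 50/50 model prior are all made up for the example.

    # Hypothetical sketch of "model choice by Bayes' theorem": place prior
    # probabilities on two candidate models, compute each model's marginal
    # likelihood, and update to posterior model probabilities.
    from math import comb

    n, k = 20, 15                      # observed data (illustrative numbers)
    prior = {"M1": 0.5, "M2": 0.5}     # prior probabilities on the models

    # Marginal likelihood of the data under each model.
    ml = {
        "M1": comb(n, k) * 0.5**n,     # binomial with theta fixed at 0.5
        "M2": 1.0 / (n + 1),           # Beta(1,1) prior integrates to 1/(n+1)
    }

    # Bayes' theorem over models: posterior model probabilities.
    evidence = sum(prior[m] * ml[m] for m in prior)
    posterior = {m: prior[m] * ml[m] / evidence for m in prior}
    print(posterior)  # data with k far from n/2 favour the flexible model M2

With k = 15 heads in 20 flips, the flexible model M2 ends up with posterior probability of roughly 0.76, so the data shift belief away from the fair-coin model without any explicit model check.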

The attractive aspect of this was that nothing was needed except Bayes' theorem, and that model choice and parameter estimation worked the same way. It also fit well with the growing awareness that data-driven model selection was unstable.

The disadvantages were that MCMC over models was often a pain, and that specifying priors over a sufficiently large set of models is harder than it sounds.

Thomas Lumley
  • 38,062
1

I wasn't a statistician in 1990, so I can only speculate.

One issue with model checking is that it amounts to admitting flaws in the Bayesian procedure, or at least in our human ability to use it. If the prior really represents our prior information and we know the true model up to the values of its parameters, then the posterior tells us the most rational response to seeing our data: there is nothing more that we as scientists can contribute to the uncertainty. If we were correct about these assumptions, model checking would be nothing more than a waste of time.

So checking your model opens a philosophical can of worms. It admits that there may be a fault somewhere in our assumptions. Does the prior really capture our prior knowledge? Is our inference valid if the data do not follow a very specific model that we somehow know to be true? Admitting the need for model checking can cast further doubt on our Bayesian analysis: even if we use a model-checking procedure and our fit gets a pass, how do we know there wasn't a deviation that our check simply wasn't powerful enough to catch?
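
For concreteness, here is a minimal sketch of the kind of check under debate, a posterior predictive p-value in the style Gelman advocates. The normal model with known unit variance, the heavy-tailed simulated data, and the max-absolute-value test statistic are all assumptions chosen for illustration.

    # Hypothetical posterior predictive check. Assume y ~ Normal(mu, 1) with a
    # flat prior on mu, so the posterior for mu is Normal(ybar, 1/n).
    import numpy as np

    rng = np.random.default_rng(0)
    y = rng.standard_t(df=3, size=50)   # data with heavier tails than the model allows

    n, ybar = len(y), y.mean()
    T_obs = np.max(np.abs(y))           # test statistic: largest absolute value

    # Simulate replicated data sets from the posterior predictive distribution.
    reps = 4000
    mu_draws = rng.normal(ybar, 1 / np.sqrt(n), size=reps)       # posterior draws of mu
    y_rep = rng.normal(mu_draws[:, None], 1.0, size=(reps, n))   # replicated data
    T_rep = np.max(np.abs(y_rep), axis=1)

    # Posterior predictive p-value: small values flag model misfit.
    print((T_rep >= T_obs).mean())

Even this simple check illustrates the worry above: with a blander test statistic (say, the sample mean) the same misspecified model would pass, so a "pass" only tells us the check we happened to run found nothing.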

Treating model checking as "illegal" would be somewhat like sticking our heads in the sand rather than admitting that a Bayesian analysis is not infallible. I don't know how much resistance there actually was to this in that era, but anyone who insisted it was totally unnecessary must have been frustrating to deal with.

Cliff AB
  • 20,980