I wasn't a statistician in 1990, so I can only speculate.
One issue with model checking is that it amounts to admitting flaws in the Bayesian procedure, or at least in our human ability to use it. If the prior really represents our prior information and we know the true model up to the values of its parameters, then the posterior tells us the most rational response to seeing our data: there is nothing more that we as scientists can contribute to the uncertainty. If we were correct about these assumptions, model checking would be nothing more than a waste of time.
So checking your model opens a philosophical can of worms. It's admitting that there may be a fault somewhere in our assumptions. Does the prior really capture our prior knowledge? Is our inference valid if the data doesn't follow a very specific model that we somehow know to be true? Admitting the need for model checking can cast further doubt on our Bayesian analysis: even if we run a model checking procedure and our fit passes, how do we know there wasn't a deviation that our check simply wasn't powerful enough to catch?
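To make the kind of check under discussion concrete, here is a minimal sketch of one common procedure, a posterior predictive check. Everything in it is illustrative rather than from the original text: I assume normal data with known unit variance, a conjugate normal prior on the mean, and the sample maximum as the test statistic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data, assumed Normal(mu, 1); prior mu ~ Normal(0, 10^2).
y = rng.normal(0.5, 1.0, size=50)
n = len(y)

# Conjugate posterior for mu when sigma = 1 is known.
prior_var = 10.0 ** 2
post_var = 1.0 / (1.0 / prior_var + n)
post_mean = post_var * y.sum()

# Simulate replicated datasets from the posterior predictive distribution
# and compare a test statistic (the sample maximum) to its observed value.
draws = 4000
mu_samples = rng.normal(post_mean, np.sqrt(post_var), size=draws)
y_rep = rng.normal(mu_samples[:, None], 1.0, size=(draws, n))
t_rep = y_rep.max(axis=1)
t_obs = y.max()

# Posterior predictive p-value: values near 0 or 1 flag a misfit.
p_value = (t_rep >= t_obs).mean()
print(p_value)
```

The worry raised above applies directly here: a p-value that is not extreme only means this particular statistic, with this much data, failed to detect a deviation, not that none exists.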
Treating model checking as "illegal" would be somewhat like sticking our heads in the sand rather than admitting that a Bayesian analysis is not infallible. I don't know how much resistance there actually was to this in that era, but anytime you came across someone who insisted it was totally unnecessary, it must have been frustrating.