I am trying to understand the motivation behind Variational Bayes. I get that the posterior $p(z|x)$ can be intractable, because computing it requires the evidence $p(x) = \int p(x|z)\,p(z)\,\text{d}z$.
However, all the tutorials on Variational Bayes that I have read so far simply assume that the likelihood $p(x|z)$ is easy to compute. Why? Why does the same assumption not hold for the posterior?
If we can define $p(x|z)$ analytically with our model, why can't we do the same for $p(z|x)$?
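To make this concrete, here is the kind of model I have in mind (my own toy example, not taken from the tutorials below): a standard normal prior on $z$ and a Gaussian likelihood whose mean is some nonlinear function $f_\theta(z)$, e.g. a neural network:

$$p(z) = \mathcal{N}(z \,|\, 0, I), \qquad p(x|z) = \mathcal{N}\big(x \,|\, f_\theta(z), \sigma^2 I\big), \qquad p(z|x) = \frac{p(x|z)\,p(z)}{\int p(x|z')\,p(z')\,\text{d}z'}.$$

Here $p(x|z)$ is a closed-form density that I can evaluate for any pair $(x, z)$, whereas $p(z|x)$ seems to involve the same two factors plus the integral in the denominator. Is that integral the whole obstacle, or am I missing something?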
Tutorials that I have read:
- "Now $p(x|z)$ is usually pretty easy to figure out (this is just the likelihood function and often analytically defined by your model)." - A Beginner's Guide to Variational Methods
- "We usually assume that we know how to compute functions on likelihood function $P(X|Z)$ and priors $P(Z)$." - Variational Inference
- "The numerator is easy to compute for any configuration of the hidden variables. The problem is the denominator." - Why is exact inference in a Bayesian Network intractable? (see my sketch below)