In the Bayesian interpretation, the KL divergence D(P || Q) is the information gained (or lost) when you sample from one distribution while assuming another, where one of the two is the true distribution.
The problem is that in practice you never know the true distribution, and the empirical distribution of your samples is not the same as the theoretical one.
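As a minimal sketch of what this means in practice (the distributions and sample size below are made up for illustration), here is the discrete D(P || Q) computed first with the true P, which you normally never have, and then with an empirical estimate of P built from samples:

```python
import numpy as np

def kl_divergence(p, q):
    """D(P || Q) for discrete distributions given as probability vectors."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # Sum only where p > 0; the divergence is infinite if q = 0 where p > 0.
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

# Hypothetical true distribution P and an approximating distribution Q.
p_true = [0.5, 0.3, 0.2]
q_model = [0.4, 0.4, 0.2]
print(kl_divergence(p_true, q_model))  # information lost by using Q in place of P

# In reality P is unknown: we only see samples, so we plug in the empirical
# distribution, which is itself just a noisy estimate of the true one.
samples = np.random.choice(3, size=1000, p=p_true)
p_empirical = np.bincount(samples, minlength=3) / len(samples)
print(kl_divergence(p_empirical, q_model))
```

The second number drifts from the first purely because of sampling noise, which is exactly the gap between the sampling distribution and the theoretical one described above.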