
For a Bayesian, the KL divergence is the information gained or lost when you sample from one distribution rather than another, where one of the two is taken to be the true distribution.
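
For reference, the usual definition is the expected log-density ratio under the true distribution; writing $p$ for the true density and $q$ for the approximating one is the standard convention, not something stated above:

$$ D_{\mathrm{KL}}(P \,\|\, Q) = \int p(x)\,\log\frac{p(x)}{q(x)}\,dx = \mathbb{E}_{x \sim P}\!\left[\log\frac{p(x)}{q(x)}\right] $$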

The problem is that you never know the true distribution, and the empirical (sampling) distribution is not the same as the theoretical one.
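
As a rough illustration of the gap between the theoretical quantity and what you can compute from samples, here is a minimal sketch that compares a Monte Carlo estimate of the KL divergence with its closed form. The two normal distributions standing in for the "true" P and the model Q, and the sample size, are arbitrary choices for illustration only:

    # Minimal sketch: theoretical KL(P||Q) vs. an estimate from samples drawn from P.
    # The distributions below are made-up stand-ins, not taken from the post.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # "True" distribution P and approximating distribution Q (assumed for the example).
    mu_p, s_p = 0.0, 1.0
    mu_q, s_q = 0.5, 1.5
    p = norm(loc=mu_p, scale=s_p)
    q = norm(loc=mu_q, scale=s_q)

    # Closed-form KL(P || Q) for two univariate normals.
    kl_exact = np.log(s_q / s_p) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2) - 0.5

    # Monte Carlo estimate: average log-density ratio over samples drawn from P.
    x = p.rvs(size=5_000, random_state=rng)
    kl_mc = np.mean(p.logpdf(x) - q.logpdf(x))

    print(f"closed-form KL(P||Q): {kl_exact:.4f}")
    print(f"Monte Carlo estimate: {kl_mc:.4f}")

In practice the situation is worse than this sketch suggests: here the density of P is known and only the expectation is approximated, whereas in the Bayesian setting described above the true density itself is unavailable.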

    KL is used in the context of MLE, so we don't consider KL in isolation. Earlier discussion might give some ideas. https://stats.stackexchange.com/questions/392372/why-does-the-bayesian-posterior-concentrate-around-the-minimiser-of-kl-divergenc – patagonicus Oct 08 '21 at 05:21
  • But Bayesians reject the MLE? – Aug 05 '22 at 01:41

0 Answers