I use Python and the following definition of the KL divergence
import numpy as np
from scipy.stats import norm

def kl_divergence(p, q):
    return np.sum(np.where(p != 0, p * np.log(p / q), 0))
to calculate the divergence between two normal distributions:
x = np.linspace(norm.ppf(0.01, loc=0, scale=1), norm.ppf(0.99, loc=0, scale=1), 100)
a = norm.pdf(x, 0, 2)
b = norm.pdf(x, 2, 2)
kl_divergence(a, b)
The results depend on x, and analytically they are wrong, because I used the KL divergence for discrete distributions. I believe I could use these results for some practical purposes, but I need the real divergences. My question is: how can I implement the KL divergence in Python so that it yields the analytically correct divergences? Can this be done without integration, by somehow transforming the discrete results? If not, how can I integrate with numpy and scipy? I want to use it for the distributions that scipy includes (normal, laplace, ...).
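For what it's worth, one way to see why the sum above drifts with x: the continuous KL divergence is an integral, so each summand needs to be weighted by the grid spacing, and the grid has to cover essentially all of the mass of p. A minimal Riemann-sum sketch (the grid limits and point count here are my own choices, not from the question):

```python
import numpy as np
from scipy.stats import norm

# Wide grid covering ~10 standard deviations of both densities
x = np.linspace(-20, 20, 10001)
dx = x[1] - x[0]
p = norm.pdf(x, 0, 2)
q = norm.pdf(x, 2, 2)

# Same summand as kl_divergence(p, q), but weighted by the spacing dx,
# so the sum approximates the integral of p(x) * log(p(x)/q(x))
kl_riemann = np.sum(np.where(p != 0, p * np.log(p / q), 0)) * dx

# Closed form for two normals N(m1, s1), N(m2, s2):
# log(s2/s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 1/2
kl_exact = 0.0 + (4 + 4) / 8 - 0.5  # = 0.5
```

With the weighting and a wide enough grid, the sum closely matches the closed-form value, whereas the unweighted sum over a narrow grid does not.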
scipy.integrate.quad(). In this case, keep in mind that $D_{KL}(p \parallel q) = H(p,q) - H(p)$ (the difference of the cross entropy and the entropy). In some cases, an analytical expression may be available for $H(p)$, so only $H(p,q)$ need be computed numerically. – user20160 Jun 27 '20 at 19:41

p /= np.sum(p) – PatrickT Jun 14 '22 at 14:37
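Following the quad suggestion in the comment above, a hedged sketch that integrates the KL integrand directly over the real line (the function name and use of `logpdf` are my own choices; `logpdf` avoids 0/0 underflow in the tails, where pdf values round to zero but log-densities stay finite):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def kl_continuous(p_dist, q_dist):
    # Numerically integrate p(x) * log(p(x)/q(x)) over (-inf, inf).
    # Works with any frozen scipy.stats continuous distribution.
    def integrand(x):
        return p_dist.pdf(x) * (p_dist.logpdf(x) - q_dist.logpdf(x))
    value, _abserr = quad(integrand, -np.inf, np.inf)
    return value

kl = kl_continuous(norm(0, 2), norm(2, 2))  # closed form gives 0.5
```

Because it only calls `pdf`/`logpdf`, the same function applies to the other continuous distributions in scipy.stats (laplace, etc.), as long as the supports overlap so the log-ratio stays finite where p has mass.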