3

In An Introduction to Empirical Bayes Data Analysis by George Casella (1985), it is given that \begin{align} x|\theta &\sim N(\theta,\sigma^2) \\ \theta &\sim N(\mu,\tau^2) \end{align} and since the Gaussian distribution is conjugate to itself, the posterior distribution is $$ \theta|x \sim N\left(\frac{x\tau^2 + \mu\sigma^2}{\sigma^2+\tau^2},\frac{\sigma^2\tau^2}{\sigma^2 + \tau^2}\right) $$ However, it is also mentioned in the paper that $$ x \sim N(\mu,\sigma^2 + \tau^2) $$ I am not sure how $p(x)$ was derived here. Was it derived by rearranging Bayes' rule, so that $$ p(x) = \frac{p(x|\theta)p(\theta)}{p(\theta|x)}? $$
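The rearrangement in the question can be checked numerically: since the prior and posterior are both Gaussian with known parameters, $p(x|\theta)p(\theta)/p(\theta|x)$ can be evaluated at arbitrary points and compared with the claimed $N(\mu,\sigma^2+\tau^2)$ density. This is a sketch with illustrative parameter values (not taken from the paper); the key point is that the ratio comes out independent of $\theta$:

```python
from math import exp, pi, sqrt

def npdf(y, m, s):
    """Gaussian density N(y; m, s^2) with standard deviation s."""
    return exp(-(y - m) ** 2 / (2 * s ** 2)) / (s * sqrt(2 * pi))

mu, sigma, tau = 1.0, 2.0, 3.0  # illustrative values, not from the paper
x, theta = 0.5, 1.5             # any values work; p(x) must not depend on theta

# Posterior parameters from the conjugate update
post_mean = (x * tau ** 2 + mu * sigma ** 2) / (sigma ** 2 + tau ** 2)
post_sd = sqrt(sigma ** 2 * tau ** 2 / (sigma ** 2 + tau ** 2))

# p(x) = p(x|theta) p(theta) / p(theta|x), evaluated pointwise
lhs = npdf(x, theta, sigma) * npdf(theta, mu, tau) / npdf(theta, post_mean, post_sd)
# Claimed marginal: x ~ N(mu, sigma^2 + tau^2)
rhs = npdf(x, mu, sqrt(sigma ** 2 + tau ** 2))
assert abs(lhs - rhs) < 1e-12
```

Repeating this with different values of `theta` leaves `lhs` unchanged, which is exactly why the rearranged Bayes rule yields a valid expression for $p(x)$.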

mhdadk
  • 4,940
  • 1
    The standard (and tedious) algebraic manipulations the author refers to rely on conjugacy of Gaussians, and also on completing the square in $\theta_i$. – microhaus Aug 02 '21 at 02:01
  • 1
    @microhaus thanks for the comment. I've edited my question to focus on a much smaller part of it. – mhdadk Aug 02 '21 at 09:38
  • Hi: if you can get your hands on arnold zellner's bayesian text, I'm pretty sure that all of the details will be contained in there. – mlofton Aug 02 '21 at 11:43

1 Answer

3

The unconditional distribution of $x$ is also normal (you can see the full proof here), so all you need to do is find its parameters. Bayes' theorem would solve it, but it isn't mandatory (and neither is the posterior distribution). You can obtain them simply by using the laws of total expectation and total variance:

$$E[x]=E[E[x|\theta]]=E[\theta]=\mu$$

$$Var(x)=E[Var(x|\theta)]+Var(E[x|\theta])=E[\sigma^2]+Var(\theta)=\sigma^2+\tau^2$$
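The two identities above can be sanity-checked by simulating the hierarchy directly: draw $\theta \sim N(\mu,\tau^2)$, then $x|\theta \sim N(\theta,\sigma^2)$, and compare the sample moments of $x$ with $\mu$ and $\sigma^2+\tau^2$. A minimal Monte Carlo sketch with illustrative parameter values:

```python
import random
from statistics import mean, pvariance

random.seed(0)
mu, sigma, tau = 1.0, 2.0, 3.0  # illustrative values, not from the answer

# Simulate the two-stage hierarchy: theta ~ N(mu, tau^2), x|theta ~ N(theta, sigma^2)
xs = []
for _ in range(200_000):
    theta = random.gauss(mu, tau)
    xs.append(random.gauss(theta, sigma))

# Sample moments should be close to mu = 1 and sigma^2 + tau^2 = 13
print(mean(xs), pvariance(xs))
```

With this many draws the sample mean lands near $\mu=1$ and the sample variance near $\sigma^2+\tau^2=13$, matching the total expectation and total variance computations.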

Spätzle
  • 3,870