
I am reading a paper in which the following problem is posed: there is a $k$-dimensional multivariate normal vector $\theta$ and there are $n$ signals. Each signal (say signal $k$) is given by $C_k'\theta + \epsilon_k$; that is, each signal is a linear combination of the components of $\theta$ plus independent normal noise.

The prior distribution of $\theta$ is $N(0, \Sigma)$, where $\Sigma$ is the covariance matrix. The noise associated with signal $k$ is distributed as $N(0, \sigma_k)$, and $C$ denotes the coefficient matrix whose $k$th row is $C_k'$, the coefficient vector for signal $k$.

Suppose that one signal is chosen to be observed at each time, and that after many periods the observer has seen signal $k$ a total of $q_k$ times (my understanding is that each observation of signal $k$ differs from the previous one because the noise term is redrawn each time). The author claims, without giving a proof, that the posterior variance of $\theta$ is the following:

$$\left(\Sigma^{-1} + C'QC\right)^{-1}, \qquad \text{where } Q = \operatorname{diag}(q_1, q_2, \ldots, q_n).$$
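As a sanity check, here is a small numerical sketch in Python (entirely my own construction, assuming unit noise variance $\sigma_k = 1$ and made-up values for $\Sigma$, $C$ and the $q_k$) comparing the paper's expression with the standard conjugate Gaussian update computed from the individually stacked observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3-dimensional theta, 4 distinct signals.
d, n = 3, 4

# Prior covariance Sigma (random SPD matrix), coefficient matrix C (n x d),
# and the number of times q_k each signal was observed.
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)
C = rng.normal(size=(n, d))
q = np.array([5, 2, 0, 7])
Q = np.diag(q.astype(float))

# Stacked design matrix X: one row C_k' per individual observation of signal k.
X = np.vstack([np.tile(C[k], (q[k], 1)) for k in range(n) if q[k] > 0])

# Standard Gaussian conjugate update with unit noise variance (sigma_k = 1):
# posterior precision = prior precision + X'X.
post_prec_direct = np.linalg.inv(Sigma) + X.T @ X

# The paper's expression: prior precision + C'QC.
post_prec_paper = np.linalg.inv(Sigma) + C.T @ Q @ C

print(np.allclose(post_prec_direct, post_prec_paper))  # True
print(np.linalg.inv(post_prec_paper))                  # posterior covariance
```

In this toy example the two precision matrices agree, which at least suggests the claim is consistent with the usual conjugate update when the noise variances are all $1$.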

The author says that "$\Sigma^{-1}$ is the prior precision matrix and $C'QC$ is the total precision from the observed signals. Thus the equation simply represents the fact that for a Gaussian prior and Gaussian signals, the posterior precision matrix is the sum of the prior and signal precision matrices."
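For what it's worth, here is my rough sketch of where the claim could come from (again assuming unit noise variance $\sigma_k = 1$; otherwise I would have expected $Q$ to carry $q_k/\sigma_k$ on its diagonal, though I am not sure that is what the author intends). Writing $y_i = x_i'\theta + \epsilon_i$ for the individual observations, so that each $x_i'$ equals some row $C_k'$ repeated $q_k$ times, the log posterior density satisfies

$$\log p(\theta \mid y) = -\tfrac{1}{2}\,\theta'\Sigma^{-1}\theta - \tfrac{1}{2}\sum_i \left(y_i - x_i'\theta\right)^2 + \text{const},$$

and collecting the terms quadratic in $\theta$,

$$\log p(\theta \mid y) = -\tfrac{1}{2}\,\theta'\Big(\Sigma^{-1} + \sum_i x_i x_i'\Big)\theta + (\text{terms linear in }\theta) + \text{const},$$

so the posterior is Gaussian with precision matrix $\Sigma^{-1} + \sum_i x_i x_i'$. Since the row $C_k'$ appears $q_k$ times,

$$\sum_i x_i x_i' = \sum_{k=1}^n q_k\, C_k C_k' = C'QC,$$

which would give posterior covariance $\left(\Sigma^{-1} + C'QC\right)^{-1}$. Is this the argument the author has in mind?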

Can someone give a clear proof of whether the author's claim is true (or not)? Many thanks.

Sdeng
  • I'm not absolutely sure, but I think you mean that she observed the same signal $k$ $n$ different times. This is the only way that there can be $n$ $q_{i}$ terms along the diagonal and zero everywhere else. Assuming that's correct, then, for your question, you can just get an intro Bayesian text and look up an example where the prior is normal and the likelihood is normal and the posterior distribution is then calculated. That will show you how the result is obtained, or google for it. That's a pretty standard Bayesian setup, so it should be easy to find. – mlofton Jan 01 '20 at 21:18
  • I looked around and didn't really succeed. Page 28 of this link explains the derivation to some extent, but if you want all of the gory details behind it, I bet they are in Zellner's text. It's a classic Bayesian text, so it is most likely available in the library. In fact, any decent Bayesian text (the Bayesian econometrics text by Lancaster is another good one) should have it. https://www.bauer.uh.edu/rsusmel/phd/ec1-17_part-1.pdf – mlofton Jan 01 '20 at 21:38
  • Thank you for your suggestions ! – Sdeng Jan 13 '20 at 00:53
