I am reading a paper in which the following problem is posed: there is a $k$-dimensional multivariate normal vector $\theta$ and $n$ signals. Each signal (say signal $k$) is given by $C_k'\theta + \epsilon_k$; that is, each signal is a linear combination of the components of $\theta$ plus independent normal noise.
The prior distribution of $\theta$ is $N(0, \Sigma)$, where $\Sigma$ is the covariance matrix. The noise associated with signal $k$ is distributed as $N(0, \sigma_k)$, and $C$ denotes the coefficient matrix whose $k$th row is $C_k'$, the coefficient vector for signal $k$.
Suppose that one signal is chosen for observation at a time, and that after many rounds the observer has seen signal $k$ a total of $q_k$ times (my understanding is that each observation of signal $k$ differs from the previous one, since the noise term is redrawn each time). The author claims, without giving a proof, that the posterior variance of $\theta$ is the following:
$$\left(\Sigma^{-1} + C'QC\right)^{-1}, \qquad Q = \operatorname{diag}(q_1, q_2, \dots, q_n).$$
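Expanding the product using the definition of $C$ (row $k$ of $C$ is $C_k'$), I believe the inner matrix is the same as the rank-one sum

$$\Sigma^{-1} + C'QC = \Sigma^{-1} + \sum_{k=1}^{n} q_k\, C_k C_k',$$

i.e. each observation of signal $k$ seems to contribute one term $C_k C_k'$ to the precision.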
The author says that "$\Sigma^{-1}$ is the prior precision matrix and $C'QC$ is the total precision from the observed signals. Thus the equation simply represents the fact that, for a Gaussian prior and Gaussian signals, the posterior precision matrix is the sum of the prior and signal precision matrices."
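For what it's worth, here is a minimal numerical sketch I put together to sanity-check the claim, assuming unit noise variances ($\sigma_k = 1$; the formula as quoted seems to require this, or else that the $\sigma_k$ are absorbed into the rows of $C$). It compares the claimed precision-sum formula against the standard conditional-covariance formula for jointly Gaussian vectors. All dimensions and numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up small instance: theta is 3-dimensional, n = 4 distinct signals.
dim, n = 3, 4
A = rng.normal(size=(dim, dim))
Sigma = A @ A.T + np.eye(dim)            # prior covariance, positive definite
C = rng.normal(size=(n, dim))            # row k of C is C_k'
q = np.array([5, 0, 2, 7])               # q_k = times signal k was observed

# Claimed posterior covariance: inverse(inverse(Sigma) + C'QC).
Q = np.diag(q)
claimed = np.linalg.inv(np.linalg.inv(Sigma) + C.T @ Q @ C)

# Direct check via joint-Gaussian conditioning: stack C_k' once per
# individual observation into a design matrix D, so y = D theta + noise,
# and use Var(theta | y) = Sigma - Sigma D' (D Sigma D' + I)^{-1} D Sigma.
D = np.repeat(C, q, axis=0)              # one row per observation
S = D @ Sigma @ D.T + np.eye(D.shape[0]) # Var(y) with unit noise variance
direct = Sigma - Sigma @ D.T @ np.linalg.solve(S, D @ Sigma)

print(np.allclose(claimed, direct))      # True: the formulas coincide
```

The two matrices agree to numerical precision, which is consistent with the claim (note that the posterior covariance of a Gaussian does not depend on the realized signal values, so no data need to be simulated). But I would still like to see an actual proof.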
Can someone help give a clear proof of why the author's claim is true (or not true)? Many thanks!