I have been trying for a few days to derive the posterior distribution for weighted Bayesian regression with a multivariate normal distribution, and I am stuck. I am not sure whether this is due to a gap in my linear algebra skills or whether I have done something really wrong.
In my case, the distributional assumptions are:
$$ y \sim N(xB,\ W^{-1}\nabla) $$
where $x$ is the design matrix and $W$ is the matrix of weights.
The parameters $B$ have a normal prior:
$$ B \sim N(B_0, \Sigma_B) $$
The conditional posterior distribution for $B$ is proportional to $P(y \mid B)\,P(B)$, which is proportional to:
$$ \exp\Big(-\tfrac{1}{2}\big[(B - B_0)^T \Sigma_B^{-1} (B - B_0) + (y - xB)^T \Lambda\, (y - xB)\big]\Big) $$
where $\Lambda = (W^{-1}\nabla)^{-1}$ is the precision of the likelihood and $\Sigma_B^{-1}$ is the prior precision.
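To convince myself that I have this kernel right (in particular the $-\tfrac{1}{2}$ and the covariance-versus-precision bookkeeping), I checked it numerically against SciPy's normal log densities. This is a minimal sketch with randomly generated test matrices, so all the concrete values are made up:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(1)
p, n = 3, 5  # number of coefficients, number of observations

# Random test values; both covariances are made symmetric positive definite
A = rng.standard_normal((p, p)); Sigma_B = A @ A.T + p * np.eye(p)  # prior covariance
D = rng.standard_normal((n, n)); V = D @ D.T + n * np.eye(n)        # likelihood covariance (the role of W^{-1} nabla)
x = rng.standard_normal((n, p))   # design matrix
B0 = rng.standard_normal(p)       # prior mean
y = rng.standard_normal(n)        # observations

Lam = np.linalg.inv(V)            # likelihood precision Lambda
Sigma_inv = np.linalg.inv(Sigma_B)  # prior precision

def quad(B):
    """The bracketed quadratic form from the posterior kernel."""
    return (B - B0) @ Sigma_inv @ (B - B0) + (y - x @ B) @ Lam @ (y - x @ B)

def log_post(B):
    """log P(y|B) + log P(B), i.e. the log posterior up to a constant."""
    return mvn.logpdf(y, mean=x @ B, cov=V) + mvn.logpdf(B, mean=B0, cov=Sigma_B)

# Differences in the log density should equal -1/2 times differences in the
# quadratic form, since all the normalizing constants cancel
B1, B2 = rng.standard_normal(p), rng.standard_normal(p)
print(np.isclose(log_post(B1) - log_post(B2), -0.5 * (quad(B1) - quad(B2))))  # True
```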
I have seen textbooks mention something called completing the square, which as far as I can tell amounts to matching like terms. I could work this out for scalar variance terms, but I am having trouble figuring out how to deal with the precision matrices and how the terms can be decoupled.
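Concretely, in the scalar case I can complete the square directly (for $a \neq 0$):
$$ a b^2 - 2 c b = a\left(b - \frac{c}{a}\right)^2 - \frac{c^2}{a} $$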
[EDIT]:
So I tried to follow Glen_b's suggestion and got this far:
Original expression without the exponential and the factor of $-\tfrac{1}{2}$ (I will simplify notation slightly and write $\Sigma$ for the prior precision $\Sigma_B^{-1}$):
$$ (B - B_0)^T \Sigma (B - B_0) + (y - xB)^T \Lambda (y - xB) $$ Expanding:
$$ B^T \Sigma B - 2 B_0^T \Sigma B + B_0^T \Sigma B_0 + y^T \Lambda y - 2 y^T \Lambda x B + B^T x^T \Lambda x B $$
$$ = B^T(\Sigma + x^T \Lambda x)B - 2 (B_0^T \Sigma + y^T \Lambda x) B + C $$
where $C = B_0^T \Sigma B_0 + y^T \Lambda y$ collects the terms that do not depend on $B$.
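To make sure I have not botched the algebra, I also checked this expansion numerically (again a NumPy sketch with made-up values; here $\Sigma$ and $\Lambda$ are random symmetric positive definite precision matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 3, 5

# Random test values; Sigma and Lam are made symmetric positive definite
A = rng.standard_normal((p, p)); Sigma = A @ A.T + p * np.eye(p)  # prior precision
D = rng.standard_normal((n, n)); Lam = D @ D.T + n * np.eye(n)    # likelihood precision
x = rng.standard_normal((n, p))
B0, B = rng.standard_normal(p), rng.standard_normal(p)
y = rng.standard_normal(n)

# Left-hand side: the original sum of two quadratic forms
original = (B - B0) @ Sigma @ (B - B0) + (y - x @ B) @ Lam @ (y - x @ B)

# Right-hand side: the expanded, collected form with C = B0' Sigma B0 + y' Lam y
C = B0 @ Sigma @ B0 + y @ Lam @ y
collected = B @ (Sigma + x.T @ Lam @ x) @ B - 2 * (B0 @ Sigma + y @ Lam @ x) @ B + C

print(np.isclose(original, collected))  # True, so the expansion checks out
```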
So the expression now has the form $B^T M B - 2 b^T B + C$ with $M = \Sigma + x^T \Lambda x$ and $b = \Sigma B_0 + x^T \Lambda y$. I am not sure how to make further progress from here: I guess I have to complete the square in $B$, but I do not see how to do that when $M$ is a matrix rather than a scalar.
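From the scalar case, I am guessing that the matrix analogue should be something like the following (assuming $M$ is symmetric and invertible):
$$ B^T M B - 2 b^T B = (B - M^{-1} b)^T M (B - M^{-1} b) - b^T M^{-1} b $$
but I have not managed to verify this step, or to see exactly where the symmetry of $M$ is needed.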