
I have already read *How to derive variance-covariance matrix of coefficients in linear regression*.

Assume we're working with the usual linear regression model: $\mathbf{Y} \in \mathbb{R}^N$, $X \in M_{N \times (p+1)}(\mathbb{R})$ with full column rank, $\boldsymbol{\beta} \in \mathbb{R}^{p+1}$.

So, using $\text{Var}[A\mathbf{Y}] = A\,\text{Var}[\mathbf{Y}]\,A^{T}$ for a constant matrix $A$, we have $$\text{Var}[\boldsymbol{\hat{\beta}}] = \text{Var}\left[ (X^{T}X)^{-1}X^{T}\mathbf{Y}\right] = (X^{T}X)^{-1}X^{T}\text{Var}[\mathbf{Y}]\left[(X^{T}X)^{-1}X^{T}\right]^{T}\text{.}$$ Now, $$\mathbf{Y} = X\boldsymbol{\beta}+\boldsymbol{\epsilon}\text{,}$$ and since $X\boldsymbol{\beta}$ is a constant (non-random) vector, $$\text{Var}\left[\mathbf{Y} \right] = \text{Var}\left[\boldsymbol{\epsilon}\right] = \sigma^2I_{N \times N}\text{.}$$ Thus, $$\text{Var}[\boldsymbol{\hat{\beta}}] = (X^{T}X)^{-1}X^{T}\sigma^2I_{N \times N}X[(X^{T}X)^{-1}]^{T}\text{,}$$ and since $X^{T}X$ is symmetric, its inverse is symmetric as well, so $$\text{Var}[\boldsymbol{\hat{\beta}}] = (X^{T}X)^{-1}X^{T}\sigma^2I_{N \times N}X(X^{T}X)^{-1}\text{.}$$ I'm not sure how to proceed from here.
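(As a sanity check, the sandwich expression above does agree numerically with a Monte Carlo estimate of $\text{Var}[\boldsymbol{\hat{\beta}}]$. A quick NumPy sketch; the dimensions, coefficients, and $\sigma$ here are all made up purely for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions, coefficients, and noise level, for illustration only
N, p = 200, 2
sigma = 1.5
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])  # intercept + p regressors
beta = np.array([1.0, -2.0, 0.5])

# The sandwich expression derived above: (X'X)^{-1} X' (sigma^2 I) X (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
sandwich = XtX_inv @ X.T @ (sigma**2 * np.eye(N)) @ X @ XtX_inv

# Empirical covariance of the OLS estimates over many simulated datasets
n_sims = 20000
betas = np.empty((n_sims, p + 1))
for i in range(n_sims):
    y = X @ beta + sigma * rng.normal(size=N)
    betas[i] = np.linalg.solve(X.T @ X, X.T @ y)  # OLS: (X'X)^{-1} X'y

# Maximum entrywise discrepancy; small, up to Monte Carlo error
print(np.abs(sandwich - np.cov(betas, rowvar=False)).max())
```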

Clarinetist
  • 4,977
  • $\sigma^2$ is a scalar, why don't you take it out in front? – JohnK Dec 10 '15 at 15:25
  • @JohnK Yeah, I've figured it out now. $I_{N \times N}$ goes away, you end up with $\sigma^2 (X^{T}X)^{-1}X^{T}X(X^{T}X)^{-1} = \sigma^2(X^{T}X)^{-1}$. – Clarinetist Dec 10 '15 at 15:26

1 Answer


This was easier than I thought. Multiplying any matrix by the identity leaves it unchanged, so we have $$\begin{align} \text{Var}[\boldsymbol{\hat{\beta}}] &= (X^{T}X)^{-1}X^{T}\sigma^2I_{N \times N}X(X^{T}X)^{-1} \\ &= \sigma^2 (X^{T}X)^{-1}X^{T}I_{N \times N}X(X^{T}X)^{-1}\text{ since }\sigma^2\text{ is a constant} \\ &= \sigma^2 (X^{T}X)^{-1}X^{T}X(X^{T}X)^{-1}\\ &= \sigma^{2}(X^{T}X)^{-1}I_{(p+1)\times (p+1)} \\ &= \sigma^2(X^{T}X)^{-1}\text{.} \end{align}$$
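Here's a small NumPy sketch (with a made-up $X$ and $\sigma$, just for illustration) confirming that the unsimplified sandwich form and $\sigma^2(X^{T}X)^{-1}$ agree numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, sigma = 100, 3, 2.0  # made-up dimensions and noise level
X = np.column_stack([np.ones(N), rng.normal(size=(N, p))])

XtX_inv = np.linalg.inv(X.T @ X)
sandwich = XtX_inv @ X.T @ (sigma**2 * np.eye(N)) @ X @ XtX_inv  # unsimplified form
simplified = sigma**2 * XtX_inv                                   # after cancellation

print(np.allclose(sandwich, simplified))  # True
```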

Clarinetist
  • 4,977