
The variance-covariance matrix of the coefficient estimates from a multiple linear regression is:

$$ \operatorname{Var}(\hat{\beta} \mid X) = \sigma^2 (X'X)^{-1} $$

In simple linear regression the corresponding formula, $\operatorname{Var}(\hat{\beta}_1 \mid x) = \sigma^2 / \sum_i (x_i - \bar{x})^2$, seems fairly intuitive: the denominator is the sum of squares of the independent variable. But I don't understand how the independent variables influence the variance in the matrix formulation above.
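One way to see the connection numerically is by simulation. The following sketch (assuming `numpy`; the design matrix and parameter values are purely illustrative) checks that the Monte Carlo covariance of $\hat{\beta}$ matches $\sigma^2 (X'X)^{-1}$, and that with a single predictor the slope entry reduces to the familiar $\sigma^2 / \sum_i (x_i - \bar{x})^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 500, 2.0

# Design: intercept plus two deliberately correlated predictors.
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
beta = np.array([1.0, 2.0, -1.0])

# Theoretical covariance of the OLS estimator: sigma^2 (X'X)^{-1}.
theory = sigma**2 * np.linalg.inv(X.T @ X)

# Monte Carlo: re-estimate beta-hat across many fresh error draws.
fits = np.array([
    np.linalg.lstsq(X, X @ beta + rng.normal(scale=sigma, size=n), rcond=None)[0]
    for _ in range(5000)
])
empirical = np.cov(fits, rowvar=False)
print(np.round(theory, 4))
print(np.round(empirical, 4))   # agrees with `theory` up to sampling error

# One-predictor check: the slope entry of sigma^2 (X'X)^{-1} equals
# sigma^2 / sum((x - xbar)^2), the simple-regression formula.
Xs = np.column_stack([np.ones(n), x1])
slope_var = (sigma**2 * np.linalg.inv(Xs.T @ Xs))[1, 1]
print(np.isclose(slope_var, sigma**2 / np.sum((x1 - x1.mean())**2)))  # True
```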

  • $X^\prime X$ is closely related to (and determines) the Mahalanobis Distance, which according to my visual explanation at https://stats.stackexchange.com/a/62147/919 can be understood in terms of a linear transformation of the column space of $X.$ As a generalization of the one-dimensional situation, that transformation determines the variance matrix of the parameter estimates in a clear geometric sense that is equally "fairly obvious." – whuber Aug 09 '23 at 16:18
  • Sorry, I'm probably missing something here, but I don't really see the link between the Mahalanobis Distance and this? – Geoff Aug 09 '23 at 17:06
  • The Mahalanobis distance reduces the question to a collection of separate one-dimensional situations that are identical to the one you take to be "fairly intuitive." – whuber Aug 09 '23 at 17:33
  • I've tried reading up on the Mahalanobis distance and I'm really quite lost now. I'm unsure why $(X'X)^{-1}$ would be the multi-dimensional analogue of dividing by $\sum (x - \bar{x})^2$: I see that this is the covariance matrix when the mean is 0, but why would the formula for the variance of the OLS estimator lead to this specific expression where the mean is 0, unlike the single-dimension case? And the Mahalanobis distance transforms the axes so that they measure the covariance between the variables? I'm not sure why these are linked? – Geoff Aug 10 '23 at 11:33
  • Those questions are best answered by a study of linear algebra, with a special focus on the geometric interpretations of vectors and matrices and their operations. – whuber Aug 10 '23 at 13:24
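A minimal sketch of the reduction whuber describes in these comments (again assuming `numpy`; the setup is illustrative): diagonalizing $X'X$ rotates the regressors into orthogonal directions, and in those coordinates each coefficient's variance is $\sigma^2$ divided by that direction's sum of squares, exactly the one-dimensional form.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 500, 2.0
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])

# Diagonalize X'X = U diag(lam) U' with U orthogonal.
lam, U = np.linalg.eigh(X.T @ X)

# The rotated regressors Z = X U are orthogonal, and the sum of squares
# of the j-th rotated column is lam[j]:
Z = X @ U
print(np.allclose(Z.T @ Z, np.diag(lam)))                        # True

# So the rotated coefficients gamma-hat = U' beta-hat have a diagonal
# covariance sigma^2 diag(1/lam): each one is "sigma^2 over the sum of
# squares of its regressor", just like the single-predictor case.
full_cov = sigma**2 * np.linalg.inv(X.T @ X)
print(np.allclose(U.T @ full_cov @ U, np.diag(sigma**2 / lam)))  # True
```

In other words, $(X'X)^{-1}$ applies the usual "divide by the sum of squares" simultaneously along each principal axis of the column space of $X$.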

0 Answers