
If we assume we are fitting the following regression model using ordinary least squares or maximum likelihood: $$ Y_i=X_i\hat{\beta} +\varepsilon_i \\ \varepsilon_i \sim N(0, \hat\sigma^2)$$

Computing $\hat\beta$, $\text{Var}(\hat\beta)$, and $\hat\sigma^2$ is straightforward using the usual formulas, but what are the formulas for $\text{Var}(\hat\sigma^2)$ and $\text{Cov}(\hat\beta, \hat\sigma^2)$? That is, what is the variance of the residual variance and what is its covariance with the coefficient estimates? The residual variance is an estimated quantity just like the coefficients, so I expect it to have a sampling variance and a sampling covariance with the model coefficients.
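For concreteness, here is a minimal simulation sketch (Python with NumPy; the design, sample sizes, and parameter values are made up for illustration) that empirically probes these sampling quantities by refitting OLS over repeated draws from a fixed design with iid normal errors. It is not an answer, just a way to see the variance and covariance the question is asking about:

```python
import numpy as np

# Monte Carlo sketch: repeatedly draw data from a fixed design with iid
# normal errors, refit by OLS each time, and inspect the sampling
# variance of sigma^2-hat and its covariance with beta-hat.

rng = np.random.default_rng(0)
n, p = 200, 3                      # illustrative sample size / number of columns in X
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # fixed design
beta_true = np.array([1.0, 0.5, -2.0])
sigma2_true = 4.0

n_sims = 5000
draws = np.empty((n_sims, p + 1))  # columns: beta-hat (p entries), then sigma^2-hat
for s in range(n_sims):
    y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2_true), size=n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2_hat = resid @ resid / (n - p)   # usual unbiased residual variance
    draws[s] = np.append(beta_hat, sigma2_hat)

# Empirical sampling covariance of (beta-hat, sigma^2-hat); the last
# row/column gives Var(sigma^2-hat) and Cov(beta-hat, sigma^2-hat).
print(np.cov(draws, rowvar=False))
```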

Noah
  • Are you assuming the normal error is correctly specified? – Ben Aug 15 '22 at 15:57
  • Do you need hats in the model specification? – dipetkov Aug 15 '22 at 16:19
  • @Ben, no. In OLS, there is no assumption made on the error, but its variance is estimated. For MLE, I am assuming one is performing a usual linear regression assuming normal errors. Whether it is correctly specified shouldn't have much to do with the estimated asymptotic covariance matrix (unless it does, in which case hopefully an answer will describe so). – Noah Aug 15 '22 at 17:12
  • @dipetkov Hopefully it is clear that I am describing estimating the parameters of the usual linear regression model. If you think the notation can be improved, please improve it. – Noah Aug 15 '22 at 17:13
  • @Noah OK, thanks. Could you state your modeling assumptions. Are the data iid? Is $\sigma^2<\infty$? Is the mean actually linear? etc – Ben Aug 15 '22 at 19:44
  • @Ben Assuming the usual conditions for the OLS standard errors to be valid. No assumptions on the functional form; just fitting a model and collecting its outputs. – Noah Aug 15 '22 at 20:33

0 Answers