My question is about t-test values for linear regression coefficients under OLS and MLE. There are a few related posts on this site (here and here), but I could not find an exact answer.
The following are a few starting points to set the context:
(1) We know that OLS and MLE produce the same estimates for the regression coefficients (e.g., $\hat{\beta}_0$, $\hat{\beta}_1$).
(2) We know that OLS estimation of the regression coefficients does not itself assume a normal distribution for $\epsilon$. However, to obtain p-values for the estimated coefficients, we typically assume $\epsilon$ is normally distributed, just as MLE does.
(3) Below is an excerpt from Bain and Engelhardt's Introduction to Probability and Mathematical Statistics (p. 514). The statistics $T_0$ and $T_1$ are the t-tests for the intercept and slope under MLE in simple linear regression. Note that the population $\sigma^2$ cancels out, so only the unbiased estimate $\tilde{\sigma}^2$ appears in the t-statistic. The estimate of $\sigma^2$ in OLS is the same unbiased $\tilde{\sigma}^2$.
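To make points (1)-(3) concrete, here is a small numerical sketch of my own (simulated data; the true parameter values and sample size are arbitrary choices, not from the textbook). It checks (a) that numerically maximizing the Gaussian likelihood reproduces the OLS closed-form coefficient estimates, and (b) that the t-statistics built from the unbiased $\tilde{\sigma}^2 = \text{SSE}/(n-2)$ match the slope t-statistic implied by `scipy.stats.linregress`:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)  # true beta0=2, beta1=0.5 (arbitrary)

# OLS closed form
sxx = np.sum((x - x.mean()) ** 2)
b1_ols = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0_ols = y.mean() - b1_ols * x.mean()

# MLE: minimize the Gaussian negative log-likelihood over (beta0, beta1, log sigma)
def nll(theta):
    b0, b1, log_s = theta
    s2 = np.exp(2.0 * log_s)
    r = y - b0 - b1 * x
    return 0.5 * n * np.log(2.0 * np.pi * s2) + np.sum(r ** 2) / (2.0 * s2)

res = minimize(nll, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-12, "fatol": 1e-12, "maxiter": 10000})
b0_mle, b1_mle, _ = res.x

# t-statistics using the unbiased sigma_tilde^2 = SSE / (n - 2)
resid = y - b0_ols - b1_ols * x
sigma_tilde2 = np.sum(resid ** 2) / (n - 2)
t1 = b1_ols / np.sqrt(sigma_tilde2 / sxx)                        # H0: beta1 = 0
t0 = b0_ols / np.sqrt(sigma_tilde2 * (1 / n + x.mean() ** 2 / sxx))  # H0: beta0 = 0

lr = stats.linregress(x, y)
print(b0_ols - b0_mle, b1_ols - b1_mle)  # both ~ 0: OLS and MLE coefficients agree
print(t1 - lr.slope / lr.stderr)         # ~ 0: manual t1 matches linregress
```

Since, for any fixed $\sigma$, maximizing the Gaussian likelihood is equivalent to minimizing the sum of squared errors, the coefficient estimates necessarily coincide; the numerical optimization above just confirms this.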
My question is: Are t-statistic formulas and values for linear regression coefficients the same across OLS and MLE?
I believe they are. If not, can you point out where my reasoning goes wrong? Thank you so much; I look forward to your insights, suggestions, and comments.
(To limit the scope of the discussion: let's assume $\sigma^2$ is unknown, so we use the t-test rather than the standard normal. I asked a related, but not identical, question here; I did not have space to discuss this point there, so I am asking it as a new question. You might point out that statistics other than the t-test can be used to test regression coefficients under MLE, but let's stick to the t-test here. Thank you so much.)
Added comments - Part 1:
(1) You might wonder how the estimate of $\sigma^2$ is derived under MLE. If so, please refer to this PDF by Ryan Adams. The short answer is that, under MLE, $\hat{\sigma}^2 = \frac{\sum_{i=1}^n (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)^2}{n}$ is a biased estimate of $\sigma^2$. However, this does not matter here, since the t-test shown above uses the unbiased $\tilde{\sigma}^2$.
(2) I took a photo of the section on $\sigma^2$ in OLS (p. 502). It provides context on how $\sigma^2$ relates to $\tilde{\sigma}^2$.
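The bias relation mentioned in (1) can be checked numerically. A minimal sketch of my own (simulated data, arbitrary true parameters): the MLE $\hat{\sigma}^2$ is exactly $\frac{n-2}{n}\tilde{\sigma}^2$, so the distinction between the two disappears once the t-statistic is built from $\tilde{\sigma}^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = rng.uniform(0, 5, n)
y = 1.0 + 0.8 * x + rng.normal(0, 0.5, n)  # arbitrary true parameters

# Fit simple linear regression by OLS closed form
sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b0 = y.mean() - b1 * x.mean()
sse = np.sum((y - b0 - b1 * x) ** 2)

sigma_hat2 = sse / n          # MLE estimate: biased downward
sigma_tilde2 = sse / (n - 2)  # unbiased estimate, used in the t-statistic
print(np.isclose(sigma_hat2, (n - 2) / n * sigma_tilde2))  # True
```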
Added comments - Part 2 (added July 21, 2023):
The picture above shows how $\tilde{\sigma}^2$ is derived under OLS. A natural follow-up question is how $\tilde{\sigma}^2$ is derived under MLE.
Note that, as mentioned in the picture above, $\tilde{\sigma}^2$ is the same under OLS and MLE. Below are Theorems 15.3.4, 15.3.5, and 15.3.6 on this point from Bain and Engelhardt's Introduction to Probability and Mathematical Statistics (pp. 510-514).
[Images of Theorems 15.3.4-15.3.6 omitted.]
However, under MLE the unbiased estimate of $\sigma^2$ is also $\tilde{\sigma}^2$. Thus the result $V \sim \chi^2(n-2)$ shown in the picture does not depend on whether $\tilde{\sigma}^2$ comes from OLS or MLE, since $\tilde{\sigma}^2$ is the same in both.
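As a sanity check on that distributional claim, here is a quick Monte Carlo sketch of my own (simulated data with an assumed known true $\sigma$). It confirms that $V = (n-2)\tilde{\sigma}^2/\sigma^2 = \text{SSE}/\sigma^2$ behaves like a $\chi^2(n-2)$ variable: with $n = 10$, the simulated mean of $V$ should be close to $n - 2 = 8$.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sigma, reps = 10, 2.0, 20000  # arbitrary illustration values
x = np.linspace(0, 5, n)
sxx = np.sum((x - x.mean()) ** 2)

vs = []
for _ in range(reps):
    y = 1.0 + 0.5 * x + rng.normal(0, sigma, n)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * x.mean()
    sse = np.sum((y - b0 - b1 * x) ** 2)
    vs.append(sse / sigma ** 2)  # V = (n-2)*sigma_tilde^2 / sigma^2 = SSE/sigma^2

vs = np.array(vs)
print(vs.mean())  # should be close to n - 2 = 8, the chi-square(n-2) mean
```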