From my course notes, I see that when working with a quantitative variable, we can standardize the sample mean so that it has a normal distribution (per the central limit theorem) as long as the sample size is "large". As a result, the standardized sample mean is (approximately) normally distributed (whether we are working with $\sigma$ or $s$) as long as the sample size is "large":
$$\frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}} \sim N(0,1),$$
$$\frac{\overline{X} - \mu}{\frac{s}{\sqrt{n}}} \sim N(0,1).$$
If the sample size is "small", then we will have a t-distribution:
$$\frac{\overline{X} - \mu}{\frac{s}{\sqrt{n}}} \sim t_{n-1}.$$
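(As a quick numerical check of this recap, here is a minimal simulation sketch; the normal population, $n = 10$, and the number of replications are arbitrary choices of mine, not from the course notes.)

```python
# Simulate the one-sample pivot (xbar - mu)/(s/sqrt(n)) and compare its tail
# to t_{n-1} and N(0,1); with normal data the pivot follows t_{n-1} exactly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, mu, sd = 10, 100_000, 5.0, 2.0   # all values assumed for illustration

samples = rng.normal(loc=mu, scale=sd, size=(reps, n))
xbar = samples.mean(axis=1)
s = samples.std(axis=1, ddof=1)
pivot = (xbar - mu) / (s / np.sqrt(n))

print("P(pivot > 1.96):", np.mean(pivot > 1.96))
print("t_{n-1} tail:   ", 1 - stats.t.cdf(1.96, df=n - 1))
print("N(0,1) tail:    ", 1 - stats.norm.cdf(1.96))
```

For $n = 10$ the simulated tail probability tracks the $t_{n-1}$ value rather than the $N(0,1)$ value, which is why the t-distribution is used when the sample size is small.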
However, we recently started looking at inference for linear regression, and I see the following two equations:
$$\frac{\hat{\mu}_{y|x} - {\mu}_{y|x}} {\sigma{\sqrt{\frac{1}{n}+\frac{(x-\overline{x})^2}{\sum_i(x_i-\overline{x})^2 }}}} \sim N(0,1),$$
$$\frac{\hat{\mu}_{y|x} - {\mu}_{y|x}} {s{\sqrt{\frac{1}{n}+\frac{(x-\overline{x})^2}{\sum_i(x_i-\overline{x})^2 }}}} \sim t_{n-2}.$$
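For concreteness, here is a small sketch of how the standard-error term $s\sqrt{\frac{1}{n}+\frac{(x-\overline{x})^2}{\sum_i(x_i-\overline{x})^2}}$ in these formulas would be computed from data (the data, true line, noise level, and the point $x$ below are made-up values, just for illustration):

```python
# Fit a simple linear regression by least squares and compute the
# estimated mean at a new x together with its standard error.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 25)                                 # assumed design points
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)     # assumed true line + noise

n = x.size
xbar = x.mean()
b1 = np.sum((x - xbar) * (y - y.mean())) / np.sum((x - xbar) ** 2)
b0 = y.mean() - b1 * xbar

resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))                  # residual standard error, df = n - 2

x_new = 4.0                                                # the x at which mu_{y|x} is estimated
mu_hat = b0 + b1 * x_new
se_mu = s * np.sqrt(1 / n + (x_new - xbar) ** 2 / np.sum((x - xbar) ** 2))
print(mu_hat, se_mu)
```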
I am wondering whether the quantity in the second equation is also (approximately) normally distributed when the sample size is "large". In other words, if we have a large sample size, can we still use the central limit theorem and show that:
$$\frac{\hat{\mu}_{y|x} - {\mu}_{y|x}} {s{\sqrt{\frac{1}{n}+\frac{(x-\overline{x})^2}{\sum_i(x_i-\overline{x})^2 }}}} \sim N(0,1).$$
The course notes make it seem as though, when working with $\hat{\mu}_{y|x}$ (an estimated mean), we cannot use the central limit theorem the way we can for $\overline{x}$ (a sample mean). In the case of linear regression, it seems that the only thing determining whether we have a normal distribution or a t-distribution is whether we use $\sigma$ or $s$, respectively.
Is this correct, and if so, why can't we apply the central limit theorem in the linear regression case?
In the case of a large sample size you have the exact result $$\frac{\bar{X}-\mu}{\frac{\sigma}{\sqrt{n}}} \sim N(0,1)$$ and the approximation $$\frac{\bar{X}-\mu}{\frac{s}{\sqrt{n}}} \sim t_{n-1} \underset{{n \to \infty}}{\sim} N(0,1).$$
You can do the same for linear regression: $t_{n-2}$ likewise approaches $N(0,1)$ as $n \to \infty$.
– Sextus Empiricus Oct 04 '17 at 11:21
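To illustrate that comment numerically, here is a minimal simulation sketch (the true line, error SD, sample size $n$, and the point $x$ are all arbitrary assumed values):

```python
# Simulate the regression pivot (mu_hat - mu_{y|x}) / (s * sqrt(1/n + ...))
# many times with a fairly large n and compare its tail to t_{n-2} and N(0,1).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps = 200, 10_000
beta0, beta1, sigma, x_new = 2.0, 0.5, 1.0, 4.0   # assumed true model and point x
x = np.linspace(0, 10, n)
xbar = x.mean()
sxx = np.sum((x - xbar) ** 2)
mu_true = beta0 + beta1 * x_new

pivots = np.empty(reps)
for i in range(reps):
    y = beta0 + beta1 * x + rng.normal(scale=sigma, size=n)
    b1 = np.sum((x - xbar) * (y - y.mean())) / sxx
    b0 = y.mean() - b1 * xbar
    s = np.sqrt(np.sum((y - b0 - b1 * x) ** 2) / (n - 2))
    se = s * np.sqrt(1 / n + (x_new - xbar) ** 2 / sxx)
    pivots[i] = (b0 + b1 * x_new - mu_true) / se

print("P(pivot > 1.96):", np.mean(pivots > 1.96))
print("t_{n-2} tail:   ", 1 - stats.t.cdf(1.96, df=n - 2))
print("N(0,1) tail:    ", 1 - stats.norm.cdf(1.96))
```

With $n$ this large the simulated tail probability should essentially agree with both the $t_{n-2}$ and the $N(0,1)$ values, which is the comment's point: for large samples the two reference distributions are practically indistinguishable.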