I'm studying regression analysis, but I'm struggling to really understand how degrees of freedom are calculated. For example, in the simple scenario where $Y_i=\beta_0+\beta_1 X_i + \epsilon_i$ (and all the standard assumptions hold), I read that
$\frac{1}{\sigma^2} \sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2 \sim \chi^2_{1}$
This seems plausible given an informal argument like "$\hat{Y}_i$ has two parameters and so two degrees of freedom, but $\bar{Y}$ takes up one degree of freedom, so you're left with 1", but I'm looking for an argument that's more theoretically grounded. Why does that sum have the same distribution as the square of a standard normal?
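To make the question concrete, here is the coordinate computation I can do (using the standard identity $\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}$, and writing $S_{xx} = \sum_i (X_i - \bar{X})^2$):

$$\hat{Y}_i - \bar{Y} = \hat{\beta}_1 (X_i - \bar{X}), \qquad \text{so} \qquad \frac{1}{\sigma^2}\sum_{i=1}^n (\hat{Y}_i - \bar{Y})^2 = \frac{\hat{\beta}_1^2 \, S_{xx}}{\sigma^2} = \left(\frac{\hat{\beta}_1}{\sigma/\sqrt{S_{xx}}}\right)^2.$$

Since $\hat{\beta}_1 \sim N(\beta_1, \sigma^2/S_{xx})$, this is a squared standard normal when $\beta_1 = 0$ (and a noncentral $\chi^2_1$ otherwise). But this is pure coordinate algebra; I'd like to see the 1 fall out of a geometric argument instead.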
I was able to understand why $\frac{1}{\sigma^2} \sum_{i=1}^n (Y_i-\bar{Y})^2 \sim \chi^2_{n-1}$ by viewing the sum of squares as the squared norm of the projection of the $\epsilon_i$ onto a subspace of dimension $n-1$. A proof of the case above along the same lines would be fantastic!
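As a sanity check, a quick Monte Carlo sketch seems consistent with the claim (this assumes $\beta_1 = 0$, so the statistic should be a central $\chi^2_1$; the sample size, $\sigma$, and design points below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, reps = 50, 2.0, 20_000
x = np.linspace(0.0, 1.0, n)  # arbitrary fixed design points

stats = np.empty(reps)
for r in range(reps):
    # Simulate under beta_0 = 1, beta_1 = 0, so the statistic is central chi^2_1
    y = 1.0 + rng.normal(0.0, sigma, n)
    # OLS estimates for simple linear regression
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    y_hat = b0 + b1 * x
    stats[r] = np.sum((y_hat - y.mean()) ** 2) / sigma**2

# A chi^2_1 variable has mean 1 and variance 2
print(stats.mean(), stats.var())
```

Comparing the empirical mean and variance against $1$ and $2$ is only a crude check, but it does match for me.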