I am revisiting some basic concepts involving t-tests and ANOVAs, and got tripped up early. I wanted to apply the concept of the lack-of-fit sum of squares to the one-sample t-test, but I wonder how this test can be treated as an ordinary least-squares problem, if at all. In linear least squares there are adjustable fitting parameters and a sum of squared errors that is minimized. One condition that is apparently satisfied as a result of the fit (when the model includes an intercept) is $$\sum_i \epsilon_i = \sum_i (y_i - \hat{y}_i) = 0,$$ where $\epsilon_i$ is the residual, i.e. the difference between the observed response $y_i$ and the fitted value $\hat{y}_i$. This condition allows cross terms to be set to zero when partitioning the error sum of squares (SSE) into pure-error and lack-of-fit terms, which are then used to compute the ratio on which a ratio test (a t-test or, more generally, an F-test) is based. At least that's how I understand the connection between these concepts (outlined, for instance, in this Wikipedia page).
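To make sure I am not fooling myself about the mechanics, here is a toy numerical check of that partition (simulated data; all names here are just my placeholders). It fits a line to data with replicates at each $x$ level and verifies both the zero-sum condition and that SSE splits into pure-error and lack-of-fit pieces with no cross term:

```python
import numpy as np

rng = np.random.default_rng(0)
x_levels = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = np.repeat(x_levels, 4)                      # 4 replicates per level
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=x.size)

# Ordinary least-squares fit of y = b0 + b1*x.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

print(np.sum(y - y_hat))                        # ~0: residuals sum to zero

sse = np.sum((y - y_hat) ** 2)                  # total residual SS
ss_pe = 0.0                                     # pure-error SS
ss_lof = 0.0                                    # lack-of-fit SS
for xl in x_levels:
    grp = y[x == xl]
    ss_pe += np.sum((grp - grp.mean()) ** 2)
    ss_lof += grp.size * (grp.mean() - (beta[0] + beta[1] * xl)) ** 2

print(np.isclose(sse, ss_pe + ss_lof))          # True: cross term vanishes
```

The cross term vanishes here because, within each $x$ level, the deviations from the group mean sum to zero, which is the kind of zero-sum condition I had in mind above.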
However, in a one- or two-sample t-test there are no adjustable parameters at all, since we stipulate a rigid model (say, that the population mean, or the difference of means, equals a fixed value). How can one show that the sum of errors equals zero, which would justify the partition of the summed squares? This seems essential to showing the connection with least squares, or perhaps it isn't? Maybe a preliminary question is: what, if anything, is being fitted in a one-sample t-test?
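To make my puzzlement concrete, here is a minimal sketch (simulated data; `mu0` and the other names are just my placeholders): against a fixed hypothesized mean the "residuals" need not sum to zero, whereas against the sample mean, which is the least-squares estimate in an intercept-only model, they sum to zero by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(loc=5.3, scale=2.0, size=30)   # one sample

mu0 = 5.0                      # rigid null value: nothing is fitted
print(np.sum(y - mu0))         # generally nonzero

mu_hat = y.mean()              # least-squares "fit" of an intercept-only model
print(np.sum(y - mu_hat))      # ~0, by the normal equations
```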
I realize there are related questions and am going through some of them, but I believe my question differs significantly.