I know that with OLS we minimize the sum of squared residuals, but does that imply that we maximize SSE? From the following $R^2$ formula $$ R^2 = \frac{SSE}{SST} = 1- \frac{SSR}{SST}$$ and the fact that SST = SSE + SSR, it really seems to be true, but I am not completely sure.

Popopo

1 Answer


$SST = \sum_i (y_i - \bar y)^2$ is a constant that does not depend on your model, so it changes nothing for the optimization. If you look at the $R^2$ definition

$$ R^2 = 1 - \frac{SSE}{SST} $$

the only term that depends on the model is $-SSE = - \sum_i (y_i - \hat y_i)^2$. Maximizing the negative of the sum of squared errors is equivalent to minimizing the sum of squared errors, which is exactly what OLS does.
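
To spell out the decomposition the question takes for granted (this derivation is my own addition, and it assumes an OLS fit with an intercept), $SST = SSE + SSR$ follows from expanding the square:

$$
\begin{aligned}
SST &= \sum_i (y_i - \bar y)^2
     = \sum_i \bigl( (y_i - \hat y_i) + (\hat y_i - \bar y) \bigr)^2 \\
    &= \underbrace{\sum_i (y_i - \hat y_i)^2}_{SSE}
     + \underbrace{\sum_i (\hat y_i - \bar y)^2}_{SSR}
     + 2 \sum_i (y_i - \hat y_i)(\hat y_i - \bar y),
\end{aligned}
$$

where the cross term vanishes because OLS residuals are orthogonal to the fitted values and, with an intercept, sum to zero. So with $SST$ fixed, maximizing $-SSE$ is the same as maximizing $SSR$, and hence $R^2$.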

(Notice that I corrected the notation: you used $SSR$ in place of $SSE$ and vice versa.)
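
For concreteness, here is a minimal numerical sketch (the NumPy check and the simulated data are my own additions, not something from the question) confirming that $SST = SSE + SSR$ holds for an OLS fit with an intercept and that $R^2$ comes out the same either way:

```python
import numpy as np

# Simulated data (hypothetical) for a simple linear model.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + rng.normal(size=100)

# OLS fit with an intercept via least squares on [1, x].
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

sst = np.sum((y - y.mean()) ** 2)      # total sum of squares: constant, model-independent
sse = np.sum((y - y_hat) ** 2)         # sum of squared errors: what OLS minimizes
ssr = np.sum((y_hat - y.mean()) ** 2)  # regression (explained) sum of squares

print(np.isclose(sst, sse + ssr))             # True: the decomposition holds
print(np.isclose(1 - sse / sst, ssr / sst))   # True: R^2 agrees either way
```

Since $SST$ is fixed by the data, minimizing $SSE$ forces $SSR = SST - SSE$ to its maximum.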

Tim
  • I would have thought it was $R^2=1-\dfrac{SS_{\text{residuals}}}{SS_{\text{total}}}$, which is presumably where $SSR$ came from. I am not sure that residuals and errors are quite the same thing. – Henry Jul 04 '21 at 20:53
  • There is also a key point that $\sum_i ( \hat y_i - \bar y)=0$ with OLS, which may not apply in other models. – Henry Jul 04 '21 at 20:55
  • @Henry the SSR, SST, SSE naming convention is more common; googling those abbreviations leads to many resources using them as above. – Tim Jul 04 '21 at 21:52