Throughout my student life so far, I have always considered the mean squared error to be calculated as $MSE=\frac{1}{n}\sum(Y_i-\hat{Y}_i)^2$. However, I was looking at one of my statistics modules today, and the slide stated that $MSE=\frac{SSE}{n-2}$.
And that would mean that $ MSE=\frac{1}{n-2}\sum(Y_i-\hat{Y}_i)^2$ since $ SSE=\sum(Y_i-\hat{Y}_i)^2$.
Upon researching this, I found the following description on Wikipedia:
mean squared error is sometimes used to refer to the unbiased estimate of error variance: the residual sum of squares divided by the number of degrees of freedom. This definition for a known, computed quantity differs from the above definition for the computed MSE of a predictor, in that a different denominator is used.
I would like to know whether there is a single correct definition, or whether the two MSEs here are actually referring to completely different concepts. How do I go about understanding the reason for the difference?
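To make the gap concrete, here is a minimal numerical sketch of what I mean (my own illustration, using simulated data and numpy's `polyfit`; nothing here comes from the slides). It fits a simple linear regression and computes both versions from the same residuals:

```python
# Minimal sketch: compare SSE/n with SSE/(n - 2) for a simple linear regression.
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)   # simulated data, just for illustration

# Least-squares fit of y = b0 + b1 * x (polyfit returns highest degree first)
b1, b0 = np.polyfit(x, y, 1)
y_hat = b0 + b1 * x

sse = np.sum((y - y_hat) ** 2)   # SSE = sum of squared residuals
mse_n = sse / n                  # the definition I learned: divide by n
mse_df = sse / (n - 2)           # the slide's version: divide by the degrees of freedom

print(mse_n, mse_df)             # the two values differ by a factor of n / (n - 2)
```

With the same fitted line and the same residuals, the two formulas clearly give different numbers, which is exactly what confuses me.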
