
Say I have a vector of values $v$ and an estimate of those values, which I call $e$. To measure the goodness of the estimate I compute $\operatorname{mean}\big((v - e)^2\big)$, which is the MSE, and to get an error on a scale between 0 and 1 I divide this number by the range $\max(v) - \min(v)$. This looks intuitively correct, but I am wondering whether it is statistically valid. I was thinking this would be what is called normalized MSE (NMSE), but the definitions my Google search turns up for NMSE do not seem to agree with it. Could someone tell me whether what I am doing makes sense, and how it differs from NMSE?
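For concreteness, here is a minimal sketch of the quantity described above (using NumPy; the example numbers are hypothetical), alongside two other normalizations that often appear under the names NRMSE and NMSE. These are common conventions rather than definitive definitions:

```python
import numpy as np

def range_scaled_mse(v, e):
    """MSE divided by the range of v, as described in the question.
    Note the units: the MSE is in squared units while the range is in
    linear units, so the ratio is not dimensionless and need not be in [0, 1]."""
    mse = np.mean((v - e) ** 2)
    return mse / (np.max(v) - np.min(v))

def nrmse_range(v, e):
    """One common convention (often labelled NRMSE): RMSE divided by the
    range of v. This is dimensionless, though still not guaranteed <= 1."""
    rmse = np.sqrt(np.mean((v - e) ** 2))
    return rmse / (np.max(v) - np.min(v))

def nmse_variance(v, e):
    """Another convention sometimes called NMSE: MSE divided by the variance
    of v, i.e. the error relative to a predict-the-mean baseline."""
    return np.mean((v - e) ** 2) / np.var(v)

# Hypothetical data on a [0, 1000] scale, as in the question.
v = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])
e = v + np.array([10.0, -20.0, 15.0, -5.0, 30.0])
print(range_scaled_mse(v, e), nrmse_range(v, e), nmse_variance(v, e))
```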

user5054
  • Why would you think dividing by the range of $v$ would cause your measure to lie within $(0, 1)$? It's easy to come up with counterexamples to this. – dsaxton Apr 01 '16 at 22:37
  • I agree that this measure may not always be in $[0, 1]$, but for my case it is. My main point is that the result of such a normalization sounds more intuitive as an error; that is, 0.4 sounds better as an error than 400 when the range of my values in $v$ is $[0, 1000]$. – user5054 Apr 01 '16 at 22:47
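To illustrate the counterexample point raised in the comments, a small hypothetical example (the numbers are made up) where MSE divided by the range is far outside $[0, 1]$:

```python
import numpy as np

# A narrow range of v combined with a badly biased estimate.
v = np.array([0.0, 1.0])           # range max(v) - min(v) = 1
e = np.array([100.0, 101.0])       # estimate off by 100 everywhere
mse = np.mean((v - e) ** 2)        # 10000.0
print(mse / (np.max(v) - np.min(v)))  # 10000.0 -- nowhere near [0, 1]
```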

0 Answers