
I have a time-series regression model where the output is always in the range of 6000-6050. After training my model, I get a Mean Absolute Error of around 18 and hence a very low Mean Absolute Percentage Error, since the denominators (the actual values) are very large compared to the numerators (the errors).

But in reality, being off by 18 when the target effectively varies over a range of only about 50 is not that good. I get that subtracting an offset of 6000 fixes the problem in this case, but is there a metric that does this already? Specifically, is there an error metric defined with respect to the range of the variable? Or should I have scaled the target variable in the first place?
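For concreteness, here is a minimal sketch of what I mean, with made-up `y_true`/`y_pred` arrays on this scale, comparing MAPE against MAE normalized by the observed target range:

```python
import numpy as np

# Hypothetical data just to illustrate the scale issue:
# actuals live in 6000-6050, errors are on the order of 18.
rng = np.random.default_rng(0)
y_true = rng.uniform(6000, 6050, size=200)
y_pred = y_true + rng.normal(0, 22, size=200)

mae = np.mean(np.abs(y_true - y_pred))
mape = np.mean(np.abs((y_true - y_pred) / y_true))        # tiny, because actuals ~6000
range_relative_mae = mae / (y_true.max() - y_true.min())  # large, because range ~50

print(f"MAE: {mae:.1f}")
print(f"MAPE: {mape:.2%}")
print(f"MAE / target range: {range_relative_mae:.2%}")
```

MAPE comes out well under 1%, while MAE divided by the target range is a substantial fraction, which matches my intuition that the model is not actually doing well.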

Apologies if this is commonly known, as I am new to the field.

  • Regarding the use of MAPE: https://stats.stackexchange.com/questions/299712/what-are-the-shortcomings-of-the-mean-absolute-percentage-error-mape – Ryan Volpi May 03 '22 at 13:29

0 Answers