I need a naive time series forecast as a benchmark, so I want to split my dataset into train, validation, and test sets. If I divide the dataset into just train and test sets, the code is as follows:
library(fpp2)   # loads the forecast package and the ausbeer data

# Chronological train/test split (ausbeer is quarterly)
train <- window(ausbeer, start = c(1956, 1), end = c(2007, 4))
test  <- window(ausbeer, start = c(2008, 1))

# Seasonal naive forecast; the horizon should cover the whole test set,
# not h = 1, so that accuracy() can compare every test observation
naiveS <- snaive(train, h = length(test))
accuracy(naiveS, test)[, 1:5]   # ME, RMSE, MAE, MPE, MAPE on train and test
What if I want to use the data from 2005-2007 as my validation set? Could you please let me know how to compute the model's performance on that validation set?
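One way to do this (a sketch, assuming `ausbeer` runs through 2010 Q2 as in the `fpp2` package) is to carve out three consecutive windows and evaluate the seasonal naive forecast, fitted on the training window only, against the validation window via `accuracy()`:

```r
library(forecast)               # snaive(), accuracy()
data(ausbeer, package = "fpp2") # quarterly beer production, 1956 Q1 - 2010 Q2

# Three-way chronological split
train <- window(ausbeer, end = c(2004, 4))
valid <- window(ausbeer, start = c(2005, 1), end = c(2007, 4))
test  <- window(ausbeer, start = c(2008, 1))

# Fit on the training window only; forecast far enough to cover validation
fit <- snaive(train, h = length(valid))

# With a second argument, accuracy() reports both in-sample (training)
# and out-of-sample (here: validation) error measures
accuracy(fit, valid)
```

Once you are done comparing candidate benchmarks on the validation window, the usual practice is to refit on `train` plus `valid` combined and report final accuracy on `test`.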
I read this post about sliding/rolling windows versus expanding windows, and this page. As far as I understand, tsCV() is based on an expanding window:
e <- tsCV(ausbeer, snaive, h = 1)  # one-step-ahead expanding-window CV errors
sqrt(mean(e^2, na.rm = TRUE))      # overall CV RMSE
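If you want the cross-validated performance over the 2005-2007 validation period only, one option (a sketch, not the only way) is to note that `tsCV()` returns a time series of errors aligned with the original data, so you can `window()` it to the validation dates before computing the RMSE:

```r
library(forecast)
data(ausbeer, package = "fpp2")

# One-step-ahead errors from an expanding-window cross-validation
e <- tsCV(ausbeer, snaive, h = 1)

# Keep only the errors whose target dates fall in the validation period
e_valid <- window(e, start = c(2005, 1), end = c(2007, 4))
sqrt(mean(e_valid^2, na.rm = TRUE))  # RMSE on 2005-2007 only
```

The same windowing trick gives you the test-period RMSE from the same error series.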
When using a naive forecasting method, how do I obtain the performance on the train, validation, and test sets?
Comment: "…test. If you mean 'intermediate evaluation set for tuning hyperparameters', then there is no such set, because the naive forecast does not have any hyperparameters to be tuned." – Stephan Kolassa, Feb 04 '23 at 06:17