I've been reading a few posts from distinguished members of this community about R^2 and time series forecasting:
1. What is the problem with using R-squared in time series models?
2. R-squared to compare forecasting techniques
However, I am wondering whether the problems with using R^2 to evaluate time series forecasts could be mitigated by splitting the series into a training and a testing set, and then computing R^2 between the forecasted values and the true values on the test set. One problem I can see with this method is that as you forecast further and further out, the correlation (of which R^2 is one measure) between the forecasted and true values will decrease, so it's not a great measure of the percent of variation explained. Is that the main issue, or is there a better reason why this is not an advisable metric for time series performance?
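To make the question concrete, here is a minimal sketch (my own toy setup, not from the linked posts) of the procedure I have in mind: simulate an AR(1) series, hold out a test segment, produce multi-step forecasts, and compare two "R^2"-style numbers on the test set. Note that the squared correlation is bounded in [0, 1], while the 1 - SSE/SST version can go negative, which is part of why I'm unsure which (if either) is meaningful here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series x_t = 0.8 * x_{t-1} + noise (illustrative assumption)
n = 500
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()

train, test = x[:400], x[400:]

# Crude AR(1) coefficient estimate from the lag-1 sample correlation
phi = np.corrcoef(train[:-1], train[1:])[0, 1]

# h-step-ahead forecasts from the last training value: phi^h * x_T,
# which decay toward the (zero) mean as the horizon h grows
horizons = np.arange(1, len(test) + 1)
forecasts = (phi ** horizons) * train[-1]

# (a) "R^2" as the squared correlation between forecasts and actuals
r2_corr = np.corrcoef(forecasts, test)[0, 1] ** 2

# (b) "R^2" as 1 - SSE/SST on the test set; this one can be negative
r2_oos = 1 - np.sum((test - forecasts) ** 2) / np.sum((test - test.mean()) ** 2)

print("squared correlation:", r2_corr)
print("1 - SSE/SST:       ", r2_oos)
```

In this sketch the forecasts collapse toward the mean within a few steps, so both numbers end up small, which illustrates the long-horizon concern in my question.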