
Suppose I have a time series with mean $0$. If I were assuming an $MA(1)$ model, then my prediction at each time $t$ would be proportional to how far "off" my last prediction was.
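
Concretely, for an $MA(1)$ model $x_t = \varepsilon_t + \theta\varepsilon_{t-1}$ with the $\varepsilon_t$ white noise, the one-step-ahead forecast is

$$\hat{x}_{t+1} = \theta\,\hat{\varepsilon}_t, \qquad \hat{\varepsilon}_t = x_t - \hat{x}_t,$$

i.e. $\theta$ times the most recent forecast error.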

This feels strange to me.

Let's say my model is just $x_t = \varepsilon_{t-1}+\varepsilon_t$. I observe $x_1=1$ and predict $\hat{x}_2=1$. The next day I observe $x_2=0.9$. Still better than average, but now my next prediction is $\hat{x}_3=-0.1$, because that's my previous forecast error, $x_2-\hat{x}_2 = 0.9 - 1 = -0.1$. That seems like an even more bizarre prediction than $\hat{x}_3=0$.
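
Here is a minimal sketch of the recursion I have in mind (plain Python, with the usual starting assumption that the pre-sample innovation $\varepsilon_0$ is $0$), which reproduces the numbers above:

```python
# Minimal sketch: one-step-ahead forecasts for x_t = eps_{t-1} + eps_t
# (theta = 1, mean 0), assuming the pre-sample innovation eps_0 = 0
# so the recursion has somewhere to start.
observations = [1.0, 0.9]

eps_hat = 0.0  # assumed starting innovation eps_0
for t, x in enumerate(observations, start=1):
    forecast = eps_hat        # x_hat_t = theta * eps_hat_{t-1}, with theta = 1
    eps_hat = x - forecast    # recovered innovation = forecast error
    print(f"x_{t} = {x:.1f}; forecast was {forecast:.1f}; "
          f"next forecast x_hat_{t+1} = {eps_hat:.1f}")
```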

I understand that the point of this is to make the correlation between $x_t$ and $x_{t+h}$ vanish for $h$ sufficiently large ($h>1$ in this case), and that there therefore has to be an "overcompensation" for the positive correlation between $x_t$ and $x_{t+1}$ (which explains the $\hat{x}_3<0$ weirdness).
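
To make that correlation structure explicit (standard $MA(1)$ results, with $\operatorname{Var}(\varepsilon_t)=\sigma^2$):

$$\gamma(0)=(1+\theta^2)\sigma^2,\qquad \gamma(1)=\theta\sigma^2,\qquad \gamma(h)=0 \text{ for } h>1,$$

so the autocorrelation is $\rho(1)=\theta/(1+\theta^2)$ and cuts off completely after lag $1$.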

But does this have any practical benefit? It clearly must, since MA makes up almost half the letters in ARIMA. What's a better way to understand why it belongs in these models?

Terence C
