
I was preparing for CFA and encountered this question, which is quite puzzling.

To use an autoregressive model, the series has to be covariance stationary (constant mean and covariance). If a model's residuals are not autocorrelated, then the model is well specified (covariance stationary). However, the random walk model's error term is uncorrelated, yet the random walk is NOT covariance stationary. This seems contradictory to me, and the textbook does not explain it clearly.

Does anyone have any ideas how this works?

1 Answer


There is a little bit of confusion here.

The AR(1) is weakly stationary by definition:

$$x_t=c+\phi_1 x_{t-1}+\varepsilon_t$$ $$\varepsilon_t\sim\mathcal{N}(0,\sigma^2)$$

with $|\phi_1| \le 1$.

Under these hypotheses you can prove that the mean, variance, and autocovariance do not depend on time; hence it is a weakly stationary process.
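For reference, with the strict inequality $|\phi_1| < 1$, solving the recursion gives the time-invariant moments:

$$\mathbb{E}[x_t]=\frac{c}{1-\phi_1},\qquad \operatorname{Var}(x_t)=\frac{\sigma^2}{1-\phi_1^2},\qquad \operatorname{Cov}(x_t,x_{t-k})=\phi_1^{k}\,\frac{\sigma^2}{1-\phi_1^2}$$

None of these depends on $t$, which is exactly weak stationarity. Note that the variance formula blows up at $\phi_1=1$: that is the random walk case.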

So that was the theoretical part. Now comes model estimation. If you fit an AR(p) to your data and that is the true data-generating process, you should find statistical evidence that the residuals are not autocorrelated (as well as rejecting the hypothesis of a unit root). If that's the case, then you can use that model for prediction.
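As a minimal sketch of that estimation step (numpy only, with arbitrary parameter values; a real analysis would use proper Ljung-Box and unit-root tests, e.g. from statsmodels): simulate an AR(1), fit it by OLS, and check that the residual autocorrelation is near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1): x_t = c + phi * x_{t-1} + eps_t, with |phi| < 1
c, phi, sigma, n = 0.5, 0.7, 1.0, 5000
x = np.empty(n)
x[0] = c / (1 - phi)  # start at the unconditional mean
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + rng.normal(0, sigma)

# Estimate c and phi by OLS of x_t on (1, x_{t-1})
X = np.column_stack([np.ones(n - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
c_hat, phi_hat = beta

# If the model is well specified, the residuals should be
# (approximately) uncorrelated at every lag; check lag 1
resid = x[1:] - X @ beta
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]

print(f"phi_hat = {phi_hat:.3f}, lag-1 residual autocorrelation = {r1:.3f}")
```

The estimated $\hat\phi_1$ should land near the true 0.7, and the residual autocorrelation near 0, which is the "statistical evidence" mentioned above.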

In my opinion, the first sentence of your question refers to this aspect.

Stationarity DOES NOT imply absence of autocorrelation

In fact AR(1) is a stationary but autocorrelated process. In the theoretical model the errors are $i.i.d.$

The random walk process is not stationary despite having $i.i.d.$ errors.
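A quick simulation (an illustrative sketch with arbitrary parameter values) makes the contrast concrete: both processes below are driven by the same i.i.d. normal errors, but the cross-sectional variance of the random walk grows linearly in $t$, while the AR(1) variance settles at $\sigma^2/(1-\phi^2)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Many independent paths of a random walk vs a stationary AR(1),
# both driven by i.i.d. N(0, 1) errors and started at zero
n_paths, n_steps, phi = 20000, 200, 0.7
eps = rng.normal(size=(n_paths, n_steps))

rw = np.cumsum(eps, axis=1)  # random walk: x_t = x_{t-1} + eps_t
ar = np.zeros((n_paths, n_steps))
for t in range(1, n_steps):
    ar[:, t] = phi * ar[:, t - 1] + eps[:, t]

# Variance across paths at each time step
var_rw = rw.var(axis=0)  # grows roughly like t
var_ar = ar.var(axis=0)  # converges to 1 / (1 - phi**2)

print(f"random walk variance at t=10 / t=200: {var_rw[9]:.1f} / {var_rw[-1]:.1f}")
print(f"AR(1) variance at t=10 / t=200: {var_ar[9]:.2f} / {var_ar[-1]:.2f}")
```

The random walk's variance keeps growing with $t$ (so it cannot be covariance stationary), even though its errors are perfectly well behaved; the AR(1) variance flattens out, as the theory above predicts.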

see Autocorrelation vs Non-stationary

gioxc88
  • The variance $\sigma^{2}_{x_t}$ manifestly depends on time in the random walk of your first equation. The variance of the perturbation $\varepsilon_{t}$ does not vary with time. You might want to edit "mean, variance and autocovariance do not depend on time" to address this. – Alexis Mar 06 '18 at 02:59
  • I am sorry @Alexis but my first equation is that of an AR(1), with $ |\phi_1| \leq 1$, hence I think what I wrote is correct. Am I missing something? – gioxc88 Mar 06 '18 at 07:41
  • If $\phi=1$ and $c=0$, then $\sigma^{2}_{x_t}=t$, which is a function of $t$. – Alexis Mar 06 '18 at 22:53
  • Yes of course I know, but I specified it is AR(1) and AR(1) cannot have $\phi_1 = 1$ by definition otherwise it would be a random walk. – gioxc88 Mar 07 '18 at 14:06
  • You did specify $|\phi| \le 1$... the "or equal to" part of the symbol "less than or equal to" means that 1 is an acceptable value. But this is moot: $\sigma^{2}_{x_t}$ is simply a function of $t$ for values of $\phi\ne0$. The fact that $x_{t}$ inherits some (<—$\phi\ne0$) variance from the previous time period means variance compounds over time. This is easy to see in simulation, even with near-zero values of $\phi$ (I am happy to share simulation code with you if you like). – Alexis Mar 07 '18 at 16:44
  • It is this fact, that $\sigma^{2}_{x_t}$ is a function of $t$ while $\sigma^{2}_{\varepsilon}$ is not, that makes differencing so useful in permitting us to make inference about change in $x$. Of course, there's more nuance to it, since $|\phi|<1$ means that variance eventually stabilizes for some value of $t$... but, per my original comment, this is precisely what I was asking for: that you make your assertions more explicit. – Alexis Mar 07 '18 at 16:57
  • Sorry, I understand your point. – gioxc88 Mar 07 '18 at 18:28