
I am working on a research topic where I need to add together two AR processes and I was wondering if the distribution of these processes is of a recognizable form/structure. More formally, if $x_t$ is a AR(p) process with characteristic polynomial $\Phi_x(u)$ and $y_t$ is a AR(q) process with characteristic polynomial $\Phi_y(u)$, then what is the structure of $z_t=x_t+y_t$?

3 Answers


This was studied by Granger and Morris (1976) who showed that

AR($p$) + AR($q$) = ARMA($p+q,\max(p,q)$).
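A quick way to see this result empirically (my sketch, not from the paper): simulate two AR(1) processes with arbitrarily chosen coefficients `ax` and `ay`, sum them, and apply the product polynomial $(1-a_xB)(1-a_yB)$. The residual should behave like an MA(1): nonzero sample autocorrelation at lag 1, roughly zero beyond lag $\max(p,q)=1$. All coefficient values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
ax, ay = 0.5, -0.3  # arbitrary AR(1) coefficients, assumed for illustration

ex = rng.normal(size=n)
ey = rng.normal(size=n)

# simulate x_t = ax*x_{t-1} + ex_t and y_t = ay*y_{t-1} + ey_t
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = ax * x[t - 1] + ex[t]
    y[t] = ay * y[t - 1] + ey[t]

z = (x + y)[1000:]  # drop burn-in

# apply the product polynomial (1 - ax*B)(1 - ay*B) to z
w = z[2:] - (ax + ay) * z[1:-1] + ax * ay * z[:-2]

def acf(u, k):
    """Sample autocorrelation of u at lag k."""
    u = u - u.mean()
    return np.dot(u[:-k], u[k:]) / np.dot(u, u)

print(acf(w, 1))             # nonzero: the MA(1) part
print(acf(w, 2), acf(w, 3))  # close to zero beyond lag max(p,q) = 1
```

With unit-variance innovations, the filtered series is $w_t = \epsilon_t - a_y\epsilon_{t-1} + \eta_t - a_x\eta_{t-1}$, so its theoretical lag-1 autocorrelation is $-(a_x+a_y)/(2+a_x^2+a_y^2)$, and all higher lags vanish.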

Rob Hyndman
  • I am reading through the paper but am stuck; do you think you can elaborate on these calculations? –  Oct 15 '13 at 21:57
  • Or rather, is it obvious why this result holds? –  Oct 15 '13 at 22:12
  • Does this only hold for Gaussian noise? As far as I can tell, the innovations in the combined process when, say, $p=q=1$, are given by $e_z(t) = e_x(t) + e_y(t)$ and $e_z(t-1) = -\psi_{y,1} e_x(t-1) - \psi_{x,1}e_y(t-1)$ and I don't think we can write the second term as $\theta_{z,1} e_z(t-1)$ unless the innovations are Gaussian. – Confounded Jan 30 '20 at 17:15
  • Actually, I don't think it is possible even for a Gaussian distribution to express the innovations as an $MA(1)$ process. We can probably create the same covariance structure, but it doesn't look to me like we can simply multiply the $t-1$ innovation by a scalar to get the $MA(1)$ component. – Confounded Sep 08 '20 at 09:37

Rob Hyndman's answer isn't technically correct: Granger and Morris (1976) didn't show exactly that. In fact, for an $AR(p)$ process $X_t$ and an $AR(q)$ process $Y_t$,

\begin{align} \phi_X(B)X_t &= \epsilon_t\\ \phi_Y(B)Y_t &= \eta_t \end{align}

where $B$ is the backshift operator, we have

\begin{align} Z_t &= X_t + Y_t\\ &= \phi_X^{-1}(B)\epsilon_t + \phi_Y^{-1}(B)\eta_t\\ \phi_X(B)\phi_Y(B)Z_t &= \phi_Y(B)\epsilon_t + \phi_X(B)\eta_t \end{align}

The left-hand side polynomial is of order $p+q$ and the right-hand side has autocovariance zero at lags above $\max(p,q)$, so in general $Z_t \sim ARMA(p+q,\max(p,q))$. However, as Granger and Morris (1976) point out, this is not necessarily the case. Strictly speaking, we have

\begin{align} AR(p)+AR(q) = ARMA(x,y),\qquad x\leq p+q,\ y\leq\max(p,q) \end{align}

For example (p. 249), in the case of repeated AR polynomial roots,

\begin{align} (1-\alpha_1B)X_t &= \epsilon_t, &&\text{i.e., }X_t\sim AR(1)\\ (1-\alpha_1B)(1-\alpha_2B)Y_t &= \eta_t, &&\text{i.e., }Y_t\sim AR(2) \end{align}

we have

\begin{align} Z_t &= X_t + Y_t\\ (1-\alpha_1B)(1-\alpha_2B)Z_t &= (1-\alpha_2B)\epsilon_t + \eta_t \end{align}

Then $Z_t\sim ARMA(2,1)$, i.e., $x<p+q$ and $y<\max(p,q)$.
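A numerical sketch of this repeated-root case (coefficient values are my assumptions): simulate $X_t\sim AR(1)$ with root $\alpha_1$ and $Y_t\sim AR(2)$ with polynomial $(1-\alpha_1B)(1-\alpha_2B)$, and check that applying only the order-2 polynomial to $Z_t$ already leaves an MA(1)-type residual, confirming $Z_t\sim ARMA(2,1)$ rather than $ARMA(3,2)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a1, a2 = 0.6, 0.4  # assumed values for the shared and extra roots

eps = rng.normal(size=n)
eta = rng.normal(size=n)

x = np.zeros(n)
y = np.zeros(n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + eps[t]  # AR(1): (1 - a1 B) x = eps
    # AR(2) with polynomial (1 - a1 B)(1 - a2 B):
    y[t] = (a1 + a2) * y[t - 1] - a1 * a2 * y[t - 2] + eta[t]

z = (x + y)[1000:]  # drop burn-in

# the order-2 polynomial (1 - a1 B)(1 - a2 B) already reduces z to an MA(1):
# w_t = eps_t - a2*eps_{t-1} + eta_t
w = z[2:] - (a1 + a2) * z[1:-1] + a1 * a2 * z[:-2]

def acf(u, k):
    """Sample autocorrelation of u at lag k."""
    u = u - u.mean()
    return np.dot(u[:-k], u[k:]) / np.dot(u, u)

print(acf(w, 1), acf(w, 2))  # lag 1 nonzero, lag 2 about zero
```

An order-2, not order-3, AR polynomial suffices on the left, and the right-hand side cuts off after lag 1, which is exactly the $ARMA(2,1)$ claim.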

Likewise (p. 249), on the MA side, for

\begin{align} (1-\alpha B)X_t &= \epsilon_t, &&\text{i.e., }X_t\sim AR(1)\\ (1+\alpha B)Y_t &= \eta_t, &&\text{i.e., }Y_t\sim AR(1) \end{align}

we have

\begin{align} Z_t &= X_t + Y_t\\ (1-\alpha B)(1+\alpha B)Z_t &= (1+\alpha B)\epsilon_t + (1-\alpha B)\eta_t \end{align}

Denote the right-hand side

\begin{align} \zeta_t = \epsilon_t + \alpha \epsilon_{t-1} + \eta_t -\alpha \eta_{t-1} \end{align}

If $\text{var}(\epsilon)=\text{var}(\eta)$, we have

\begin{align} E[\zeta_t \zeta_{t-k}]=0,\qquad \forall k>0 \end{align}

Then $Z_t \sim ARMA(2,0)$, i.e., $y<\max(p,q)$.
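The cancellation in this second example is easy to check directly (the value of $\alpha$ is my assumption): with equal innovation variances, the lag-1 autocovariance of $\zeta_t$ is $\alpha\,\text{var}(\epsilon) - \alpha\,\text{var}(\eta) = 0$, so $\zeta_t$ is white noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
a = 0.5  # assumed common coefficient magnitude

eps = rng.normal(size=n)  # equal variances, as the argument requires
eta = rng.normal(size=n)

# zeta_t = eps_t + a*eps_{t-1} + eta_t - a*eta_{t-1}
zeta = eps[1:] + a * eps[:-1] + eta[1:] - a * eta[:-1]

# lag-1 autocovariance: a*var(eps) - a*var(eta) = 0 when the variances match
lag1 = np.mean(zeta[1:] * zeta[:-1])
print(lag1)  # about zero, so the right-hand side is white noise
```

The variance of $\zeta_t$ is $2(1+\alpha^2)$, but all autocovariances at positive lags vanish, which is why $Z_t$ collapses to $ARMA(2,0)$.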

However, as Granger and Morris (1976, p. 250) highlight, '[i]t would be highly coincidental if the "true" series [i.e., $X_t$] and observational error series [i.e., $Y_t$] obeyed models having common roots, apart possibly from the root unity, or that the parameters of these models should be such that the cancelling out of terms produces a value of $y$ less than the maximum possible.'


As far as I can tell, the assertion that the sum of two $AR$ processes is an $ARMA$ process is based on assuming that the sum belongs to the $ARMA$ family and then matching the auto-covariance structure.

Given two $AR(1)$ processes

$$(1 - a_xL)x(t) = e_x(t)$$ $$(1 - a_yL)y(t) = e_y(t)$$

their sum (assuming the AR polynomials are invertible, i.e., $|a_x| < 1$ and $|a_y| < 1$) is $$ z(t) = x(t) + y(t) = \frac{e_x(t)}{1 - a_xL} + \frac{e_y(t)}{1 - a_yL}$$

Multiplying by the $AR(1)$ polynomials, we get $$ (1 - a_xL)(1 - a_yL)z(t) = (1 - a_yL)e_x(t) + (1 - a_xL)e_y(t)$$ which is $$ (1 - (a_x + a_y)L + a_xa_yL^2)z(t) = e_x(t) + e_y(t) - \left(a_y e_x(t-1) + a_x e_y(t-1)\right)$$ So the left-hand side is an $AR(2)$ polynomial applied to $z(t)$, but I don't see how we can express the right-hand side as an $MA(1)$ process, because if we define

$$ e_z(t) = e_x(t) + e_y(t)$$

then there is no way we can express the $t-1$ term on the right-hand side as a scalar multiple of $e_z(t-1)$, i.e.

$$a_y e_x(t-1) + a_x e_y(t-1) \neq be_z(t-1) = b\left(e_x(t-1) + e_y(t-1)\right)$$

for a scalar value $b$ (unless, of course, $a_x = a_y$).

We can construct an $ARMA(2,1)$ process that has the same auto-covariance signature as the sum of the two $AR(1)$ processes, but it won't be identical to that sum realization by realization.
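The covariance-matching step mentioned above can be made concrete (coefficient values are hypothetical, innovations taken to have unit variance): the right-hand side $\zeta_t$ has autocovariances $\gamma_0 = 2 + a_x^2 + a_y^2$ and $\gamma_1 = -(a_x + a_y)$, zero beyond lag 1, and we can solve $(1+\theta^2)\sigma^2 = \gamma_0$, $\theta\sigma^2 = \gamma_1$ for the invertible MA(1) that reproduces them.

```python
import numpy as np

ax, ay = 0.5, -0.3  # hypothetical AR(1) coefficients, unit-variance innovations

# autocovariances of zeta_t = e_x(t) + e_y(t) - (ay*e_x(t-1) + ax*e_y(t-1))
g0 = 2 + ax**2 + ay**2  # lag 0
g1 = -(ax + ay)         # lag 1 (zero at all higher lags)

# solve (1 + theta^2)*s2 = g0 and theta*s2 = g1,
# taking the root with |theta| < 1 (the invertible MA(1))
theta = (g0 - np.sqrt(g0**2 - 4 * g1**2)) / (2 * g1)
s2 = g1 / theta

print(theta, s2)  # the matching MA(1) coefficient and innovation variance
```

So an $MA(1)$ with coefficient $\theta$ and innovation variance $\sigma^2$ matches every autocovariance of $\zeta_t$ exactly, which is the weaker, distributional (second-order) sense in which the sum "is" an $ARMA(2,1)$; the innovations themselves are not a scalar multiple of $e_x + e_y$.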

Confounded
  • Nicely shown. The auto-covariances of an ARIMA series do uniquely identify the process up to a scale factor (there's a proof of this somewhere), so yes, you can't obtain the same exact process as the sum of two AR(1)'s, but you can in terms of identification of the process. I think that's what they mean by two processes being "equal". – mlofton May 10 '21 at 14:32
  • This is not correct. Loc. cit. shows, in fact as one of the first steps in the proof, that the sum of two MA processes is again an MA process. Your guess that the errors should be the sum of the two individual errors is just not correct. – user2520938 Apr 11 '23 at 18:56
  • @user2520938 Hi. Not sure that I fully understood your comment, but the discussion here is about the sum of ARMA processes, not MA. – Confounded Apr 17 '23 at 08:14
  • @Confounded You're writing that it is not clear how right hand side of one of your equations is again a MA process. I'm saying that it is in fact an MA process. – user2520938 Apr 17 '23 at 14:10
  • @user2520938 Can you then please show how it can be expressed in the form $z_t + a_1 z_{t-1}$, which is what I would call an MA(1)? Thank you. – Confounded Apr 17 '23 at 23:56
  • @Confounded I suggest you read the article mentioned above, where on one of the first pages they show that this is in fact an MA. The proof is not constructive though. – user2520938 Apr 18 '23 at 06:30