2

\begin{aligned} Y_t &= a Y_{t-1} + e_t, \\ Z_t &= Y_t + H_t, \end{aligned}

where $H_t$ is independent of $Y_t$.

I'm trying to understand what ARMA model $Z_t$ corresponds to but I'm not really sure.

Can someone provide a quick explanation?

  • @RichardHardy $H_t$ is defined as being independent of $Y_t$ and drawn from a normal distribution. Can you explain why it is AR(1)? Is it simply because its only varying component is $Y_t$, which is an AR(1) process? – tryingtolearn May 10 '17 at 12:15
  • Consider adding the [tag:self-study] tag and reading its Wiki. Then show us what you have done already and where you got stuck. – Richard Hardy May 10 '17 at 13:19
  • If $H_t$ is independent of $e_t$ and has mean 0, then the process $Z_t$ is just an AR(1) process with a larger variance. But although you have said that $H_t$ is normally distributed, you have not mentioned independence and its relationship to $e_t$. – Michael R. Chernick May 10 '17 at 13:30
  • @MichaelChernick Hi Michael, it simply states that $H_t$ is a white noise process with variance $\sigma^2_h$ and specifies no relationship with the $e_t$. Could you clarify for me why it's an AR(1)? – tryingtolearn May 10 '17 at 13:36
  • If you add two independent normal random variables, each with mean 0, you get another normal random variable with mean 0 and variance equal to the sum of the variances. – Michael R. Chernick May 10 '17 at 13:41
  • So $Z_t = aY_{t-1} + b_t$ where $b_t$ is some variable that is a combination of $H_t$ and $e_t$ with zero mean and sum of their variances? – tryingtolearn May 10 '17 at 13:43

2 Answers

2

We can see that $Y_t$ is an AR(1) process with parameter $a$.

We can find the autocorrelation function of $Z_t$ by first calculating its autocovariance.

$$\text{cov}(Z_t, Z_{t-k}) = \text{cov}(Y_t + H_t, Y_{t-k} + H_{t-k})$$

Expanding by bilinearity gives

$$\text{cov}(Z_t, Z_{t-k}) = \text{cov}(Y_t, Y_{t-k}) + \text{cov}(Y_t, H_{t-k}) + \text{cov}(H_t, Y_{t-k}) + \text{cov}(H_t, H_{t-k})$$

Since $H_t$ is white noise independent of $Y$, all the cross terms vanish. Therefore, when $k = 0$ we get $$ \gamma_Z(0) = \sigma^2_Y + \sigma^2_H, $$ and when $k > 0$ we get $$\gamma_Z(k) = \gamma_Y(k).$$

From the autocovariance of an AR(1) model (since $Y_t$ is an AR(1)), $$ \gamma_Y(k) = a^k \, \gamma_Y(0), \qquad \text{where } \gamma_Y(0) = \sigma^2_Y.$$
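That recursion is a standard step, sketched here for completeness: multiply the AR(1) equation by $Y_{t-k}$ and take covariances, using that $e_t$ is uncorrelated with the past,

$$\gamma_Y(k) = \text{cov}(a Y_{t-1} + e_t, \, Y_{t-k}) = a \, \gamma_Y(k-1), \qquad k \ge 1,$$

and iterating gives $\gamma_Y(k) = a^k \gamma_Y(0)$.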

This gives an autocorrelation function of

$$\rho_Z(k) = \frac{a^k \, \sigma^2_Y}{\sigma^2_Y + \sigma^2_H}, \qquad k \ge 1.$$

This has the form $$\rho_Z(k) = A \, a^{k-1}, \qquad \text{with } A = \frac{a \, \sigma^2_Y}{\sigma^2_Y + \sigma^2_H},$$

which is the typical ACF pattern of ARMA$(1,1)$ models, and therefore implies that $Z_t$ follows an ARMA$(1,1)$ process.
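A quick simulation can check this ACF numerically. This is only a sketch: the parameter values $a = 0.8$, $\sigma_e = \sigma_h = 1$ are arbitrary choices for illustration, since the question does not specify them.

```python
import numpy as np

# Hypothetical parameter values chosen for illustration only.
a, sigma_e, sigma_h = 0.8, 1.0, 1.0
n = 100_000
rng = np.random.default_rng(0)

# Latent AR(1) state: Y_t = a * Y_{t-1} + e_t.
e = rng.normal(0.0, sigma_e, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = a * y[t - 1] + e[t]

# Observed series: the AR(1) plus independent white noise H_t.
z = y + rng.normal(0.0, sigma_h, n)

def acf(x, k):
    """Sample autocorrelation of x at lag k >= 1."""
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

# Theory: sigma_Y^2 = sigma_e^2 / (1 - a^2), and for k >= 1
# rho_Z(k) = a^k * sigma_Y^2 / (sigma_Y^2 + sigma_h^2).
var_y = sigma_e**2 / (1 - a**2)
for k in (1, 2, 3):
    theory = a**k * var_y / (var_y + sigma_h**2)
    print(f"lag {k}: sample {acf(z, k):.3f}, theory {theory:.3f}")
```

The ratio of successive sample autocorrelations should come out close to $a$, which is the geometric-decay-after-lag-1 signature of an ARMA$(1,1)$, rather than the pure $a^k$ decay from lag 0 of an AR(1).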

1

$Z_t$ is not described by an autoregressive model because

$$ Z_t = a Y_{t-1} + e_t + H_t $$

No lagged values of $Z_t$ appear on the right-hand side, so there is no auto-regression. The fact that the variable $Y_t$ appears lagged makes the model dynamic, but not autoregressive.

(I presume that $e_t$ is not assumed to be a function of lagged values of $Z$.)

  • Well, you can re-arrange it to $Z_t = aZ_{t-1} + H_t - a H_{t-1} + e_t$, so I wouldn't say it's "not autoregressive". It's really just an AR(1) process observed with a bit of iid noise on top of it. It's not quite an AR(1) because the impact of the innovations ($e_t$) "carry forward" but the observation errors ($H_t$) do not. – Chris Haug May 10 '17 at 15:17
  • @ChrisHaug Granted, such reformulations are usually possible with dynamic models, but I believe "losing sight" of the $Y$ variable is detrimental. – Alecos Papadopoulos May 10 '17 at 15:35
  • I don't disagree. In fact, I think the original formulation in the question is actually the clearest one because it maps directly to the state space framework: the state is AR(1), and the observation is that plus some iid observation noise. – Chris Haug May 10 '17 at 15:52
  • I solved it but I didn't get the same answer. I will update my post with the solution. – tryingtolearn May 11 '17 at 12:24
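To make the rearrangement mentioned in the comments explicit: substituting $Y_{t-1} = Z_{t-1} - H_{t-1}$ into $Z_t = a Y_{t-1} + e_t + H_t$ gives

$$Z_t = a Z_{t-1} + \underbrace{e_t + H_t - a H_{t-1}}_{u_t},$$

where the error $u_t$ is correlated with $u_{t-1}$ (through $H_{t-1}$) but with no longer lags, i.e. it has MA(1) structure. This matches the ARMA$(1,1)$ conclusion reached from the autocorrelation function in the first answer.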