
I have the following assignment to solve but I'm not sure if I solved it correctly.

Questions

Let the stochastic process $(Y_t)_t$ be defined by $Y_t = \mu + Y_{t-1} + \varepsilon _t$ with $(\varepsilon _t)_t\sim \mathrm{WN}(0,1)$.

a) Calculate the expected value and the variance of $(\Delta Y_t)_t$.

b) Prove that $(\Delta Y_t)_t\sim \mathrm{MA}(1)$ and calculate the autocovariance function of $(\Delta Y_t)_t$.

My solutions

a) \begin{eqnarray} Y_{t+1} &=& \mu + Y_t + \varepsilon_{t+1}\\[1ex] \implies \Delta Y_t = Y_{t+1}-Y_t &=& \mu +\varepsilon_{t+1}\\[1ex] \mathrm E(\Delta Y_t ) &=& \mu + \mathrm E(\varepsilon_{t+1}) = \mu \hspace{6cm} \\[1ex] \mathrm {Var}(\Delta Y_t) &=& \mathrm{Var}(\varepsilon _{t+1}) = 1 \end{eqnarray}

b) \begin{eqnarray} (\Delta Y_t)_t &=& \mu + \varepsilon_{t+1} + 0\cdot \varepsilon_t \\[1ex] \implies \mathrm{ACV} &=& \mathrm{Cov}(\Delta Y_t, \Delta Y_{t-h})\\ &=& \mathrm{Cov}(\varepsilon_{t+1}, \varepsilon_{t-h+1}) = \left\{\begin{array}{ll} 1 & h=0 \\ 0 & \text{otherwise} \end{array}\right. \end{eqnarray}
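As a quick sanity check (not part of the assignment), here is a minimal simulation sketch; it assumes Gaussian white noise and an arbitrary drift $\mu = 0.5$. The sample mean and variance of the differences should come out close to $\mu$ and $1$, and the lag-1 sample autocovariance close to $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 0.5, 100_000                            # assumed drift and sample size (arbitrary)

eps = rng.standard_normal(n)                    # WN(0, 1), taken Gaussian here
Y = mu * np.arange(1, n + 1) + np.cumsum(eps)   # Y_t = mu + Y_{t-1} + eps_t with Y_0 = 0
dY = np.diff(Y)                                 # first differences

print("sample mean of dY:    ", dY.mean())      # should be close to mu = 0.5
print("sample variance of dY:", dY.var())       # should be close to 1
print("lag-1 autocovariance: ", np.cov(dY[:-1], dY[1:])[0, 1])  # should be close to 0
```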

What do you think?

ahorn
rdvl0
  • Please read our FAQ! "Do not merely post a scan or image of the whole question, nor of your attempted answer." – Giskard Jun 29 '18 at 08:29
  • I'm sorry! I tried to do it but it wouldn't show my math formulas correctly. – rdvl0 Jun 29 '18 at 08:41
  • Hi: your acf is incorrect. Write the stochastic process for $Y_{t-1}$ and then subtract it from the process for $Y_t$, and you will see that you end up with two epsilon terms, which makes the acf at lag one non-zero, which implies that the process is MA(1). – mark leeds Jun 29 '18 at 08:49
  • @rdvl0 try this for formatting your question https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference – EconJohn Jun 30 '18 at 00:05
  • @denesp learning TeX isn't simple. Seeing as I think this question fits the 'requirements' of showing effort, I think the best thing for us to do is to edit the TeX ourselves. https://stackoverflow.blog/2018/04/26/stack-overflow-isnt-very-welcoming-its-time-for-that-to-change/ – ahorn Jul 02 '18 at 13:26
  • @markleeds $Y_t - Y_{t-1} = \mu + \varepsilon _t$. I'm not getting how this is MA(1). – ahorn Jul 02 '18 at 14:29
  • @ahorn You are welcome to take part in the debate at the meta site and there you can try to change current consensus. – Giskard Jul 02 '18 at 15:11
  • @ahorn: forget my previous comment. Pages 19 and 29 of the link below provide the properties of the process. Based on that, it might be possible to show what you want to show. I'm gonna try it when I have more time. Right now, I don't see it. It's always possible that the thing you're being asked to show is not true. I hope the link here helps. http://www.personal.psu.edu/asb17/old/sta4853/files/sta4853-2.pdf – mark leeds Jul 03 '18 at 03:36
  • @ahorn: I only glanced but I'm pretty certain that page 5 of this shows it. http://www2.econ.iastate.edu/classes/econ672/falk/_notes/lecture_13_modeling_trends.pdf – mark leeds Jul 03 '18 at 03:44
  • @rdvl0 see mark leeds's comments. I'm not going to bother (to put it bluntly ;) ). – ahorn Jul 03 '18 at 06:11
  • @ahorn: I mistakenly thought that you had asked the question. my apologies. – mark leeds Jul 03 '18 at 12:31
  • It seems that you should be able to give the covariance as simply $0$ for $h \neq0$. Including the value for $h=0$ is kinda redundant, since of course it's $1$. But of course your instructor's preferences take precedence. – Acccumulation Apr 30 '19 at 22:14

1 Answer


Whoever formulated the exercise got it wrong, which is why I am posting a full answer to a homework question. This is a classic example of how manipulating recursive relations can lead to representations that appear different and seem to have different properties.

$$\Delta Y_t \equiv Y_{t}-Y_{t-1} = \mu + Y_{t-1} + \varepsilon_{t} - \mu - Y_{t-2} - \varepsilon_{t-1}$$

$$\implies \Delta Y_t = \Delta Y_{t-1} + \Delta \varepsilon_{t} \tag{1}$$

At the same time

$$Y_t = \mu + Y_{t-1} + \varepsilon _t \implies \Delta Y_t = \mu +\varepsilon _t \tag{2}$$

The right-hand sides of Eqs. $(1)$ and $(2)$ represent the same process.

In any case, neither of these is an $MA(1)$.
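(A quick numerical check, added here purely as an illustration and not part of the original argument: assuming Gaussian noise and an arbitrary $\mu = 0.5$, the recursion $(1)$ and the direct expression $(2)$ generate the same series once they are started from the same value.)

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n = 0.5, 10_000                             # assumed drift and sample size (arbitrary)

eps = rng.standard_normal(n)
Y = mu * np.arange(1, n + 1) + np.cumsum(eps)   # Y_t = mu + Y_{t-1} + eps_t, Y_0 = 0
dY = np.diff(Y)                                 # eq. (2): Delta Y_t = mu + eps_t

# eq. (1): Delta Y_t = Delta Y_{t-1} + (eps_t - eps_{t-1}), started from the same value
dY_rec = np.empty_like(dY)
dY_rec[0] = dY[0]
for t in range(1, len(dY)):
    dY_rec[t] = dY_rec[t - 1] + (eps[t + 1] - eps[t])

print("max |(1) - (2)|:", np.max(np.abs(dY_rec - dY)))   # zero up to floating-point error
```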

Moving forward, which one to choose?

A soft adoption of Occam's razor indicates that eq. $(2)$ is the simpler one. A bit more specifically, the manipulation resulting in $(1)$ has not "saved" us from anything: the autoregressive part of $(1)$ still contains a unit root (a coefficient of one on $\Delta Y_{t-1}$), so non-stationarity is not dealt with in any clearer way.

Both considerations therefore suggest adopting $(2)$, which says that $\{\Delta Y_t\}_t$ is the sum of a constant and a $WN$ process.
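A minimal sketch of this conclusion (again an added illustration, assuming Gaussian noise and $\mu = 0.5$): the sample autocovariances of $\Delta Y_t$ are negligible at all nonzero lags, exactly as a constant plus white noise would have it.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n = 0.5, 200_000                       # assumed drift and sample size (arbitrary)

dY = mu + rng.standard_normal(n)           # Delta Y_t = mu + eps_t, per eq. (2)

d = dY - dY.mean()
for h in range(4):
    acov = np.mean(d[h:] * d[:len(d) - h])   # sample autocovariance at lag h
    print(f"lag {h}: {acov: .4f}")
# roughly 1 at lag 0 and roughly 0 at lags 1, 2, 3
```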

Alecos Papadopoulos
  • @markleeds Pages 5-6 of the link you mention discuss a model with a deterministic trend. – Alecos Papadopoulos Jul 04 '18 at 18:49
  • @alex papadopoulos: Sorry, Alex. It's page 3. My take on it is that if you difference the expectations at time $(t+s)$ and time $(t+s-1)$, then the first difference of these expectations will be $e_{t+s} - e_{t+s-1}$, which is an MA(1) with $\theta = -1$. That's the only thing I can think of as far as why the question asks to show it. But I agree that it's pretty weird to look at it that way. – mark leeds Jul 04 '18 at 19:55
  • Alecos. I looked more carefully and it does make slightly more sense than I first thought because you don't need to difference the expectations. Basically, take the equation between "and" and "so that" on page 3. Then, lag it by one period and subtract it from itself. The result you obtain is $y_s - y_{s-1} = \epsilon_s - \epsilon_{s-1}$. Still it's a weird MA(1) because the two noise terms on the right hand side could be added and thought of as pure noise since the MA parameter is -1. Interesting anyway. – mark leeds Jul 05 '18 at 02:39
  • @markleeds Mark, I don't get your calculations. I find $$y_{t+s-1} = y_0+b(t+s-1) +\sum_{i=1}^{t}u_i +\sum_{i=t+1}^{t+s-1}u_i$$ and so $$y_{t+s} - y_{t+s-1} = b+u_{t+s}.$$ Once we have assumed a starting value, only the frontier terms are lagged. – Alecos Papadopoulos Jul 05 '18 at 03:01
  • You are absolutely correct. I made a mistake by doing it in my head. I learned from my mistake and did it on paper to make sure there were no dumb errors, and it turns out that the second difference is MA(1): $y_{t+s} - y_{t+s-1} = y_{t+s-1} - y_{t+s-2} + u_{t+s} - u_{t+s-1}$, so it can be viewed as an ARIMA(0,2,1). But that's not what the question asked, so you are absolutely correct that the question is wrong. Thanks for the fix. – mark leeds Jul 05 '18 at 05:38
  • Alecos: my apologies for the noise also. I'll be more careful in the future. It's pretty bad, though, when a question asks you to do something that's not possible! I forget the book, but I recently read an Amazon review where the author purposely leaves typos in the book (some technical book, either econometrics or DSP, I don't remember) to make sure that readers read carefully. That's definitely not a book that I want to purchase! All the best and thanks again. – mark leeds Jul 05 '18 at 05:48
  • @markleeds Mistakes made on purpose are a cardinal sin in educational material. Conflicting results obtained through purposeful mistakes are good tools for clarifying issues, though, as long as the writer asks the reader to find out what went wrong. See for example this post: https://economics.stackexchange.com/q/18811/61 – Alecos Papadopoulos Jul 05 '18 at 08:02
  • I agree, but the author of the book isn't even hinting at where the typos are. That's terrible. I'll check out the link. – mark leeds Jul 05 '18 at 10:22