I have estimated an ARCH(1) model using a skew-$t$ distribution. The results summary is:

[Summary of GARCH results]

I am wondering how to get an estimate for the autocovariance given the parameters. Assuming mean zero, I am getting:

\begin{aligned} X_{t+1} &= \omega + \alpha X_{t} + \epsilon_{t+1} \\ \text{Cov}(X_{t},X_{t+1}) &= \mathbb{E}[X_{t+1}X_{t}] - \mathbb{E}[X_{t+1}]\mathbb{E}[X_{t}] \\ &= \mathbb{E}[X_{t}(\omega + \alpha X_{t} + \epsilon_{t+1})] \\ &= \mathbb{E}[\alpha X_{t}^{2}] \end{aligned}

How do I proceed from here? Is this even correct?

deblue

1 Answer


Based on the output you have shared, your model appears to be an ARCH(1) with zero mean: \begin{aligned} x_t &= \mu_t + u_t, \\ \mu_t &= 0, \\ u_t &= \sigma_t \varepsilon_t, \\ \sigma_t^2 &= \omega + \alpha_1 u_{t-1}^2, \\ \varepsilon_t &\overset{i.i.d.}{\sim} D(0,1,\eta,\lambda), \end{aligned} where $D$ is the standardized skewed Student-$t$ distribution with zero mean and unit variance.

The autocovariance of $\{x_t\}$ must be zero at all lags other than zero, since \begin{aligned} \text{Cov}(x_t,x_{t-h}) &= \text{Cov}(u_t,u_{t-h}) \\ &= \mathbb{E}(u_t u_{t-h})-\mathbb{E}(u_t)\mathbb{E}(u_{t-h}) \\ &= \mathbb{E}(u_t u_{t-h})-0\cdot 0 \\ &= \mathbb{E}(\sigma_t\varepsilon_t \sigma_{t-h}\varepsilon_{t-h}) \\ &\stackrel{*}{=} \mathbb{E}(\mathbb{E}(\sigma_t\varepsilon_t \sigma_{t-h}\varepsilon_{t-h}\mid I_{t-1})) \\ &\stackrel{**}{=} \mathbb{E}(\sigma_t\sigma_{t-h}\varepsilon_{t-h}\mathbb{E}(\varepsilon_t \mid I_{t-1})) \\ &= \mathbb{E}(\sigma_t\sigma_{t-h}\varepsilon_{t-h}\cdot 0) \\ &= 0. \end{aligned}

*By the law of iterated expectations.
**Note that $\sigma_t$ is known as of time $t-1$, so we are able to move it outside of $\mathbb{E}(\cdot \mid I_{t-1})$.
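The derivation above is easy to check by simulation. Below is a minimal sketch, with standard normal innovations standing in for the skew-$t$ and illustrative parameter values (`omega`, `alpha1`) rather than the fitted ones: the sample autocovariance of $x_t$ at lag 1 is near zero, while that of $x_t^2$ is clearly positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ARCH(1) parameters (not the fitted values from the output).
omega, alpha1 = 0.1, 0.5
n = 200_000

# Simulate x_t = sigma_t * eps_t with sigma_t^2 = omega + alpha1 * x_{t-1}^2.
# Standard normal innovations stand in for the skew-t here.
x = np.zeros(n)
x[0] = np.sqrt(omega / (1 - alpha1)) * rng.standard_normal()
for t in range(1, n):
    sigma2 = omega + alpha1 * x[t - 1] ** 2
    x[t] = np.sqrt(sigma2) * rng.standard_normal()

def autocov(z, h):
    """Sample autocovariance of z at lag h >= 1."""
    z = z - z.mean()
    return np.mean(z[h:] * z[:-h])

print(autocov(x, 1))      # near zero: the level series is uncorrelated
print(autocov(x**2, 1))   # clearly positive: the squares are not
```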

There may be a slight problem with $\hat\alpha_1=1.000$, though: it implies that the conditional variance is an integrated process. However, I guess the non-rounded value is just below $1.000$, perhaps because the software constrains the parameter space to rule out integrated conditional variances.
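For context, the unconditional variance of an ARCH(1) process is $\omega/(1-\alpha_1)$, which is finite only for $\alpha_1 < 1$. A small sketch (the helper name is my own, purely illustrative) shows the divergence as $\alpha_1 \to 1$:

```python
def arch1_unconditional_variance(omega, alpha1):
    """Unconditional variance omega / (1 - alpha1); finite only if alpha1 < 1."""
    if alpha1 >= 1:
        raise ValueError("alpha1 >= 1: integrated/explosive conditional "
                         "variance, no finite unconditional variance")
    return omega / (1 - alpha1)

# The unconditional variance blows up as alpha1 approaches 1.
for a in (0.5, 0.9, 0.99, 0.999):
    print(a, arch1_unconditional_variance(0.1, a))
```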

Richard Hardy
  • Can you please expand on the step with the double expectations? Where does that come from? Also, where does the autocorrelation come from in this model then? Is it from the volatility? – deblue Dec 07 '22 at 09:52
  • @deblue, regarding double expectations, this is known as the law of iterated expectations. It is a mathematical trick that can be quite useful in situations like this. Regarding autocorrelation in (G)ARCH processes with a constant mean, the squared residuals $\{u_t^2\}$ have nonzero autocovariance for some lags $h>0$. – Richard Hardy Dec 07 '22 at 09:58
  • Can you please confirm my reasoning? Assuming mean zero, we are interested in $E[x_{t}^{2}]$, and therefore in $x_{t}^{2}$. Since the model is $x_{t} = \mu + u_{t}$ with $\mu = 0$, we have $x_{t}^{2} = u_{t}^{2}$. I am struggling to understand how the squared residuals affect $x_{t}$ via $\sigma_{t}$ (my confusion arises because the model for $x$ involves the standard deviation, not the variance). – deblue Dec 07 '22 at 12:56
  • I think we are usually interested in $x_t$, not $x_t^2$. E.g. we are interested in log-returns on stock prices, not in squares of log-returns. We can then characterize distributional characteristics of $x_t$ such as (conditional) mean and (conditional) variance. Take a look at some threads on GARCH and ARMA vs. GARCH (e.g. this) to learn more. – Richard Hardy Dec 07 '22 at 13:00
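On the point raised in the comments about where the autocorrelation lives: an ARCH(1) implies an AR(1) representation for the squares, $u_t^2 = \omega + \alpha_1 u_{t-1}^2 + v_t$ with $v_t = u_t^2 - \sigma_t^2$ a martingale difference. A simulation sketch (normal innovations, illustrative parameters of my own choosing) recovers $\alpha_1$ by regressing $x_t^2$ on $x_{t-1}^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
omega, alpha1 = 0.1, 0.3   # illustrative values; alpha1 kept small so the
n = 100_000                # moments the OLS slope relies on are finite

x = np.zeros(n)
for t in range(1, n):
    x[t] = np.sqrt(omega + alpha1 * x[t - 1] ** 2) * rng.standard_normal()

# ARCH(1) in the levels is AR(1) in the squares:
#   x_t^2 = omega + alpha1 * x_{t-1}^2 + v_t,  v_t a martingale difference,
# so the OLS slope of x_t^2 on x_{t-1}^2 estimates alpha1.
y, z = x[1:] ** 2, x[:-1] ** 2
slope = np.cov(y, z)[0, 1] / np.var(z, ddof=1)
print(slope)  # should be close to alpha1
```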