
I am implementing a GARCH-DCC model in Python, for a number of assets n = 2. My implementation is the following:

import numpy as np
from numpy.linalg import inv
from typing import List, Optional

# (methods of my GARCH-DCC process class; imports shown for completeness)

def garch_dcc_specification(
    self,
    eps_last: Optional[np.ndarray],
    cond_var_last: Optional[np.ndarray],
    q_last_t: Optional[np.ndarray],
) -> SpecResult:
    if eps_last is None:
        eps_last = np.zeros(self.n)
    if q_last_t is None:
        q_last_t = np.zeros((self.n, self.n))

    epsilon_square_last = eps_last ** 2
    # First, evaluate the GARCH conditional variance
    # (garch_omega, garch_alpha and garch_beta are 1D arrays).
    cond_var_t = (self.garch_omega
                  + self.garch_alpha * epsilon_square_last
                  + self.garch_beta * (cond_var_last if cond_var_last is not None
                                       else np.zeros(self.n)))

    d_t = np.diag(np.sqrt(cond_var_t))

    # Standardized innovations of the previous step.
    if cond_var_last is not None:
        d_last_t = np.diag(np.sqrt(cond_var_last))
        v_last_t = inv(d_last_t).dot(eps_last)
    else:
        v_last_t = np.zeros(self.n)

    # DCC specification for the conditional correlation.
    # Note: since v_last_t is a 1D array, v_last_t.dot(v_last_t) would be a
    # scalar, so the outer product is computed explicitly with np.outer.
    q_t = (self.dcc_r * (1 - self.dcc_alpha - self.dcc_beta)
           + self.dcc_alpha * np.outer(v_last_t, v_last_t)
           + self.dcc_beta * q_last_t)

    # Standardize q_t to get a proper correlation matrix.
    q_diag_sqrt = np.sqrt(np.diag(q_t))
    r_t = q_t / np.outer(q_diag_sqrt, q_diag_sqrt)

    # Transform into a variance-covariance matrix by incorporating
    # the conditional variances.
    h_t = d_t.dot(r_t).dot(d_t)
    return GarchDccParams.SpecResult(cond_var=cond_var_t, q=q_t, h=h_t)

def generate_innovations(self, length: int) -> np.ndarray:
    innovations = np.zeros((length + 1, self.n))
    spec_res: List[GarchDccParams.SpecResult] = []
    for t in range(0, length + 1):
        spec_res.append(self.garch_dcc_specification(
            eps_last=innovations[t - 1] if t != 0 else None,
            cond_var_last=spec_res[t - 1].cond_var if t != 0 else None,
            q_last_t=spec_res[t - 1].q if t != 0 else None,
        ))
        innovations[t] = np.random.multivariate_normal(np.zeros(self.n), spec_res[t].h)
    return innovations  # the signature promises an array, so return the simulated path

To check my implementation, I verify that the empirical Pearson correlation coefficient of the output of generate_innovations() (computed with np.corrcoef) equals the correlation coefficient of the input self.dcc_r matrix, which, if I understand correctly, should be the unconditional correlation of the overall generated innovations.

When running with constant variance in the GARCH (garch_alpha and garch_beta = 0), I get an empirical Pearson correlation coefficient equal to the one I set in the input self.dcc_r. However, when the conditional variance is moving (garch_alpha and garch_beta > 0), I don't get the same coefficient: the empirical correlation is always lower than the one expected from dcc_r. For example, when running with 10000 points and an input dcc_r correlation coefficient of 0.9, I get an empirical unconditional correlation of around 0.75 in my generated innovations.
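For reference, here is a minimal self-contained sketch of this check (it re-implements the recursion above directly for the CCC case instead of using my class; the GARCH parameter values are arbitrary choices for illustration):

import numpy as np

rng = np.random.default_rng(0)
n, length = 2, 10_000
garch_omega = np.array([0.10, 0.10])  # illustrative values, not from the question
garch_alpha = np.array([0.25, 0.25])
garch_beta = np.array([0.65, 0.65])
dcc_r = np.array([[1.0, 0.9], [0.9, 1.0]])

eps = np.zeros((length, n))
cond_var = garch_omega / (1 - garch_alpha - garch_beta)  # start at the unconditional variance
for t in range(length):
    if t > 0:
        cond_var = garch_omega + garch_alpha * eps[t - 1] ** 2 + garch_beta * cond_var
    d_t = np.diag(np.sqrt(cond_var))
    h_t = d_t @ dcc_r @ d_t  # conditional covariance matrix
    eps[t] = rng.multivariate_normal(np.zeros(n), h_t)

print(np.corrcoef(eps.T)[0, 1])  # comes out noticeably below 0.9

With garch_alpha = garch_beta = 0 the printed value matches 0.9 up to sampling error; with the values above it comes out lower.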

PS: To simplify, I set dcc_alpha and dcc_beta to 0, so only the dcc_r matrix is taken into account (we then have a GARCH-CCC model instead of a GARCH-DCC: the conditional correlation is constant over time). The "problem" (if it is one) still occurs, again only when the GARCH alpha/beta are > 0.

Is this normal?

  • What exactly are you calculating the correlation between (the one that is not what you think it should be)? – Richard Hardy Mar 27 '23 at 16:35
  • I'm computing the empirical unconditional correlation of the generated innovations (using np.corrcoef I get the correlation matrix, and I take the value that is not on the diagonal; there is only one such value because the asset number = 2, I will update my question). I'm expecting this correlation coefficient to be equal to the one I gave as input in the CCC matrix. When GARCH alpha, beta > 0 (variance changing over time), these two numbers are not equal, and I don't understand why (I don't know if it is normal or if it is the sign of an error in my implementation) – Jerem Lachkar Mar 27 '23 at 19:01
  • What are the generated innovations? Are these obtained from a fitted model? If so, let me call them residuals rather than innovations. (Innovations are the theoretical ones or the ones used for simulating a DCC process.) Are they the standardized residuals (for which your doubts would be justified) or raw residuals (that display GARCH patterns)? The raw residuals might not have quite the same correlation as the standardized residuals, because the assumption about correlation is made about the standardized innovations of the process. – Richard Hardy Mar 27 '23 at 19:46
  • Actually, the code I provided is just used to generate GARCH-DCC random innovations, given some GARCH-DCC parameters (to be used to generate a random realization of a more general model, for example an ECM with GARCH-DCC innovations). The more general model calls conditional_variance_process.generate_innovations() (conditional_variance_process can be an instance of GarchDcc or any other conditional variance process, but it will always have a generate_innovations function). – Jerem Lachkar Mar 27 '23 at 20:17
  • So, without considering the model behind it that will use it to generate the random realization (the VECM, for example), the issue is that the generated innovations do not have the same unconditional correlation as the CCC matrix provided as input, and this happens only when the input GARCH params are > 0. – Jerem Lachkar Mar 27 '23 at 20:18
  • OK, so you start from multivariate i.i.d. standardized innovations with a certain unconditional correlation matrix. When you change their variances over time according to GARCH, the correlations get distorted. This is to be expected, though only to a small extent. Is the distortion you are observing large? – Richard Hardy Mar 28 '23 at 07:55
  • Yes, this is it. More precisely, I get the generated innovations from the GARCH-DCC specification from Engle 2002, iteratively (at each iteration, I first forecast the variance using a regular GARCH, then forecast the correlation using the DCC spec., and finally obtain the var-cov matrix from the two). As you say, when the cond. variance changes over time (i.e. when the GARCH alpha/beta of the GARCH-DCC are > 0), the empirical uncond. correlation of the overall generated innovations is not equal to the one I provided in DCC-R as input. – Jerem Lachkar Mar 28 '23 at 08:12
  • I'm confused, because from all I have read about GARCH-DCC, the R matrix should precisely define the unconditional correlation of the residuals, and it should be an average of the cond. correlation over time .. – Jerem Lachkar Mar 28 '23 at 08:13
  • But in my case, for example, I set the input dcc_r = [[1, 0.9], [0.9, 1]], so I expect a correlation of 0.9. I see that the cond. correlation moves around 0.9, up and down, when I plot it, as expected in a GARCH-DCC. But when I compute the uncond. correlation from the generated innovations to check it, it's not the same as the average cond. correlation. – Jerem Lachkar Mar 28 '23 at 08:16
  • Precisely, the difference is not very huge. When my cond. corr is 0.9, I get an empirical uncond. corr of 0.75. When the cond. corr is 0.98, I get an uncond. corr of 0.9. The unconditional is always lower than the conditional, but I can't explain why – Jerem Lachkar Mar 28 '23 at 08:17
  • Take the original paper https://archive.nyu.edu/bitstream/2451/26482/2/02-38.pdf, page 8: they say that "A simple estimate of R is the unconditional correlation matrix of the standardized residuals". Would this estimator then be biased and inconsistent (since it does not converge to the true value as the sample size increases)? – Jerem Lachkar Mar 28 '23 at 09:55
  • Note the word standardized in standardized residuals. Meanwhile, you seem to be worried about the correlation estimate from raw/unstandardized residuals not being equal to that of the standardized ones. – Richard Hardy Mar 28 '23 at 13:48
  • Very good point, it works after standardizing the residuals. Thank you for pointing this out :) – Jerem Lachkar Mar 28 '23 at 14:43
  • If that solves your problem (does it?), should I write it up as a brief answer so that we can close the thread as answered? – Richard Hardy Mar 28 '23 at 15:18
  • Looks like it does, though there is still a tiny difference: a correlation of 0.95 in R gives an empirical correlation of the standardized residuals of 0.945 with 100k points. I would have expected 100k points to bring more precision than a 0.005 difference. But much better than without standardising, thank you very much! You can post an answer, I'll upvote it – Jerem Lachkar Mar 28 '23 at 15:39

1 Answer


Note the word standardized in standardized residuals. Meanwhile, you seem to be worried about the correlation estimate from raw/unstandardized residuals not being equal to the theoretical correlation of the standardized ones.

Suppose we have standardized innovations $(z_{1,t},z_{2,t})^\top$ that have a certain unconditional correlation $\rho=\text{Corr}(z_{1,t},z_{2,t})$. When multiplied by the time-varying standard deviations, they become raw innovations $(\varepsilon_{1,t},\varepsilon_{2,t})^\top=(\sigma_{1,t}z_{1,t},\sigma_{2,t}z_{2,t})^\top$. The unconditional correlation between them, $\xi=\text{Corr}(\varepsilon_{1,t},\varepsilon_{2,t})$, need not be equal to $\rho$. While $\text{Corr}(aX,bY)=\text{Corr}(X,Y)$ for constants $(a,b)^\top$, this does not apply for random variables $(U,V)^\top$: $\text{Corr}(UX,VY)\not\equiv \text{Corr}(X,Y)$. Indeed, since $\sigma_{i,t}$ depends only on past information while $z_t$ is independent of the past, $\text{E}(\varepsilon_{1,t}\varepsilon_{2,t})=\rho\,\text{E}(\sigma_{1,t}\sigma_{2,t})$ and $\text{Var}(\varepsilon_{i,t})=\text{E}(\sigma_{i,t}^2)$, so $\xi=\rho\,\text{E}(\sigma_{1,t}\sigma_{2,t})/\sqrt{\text{E}(\sigma_{1,t}^2)\,\text{E}(\sigma_{2,t}^2)}$; by the Cauchy–Schwarz inequality, $|\xi|\leq|\rho|$, which is why the estimate from the raw innovations is systematically below the value implied by dcc_r.
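Here is a small simulation sketch of this point (the GARCH parameter values below are arbitrary choices for illustration, not taken from the question): the raw innovations $\varepsilon_t=\sigma_t z_t$ show a lower unconditional correlation than $\rho$, while the standardized series $\varepsilon_t/\sigma_t$ recovers it.

import numpy as np

rng = np.random.default_rng(1)
T, rho = 100_000, 0.9
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=T)

omega, alpha, beta = 0.10, 0.25, 0.65  # illustrative GARCH(1,1) parameters
sigma2 = np.empty((T, 2))
sigma2[0] = omega / (1 - alpha - beta)  # start at the unconditional variance
eps = np.empty((T, 2))
eps[0] = np.sqrt(sigma2[0]) * z[0]
for t in range(1, T):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * z[t]

print(np.corrcoef(eps.T)[0, 1])                      # raw innovations: below rho
print(np.corrcoef((eps / np.sqrt(sigma2)).T)[0, 1])  # standardized: close to rho

The second printed value is exactly the sample correlation of the $z_t$ themselves, which is why standardizing by the conditional volatilities recovers $\rho$ up to sampling error.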

The remaining problem is why you still get a noticeable discrepancy between the expected and estimated unconditional correlation, $(\rho,\tilde\rho)^\top=(0.950,0.945)^\top$ with a huge sample of 100k points.

– Richard Hardy