
Hi Quantitative Finance Stack Exchange,

I'm looking for an opinion on a simple question. Suppose I use a GARCH(1,1) model to make a volatility forecast.

At time $t$, I have realized volatility $\sigma_t$ and forecasted volatility $\hat{\sigma}_t$. I understand that strategies commonly use $\hat{\sigma}_t$ for risk-management decisions, e.g. liquidate if $\hat{\sigma}_t > 10\text{bps}$.
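
For concreteness, here is roughly my setup, as a minimal sketch with rugarch (the `returns` series and the threshold are placeholders of mine):

```r
library(rugarch)

# minimal sketch: fit a GARCH(1,1) and take a one-step-ahead vol forecast;
# `returns` stands in for my return series
spec <- ugarchspec(
  variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
  mean.model     = list(armaOrder = c(0, 0), include.mean = FALSE)
)
fit <- ugarchfit(spec, data = returns)

fc        <- ugarchforecast(fit, n.ahead = 1)
sigma_hat <- sigma(fc)  # forecasted volatility, my hat-sigma_t

# the kind of risk rule I mean: liquidate above 10 bps
if (sigma_hat > 0.0010) message("liquidate")
```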

However, I would like a measure of when $\hat{\sigma}_t$ is significantly larger than $\sigma_t$. I tried an F-test on the ratio $\frac{\hat{\sigma}_t}{\sigma_t}$ (see the sketch below). I take the degrees of freedom of $\sigma_t$ to be the number of samples used to compute it. What are the degrees of freedom of $\hat{\sigma}_t$?

I'm using R's rugarch package.
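
For reference, the test I tried looks roughly like this (a sketch with my own placeholder names; the forecast's degrees of freedom are exactly what I don't know):

```r
# sketch of my attempt: sigma_t is realized vol built from n intraday
# returns, so I take its chi-squared degrees of freedom to be n
F_stat  <- (sigma_hat_t / sigma_t)^2  # variance ratio for the F-test
df_real <- n                          # df of realized vol: sample count
df_fore <- NA                         # df of the GARCH forecast -- my question
p_val   <- pf(F_stat, df1 = df_fore, df2 = df_real, lower.tail = FALSE)
```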

Sincerely Yours, Donny

  • I would propose something different, more in line with the current forecasting literature: you can compare your forecasted and realized volatility via different loss functions such as MSE, QLIKE, or MAE. Instead of simply concluding that one model produces lower losses than another, you can use statistical tests that evaluate the significance of the performance gain over the other models. This is called forecast comparison analysis, and I have given a thorough answer here which further includes links to research papers deriving the methods. – Pleb Jun 04 '22 at 20:21
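
A minimal sketch of the loss-based comparison the comment describes, assuming `r2` holds realized variances and `h1`, `h2` hold variance forecasts from two competing models (all names are illustrative); the Diebold-Mariano-style test below uses a Newey-West standard error on the loss differential:

```r
library(sandwich)  # Newey-West HAC standard errors
library(lmtest)    # coeftest

# one common QLIKE form: log(h) + r2 / h
qlike_loss <- function(h, r2) log(h) + r2 / h

# loss differential between the two forecasts; a negative mean favours model 1
d <- qlike_loss(h1, r2) - qlike_loss(h2, r2)

# Diebold-Mariano-style test: regress d on a constant with a HAC s.e.
dm_reg <- lm(d ~ 1)
coeftest(dm_reg, vcov. = NeweyWest(dm_reg, lag = 5))
```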
