In Patton (2011) the author shows that both the MSE and the QLIKE loss functions are robust for comparing rival volatility forecasting models: given a conditionally unbiased volatility proxy, they produce the same model ranking as the true (unobservable) volatility would.
In my current project I am comparing a family of GARCH/AGARCH models and while the MSE suggests that nothing outperforms a GARCH(1,1), the QLIKE statistic suggests that an APARCH(1,1) model performs significantly better.
Is this caused by the two loss functions penalising deviations differently? Specifically, which deviations does each loss function penalise most heavily, i.e. how do I interpret the disagreement?
I am hoping this is not down to some trivial coding error.
# MSE: mean squared error of the variance forecast against the RV proxy
MSE <- function(sigmafc, RV) {
  mean((sigmafc^2 - RV)^2)
}
# QLIKE: quasi-likelihood loss, averaged (like MSE above) so the two are
# on the same footing; averaging instead of summing does not affect the ranking
QLIKE <- function(sigmafc, RV) {
  varfc <- sigmafc^2
  mean(RV / varfc - log(RV / varfc) - 1)
}
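To see how the two losses treat deviations differently, here is a small self-contained check (my own illustration, not from Patton) using the per-period loss terms that the functions above average, written directly in terms of the variance forecast `h`, a true realized variance of 1, and equally sized over- and under-predictions:

```r
# Per-period loss terms (hypothetical helper names for illustration);
# h is the variance forecast, rv the realized-variance proxy
mse_loss   <- function(h, rv) (h - rv)^2               # per-period MSE term
qlike_loss <- function(h, rv) rv / h - log(rv / h) - 1 # per-period QLIKE term

mse_loss(0.5, 1)    # 0.25 -- MSE is symmetric in the variance error
mse_loss(1.5, 1)    # 0.25
qlike_loss(0.5, 1)  # ~0.307 -- QLIKE penalises under-prediction more heavily
qlike_loss(1.5, 1)  # ~0.072
```

If this asymmetry is what drives my results, it would suggest the GARCH(1,1) under-predicts variance in some periods where the APARCH(1,1) does not.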
I gather that the MSE depends on forecast errors, while QLIKE depends on standardised errors, but how would I interpret this?
$QLIKE=\frac{\hat{\sigma}^2}{h}-\log\frac{\hat{\sigma}^2}{h}-1$, with $\hat{\sigma}^2$ the volatility proxy and $h$ the variance forecast for a given period.
The reasoning, if I recall correctly, is that this gives a QLIKE value of 0 if the forecast and the volatility proxy are identical.
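That zero is in fact the minimum, which can be checked directly: writing $x=\hat{\sigma}^2/h$, the per-period loss is $f(x)=x-\log x-1$, with $f(1)=0$ and $f'(x)=1-1/x$, so $x=1$ (forecast equal to proxy) is the unique minimiser. For $x>1$ (under-prediction) the linear term $x$ dominates, while for $x<1$ (over-prediction) the loss grows only like $-\log x$, which is the source of QLIKE's asymmetry.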
– Pedestrian Jul 27 '16 at 13:58