
Which test is required to test whether the coefficients estimated as part of an ARIMA procedure are different from 0? And how does one compute this test?

I am reading some procedures regarding the inversion of the Hessian matrix during MLE. Is this a direction I can pursue? I really do not want to get into bootstrapping or other resampling methods, as my dataset is large and takes forever to compute.

  • Why would you want to? Using significance of coefficients is a very bad way to choose the model order. Is there any conceivable use of such tests that is helpful? – Rob Hyndman Aug 11 '14 at 09:00
  • Not for model selection. For model checking and diagnostics. – Cagdas Ozgenc Aug 11 '14 at 10:08
  • What sort of checking and diagnostics? I do not think the tests will tell you anything useful that cannot be seen using far more effective approaches. – Rob Hyndman Aug 11 '14 at 10:43

1 Answer


You can use the standard error of the coefficient to compute the t value. Interpreting the t value, i.e. converting it to a probability using the normal distribution, requires that the errors from the model are Gaussian. To test the Gaussian assumption one must verify the following:

  1. There are no pulses, level shifts, seasonal pulses, or time trends in the error process.
  2. The error variance is free of structural (deterministic) change, which would suggest the possible need for weighted least squares.
  3. The error variance is not related to the expected value, which would suggest the possible need for a power transform, e.g. logs, square roots, or reciprocals.
  4. The parameters of the ARIMA model are invariant over time; parameter instability would suggest time-varying coefficients.
  5. The square of the errors is not describable as an ARMA process; if it is, that possibly suggests the need for a GARCH augmentation.

Good software not only estimates the parameters but also tests the assumptions that are necessary to convert the computed t values to probabilities. Alas, this is often missing!
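As a minimal sketch of the computation described above: here a conditional-least-squares fit of an AR(1) model stands in for a full ARIMA MLE (all variable names are illustrative, not from any particular package), and the t value is the estimate divided by its standard error, converted to a two-sided p-value under the normal distribution.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series with true coefficient 0.6
phi_true = 0.6
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

# Conditional least squares: regress y_t on an intercept and y_{t-1}
X = np.column_stack([np.ones(n - 1), y[:-1]])
z = y[1:]
beta, *_ = np.linalg.lstsq(X, z, rcond=None)

# Estimated error variance (denominator n - k for an unbiased estimate)
resid = z - X @ beta
sigma2 = resid @ resid / (len(z) - X.shape[1])

# Standard errors: square roots of the diagonal of sigma^2 * (X'X)^{-1}
cov = sigma2 * np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(cov))

# t value for the AR coefficient and its two-sided normal p-value
t_phi = beta[1] / se[1]
p_phi = 2 * (1 - 0.5 * (1 + math.erf(abs(t_phi) / math.sqrt(2))))

print(f"phi_hat = {beta[1]:.3f}, se = {se[1]:.3f}, t = {t_phi:.2f}, p = {p_phi:.4f}")
```

With 500 observations the AR coefficient is estimated precisely, so the t value is large and the null of a zero coefficient is clearly rejected; the normal-based p-value is only trustworthy if the residual diagnostics listed above pass.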

IrishStat
  • Obviously my problem is to compute the standard errors for the coefficients. I am sure there is software for this, but I need to learn how to do it myself. – Cagdas Ozgenc Jun 06 '14 at 15:19
  • The standard errors are obtained by taking the diagonal elements of the inverse X'X matrix and multiplying them by the estimated standard deviation of the errors. – IrishStat Jun 07 '14 at 00:45
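The recipe in the last comment can be sketched in a few lines of NumPy; a plain linear regression stands in for the ARIMA design matrix, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic regression: intercept plus two predictors
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.standard_normal(n)

# Least-squares coefficient estimates
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Estimated error variance from the residuals
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k)

# Diagonal elements of the inverse X'X matrix, scaled by the
# estimated error variance, give the squared standard errors
XtX_inv = np.linalg.inv(X.T @ X)
se = np.sqrt(np.diag(XtX_inv) * sigma2_hat)

print("estimates: ", np.round(beta_hat, 3))
print("std errors:", np.round(se, 3))
```

For MLE more generally (as in the question), the analogous quantity is the inverse of the negative Hessian of the log-likelihood at the optimum, whose diagonal gives the estimated coefficient variances.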