
I coded a version of the adaptive lasso that does model selection for ARCH(q) (hopefully GARCH(p,q) soon) processes. It optimizes over a grid of four parameters. Right now it works with:

LamdaT <- seq(0.5, 1.7, by = 0.2)
gamma0 <- 2  # seq(0.25, 1.75, by = 0.25)
gamma1 <- seq(0.25, 1.75, by = 0.25)
gamma2 <- seq(0.25, 1.75, by = 0.25)

Is there a way to narrow down the range or the step size so that the runtime decreases? Or is trial and error the best I can do?

mexx

1 Answer


The best approach would be not to use a grid search at all. Nearly any other strategy is more efficient, from plain random search up to specialized algorithms doing Bayesian optimization or using other techniques (see e.g. hyperopt or optuna). They usually do the job faster and give you better-quality results.
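For illustration, random search over the same ranges as the grid in the question could look like the sketch below. Here `fit_arch_alasso()` is a hypothetical stand-in for the asker's fitting routine, assumed to return a model-selection score (e.g. BIC) to minimize:

```r
set.seed(1)
n_trials <- 50
results <- data.frame(lambdaT = numeric(n_trials),
                      gamma1  = numeric(n_trials),
                      gamma2  = numeric(n_trials),
                      score   = numeric(n_trials))

for (i in seq_len(n_trials)) {
  # Draw each hyperparameter uniformly from the range the grid covered
  lambdaT <- runif(1, 0.5, 1.7)
  gamma1  <- runif(1, 0.25, 1.75)
  gamma2  <- runif(1, 0.25, 1.75)
  results[i, ] <- c(lambdaT, gamma1, gamma2,
                    fit_arch_alasso(lambdaT, gamma1, gamma2))
}

best <- results[which.min(results$score), ]
```

With the same budget of evaluations, random search explores many more distinct values per dimension than a regular grid does.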

If you don't have a good a priori idea of the grid to search, the way to go is to test some values, use them to narrow down the search space, and then repeat the procedure recursively. Roughly speaking, this is what those specialized algorithms do, but in a principled rather than ad hoc way.
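The recursive narrowing described above can be sketched as a coarse-to-fine search along one dimension (the objective function here is a hypothetical placeholder):

```r
# Evaluate a coarse grid, then shrink the window around the best
# point and repeat with a finer grid.
search_1d <- function(objective, lo, hi, n = 5, rounds = 3) {
  best <- lo
  for (r in seq_len(rounds)) {
    grid  <- seq(lo, hi, length.out = n)
    vals  <- sapply(grid, objective)
    best  <- grid[which.min(vals)]
    width <- (hi - lo) / n       # new half-width of the search window
    lo    <- max(lo, best - width)
    hi    <- min(hi, best + width)
  }
  best
}
```

Each round multiplies the resolution by roughly `n / 2`, so three rounds of a 5-point grid reach a finer resolution than a single 125-point grid in the same dimension, at a fraction of the evaluations.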

Tim
    Certainly these are good considerations for a naively implemented algorithm, but don't Lasso-oriented algorithms compute the solution for an entire grid almost as quickly as they compute it for a single value of the penalty? That's certainly the case for the glmnet package in R. If so, your recommendations could backfire by causing more programming effort to achieve a longer computation time! – whuber Mar 29 '23 at 17:45
  • @Tim big thanks! This actually reduced the computation time of each iteration by a lot and gave me way better results. – mexx Mar 30 '23 at 15:44
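As an aside on the point raised in the comments: for the penalty parameter specifically, path algorithms such as the one in R's glmnet compute solutions for an entire sequence of lambda values in one call, at little more than the cost of a single fit. A minimal sketch with simulated data:

```r
library(glmnet)

# Simulated data, purely for illustration
set.seed(1)
x <- matrix(rnorm(100 * 10), nrow = 100, ncol = 10)
y <- rnorm(100)

fit <- glmnet(x, y, alpha = 1)  # lasso: fits the full lambda path at once
coef(fit, s = 0.1)              # coefficients extracted at one lambda value
```

This only covers the penalty dimension, though; the gamma weighting exponents of the adaptive lasso would still need an outer search such as the ones above.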