
I am interested in comparing a non-linear model with up to 12 parameters to many datasets. However, each instance of the model takes a significant amount of time to compute (~1 hour), so I am pre-computing instances of the model for various parameter values and then comparing these to all the different datasets.

There are various ways to sample parameter space. So far I've come across regular grids (impossible here, since even a coarse grid in 12 dimensions would require far too many model instances), sparse grids + interpolation, and Monte-Carlo random sampling, and there are probably others. Which approach would be optimal for a fixed amount of computing resources, and therefore a fixed number of model instances?
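For concreteness, here is a minimal sketch of what the Monte-Carlo option would look like; the parameter bounds and the budget of 500 instances are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical bounds for the 12 parameters (replace with the real ranges).
lower = np.zeros(12)
upper = np.ones(12)

n_models = 500  # fixed compute budget: number of pre-computed model instances

# Monte-Carlo random sampling: each row is one parameter vector
# at which the expensive model would be pre-computed.
samples = lower + (upper - lower) * rng.random((n_models, 12))
```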

astrofrog

1 Answer


My understanding is that the optimal methodology will depend on the shape of the error surface over parameter space, and in particular on how strongly the parameters interact. If each parameter has its own distinct minimum independent of all the other parameters, model fitting should proceed easily with sparse grids and interpolation. If instead there are lots of local minima, interpolation between sparse grid points will tend to miss them, and repeated fits will seldom converge on the same values. Given the time required to compute each model instance, I doubt Monte-Carlo random sampling will be the optimal approach. Another approach which you haven't considered yet is genetic algorithms, but again convergence to a single answer may be difficult.
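As a rough illustration of the sparse-sampling-plus-interpolation idea, the sketch below builds a cheap surrogate of the fit surface from pre-computed points using SciPy's RBFInterpolator; the `samples` and `chi2` arrays are stand-ins for your real pre-computed parameter vectors and fit statistics:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Stand-ins: `samples` would be the (n_models x 12) parameter vectors at
# which the model was pre-computed, and `chi2` a goodness-of-fit value for
# each instance against one dataset.
n_models = 500
samples = rng.random((n_models, 12))
chi2 = rng.random(n_models)

# Radial-basis-function interpolation gives a smooth surrogate for the fit
# surface between the pre-computed points; being smooth, it can only be
# trusted if the true surface has no narrow local minima between samples.
surrogate = RBFInterpolator(samples, chi2)

# Evaluate the surrogate at new trial parameter vectors.
trial = rng.random((10, 12))
print(surrogate(trial))
```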

russellpierce