I am interested in comparing a non-linear model with up to 12 parameters against many datasets. However, each instance of the model takes a significant amount of time to compute (~1 hour), so I am pre-computing model instances at various parameter values and then comparing those to all the different datasets.
There are various ways to sample parameter space. So far I've come across regular grids (infeasible in 12 dimensions), sparse grids plus interpolation, and Monte Carlo random sampling, and there are probably others. Which approach would be optimal for a fixed amount of computing resources, and therefore a fixed number of model instances?
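To make the budget problem concrete, here is a minimal sketch of why a regular grid is ruled out and what Monte Carlo random sampling of the parameter space looks like. The bounds, the sample count, and the unit hypercube are all placeholder assumptions, not part of my actual model:

```python
import numpy as np

n_params = 12      # dimensionality of the model's parameter space
n_samples = 500    # hypothetical budget of ~1-hour model evaluations

# Even a coarse regular grid with 3 values per axis needs 3**12 runs,
# which is far beyond any realistic budget at ~1 hour per instance.
grid_runs = 3 ** n_params  # 531441

# Placeholder parameter bounds (the real model would supply its own).
lower = np.zeros(n_params)
upper = np.ones(n_params)

# Monte Carlo random sampling: uniform draws within the bounds.
rng = np.random.default_rng(seed=0)
samples = lower + (upper - lower) * rng.random((n_samples, n_params))
# Each row of `samples` is one parameter vector to pre-compute.
```

The appeal of random (or quasi-random) sampling is that the cost scales with the number of samples you choose, not exponentially with the number of parameters, which is exactly the constraint here.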