
It is frequently stated that Bayesian optimization (BO) can only be efficient for roughly 10 dimensions or fewer. By efficient, I mean that the optimum is reached in an acceptable time, say less than a day, given all the data. For your best guess at the most efficient BO implementation, please give the open-source package and the maximum dimension it can handle.

A comment has been put forward arguing that the answer depends on the time taken by a function evaluation. Let's assume that a function evaluation takes zero time; I just want to know about the crunch time, disregarding function evaluation.

Perhaps a better metric is the time it takes to compute the next point to measure, given the latest function evaluation. Suppose the only requirement is that the computation of the next point to measure must return in less than one day. What is the maximum dimension then?
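To make that metric concrete, here is a minimal sketch of timing exactly this step: fit a Gaussian process surrogate to the data seen so far, then pick the next point by maximizing expected improvement. The choice of scikit-learn, the Matérn kernel, random candidate search, and the toy objective are all illustrative assumptions, not anything specified in the question.

```python
import time
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
d = 20                               # input dimension (assumed for illustration)
n = 200                              # observations gathered so far
X = rng.uniform(size=(n, d))
y = np.sum((X - 0.5) ** 2, axis=1)   # toy quadratic objective (minimization)

start = time.perf_counter()

# 1. Fit the GP surrogate; the O(n^3) Cholesky factorization dominates as n grows.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# 2. Maximize expected improvement over a batch of random candidate points.
candidates = rng.uniform(size=(10_000, d))
mu, sigma = gp.predict(candidates, return_std=True)
best = y.min()
z = (best - mu) / np.maximum(sigma, 1e-12)
ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
x_next = candidates[np.argmax(ei)]

print(f"next point computed in {time.perf_counter() - start:.2f} s")
```

Note that the crunch time here scales with the number of observations $n$ (cubically, through the GP fit) and only mildly with the dimension $d$; the practical limit on $d$ comes from needing many more samples to model a high-dimensional function, not from the per-iteration cost itself.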

  • This will depend on the task. 1. Bayesian optimisation is ultimately Gaussian process regression. 2. If a single function evaluation takes 15 minutes and we have 100 dimensions, that is ~25 hours just to get 100 samples so that our covariance matrix is not rank deficient. 3. GPs can work with large $d$, but usually via some variant like sparse pseudo-input Gaussian processes, and I am unaware of a BOpt package using such advanced methods. – usεr11852 Apr 09 '20 at 01:09
  • I agree that the efficiency depends on many factors, including function evaluation time and the type of input space (discrete or continuous). However, the state-of-the-art approaches are usually very fast up to ~15-20 dimensions. Please look at Section G in the appendix of the following paper https://arxiv.org/pdf/1910.01739.pdf for more details and comparisons of runtimes on standard benchmarks. The code is also available here: https://github.com/uber-research/TuRBO (a usage sketch follows below). – randomprime Apr 15 '20 at 02:57
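For reference, the TuRBO code mentioned in the last comment can be driven roughly as follows. This is a hedged sketch based on the usage pattern shown in the repository's README, so the argument names may differ between versions, and the 100-dimensional toy objective is purely illustrative.

```python
import numpy as np
from turbo import Turbo1  # from https://github.com/uber-research/TuRBO

d = 100

def f(x):
    """Toy objective to minimize: a shifted quadratic (illustrative only)."""
    return float(np.sum((x - 0.5) ** 2))

turbo = Turbo1(
    f=f,              # handle to the objective function
    lb=np.zeros(d),   # lower bounds of the search box
    ub=np.ones(d),    # upper bounds of the search box
    n_init=20,        # initial Latin hypercube design points
    max_evals=500,    # total evaluation budget
    batch_size=10,    # points proposed per iteration
    verbose=True,
)
turbo.optimize()

# turbo.X holds the evaluated points, turbo.fX the observed values.
print(f"best value found: {turbo.fX.min():.4f}")
```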