Let $f:[0,1]^{80} \rightarrow [0,1]$ be some function, and suppose I have a computationally expensive way to evaluate $f(x)$ at any $x \in [0,1]^{80}$ (expensive = roughly 40 s per query).
The goal is to maximize $f$. Any advice?
My ideas so far are:
- Gradient ascent (equivalently, gradient descent on $-f$), numerically approximating the gradient at each step with finite differences. This is problematic because queries are so expensive: a single forward-difference gradient estimate in 80 dimensions already costs 81 queries, i.e. about 54 minutes, so I can't afford to approximate the gradient properly.
- Surrogate approximation: sample $m$ points $x_1, \dots, x_m \in [0,1]^{80}$, compute each $f(x_i)$, fit a regression model $\hat{f}$ to these $m$ pairs, and then maximize $\hat{f}$ (which has a cheap, computable gradient), hoping that the maximizer of $\hat{f}$ approximately maximizes $f$.
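For concreteness, here is a minimal sketch of the first idea: forward-difference gradient estimation plus projected gradient ascent on $[0,1]^{80}$. The step size, tolerance, and iteration count are placeholders I chose for illustration, not values suggested by anything above; note that each iteration costs $d+1$ queries of the real $f$.

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-3):
    """Forward-difference gradient estimate; costs d + 1 queries of f."""
    fx = f(x)
    grad = np.empty_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - fx) / eps
    return grad

def gradient_ascent(f, x0, step=0.05, iters=10):
    """Projected gradient ascent (maximization) on the box [0, 1]^d."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = finite_diff_grad(f, x)       # d + 1 expensive queries per step
        x = np.clip(x + step * g, 0.0, 1.0)
    return x
```

At 40 s per query this is 81 queries, or roughly 54 minutes, per gradient step in 80 dimensions, which is exactly the cost problem raised above.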
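And a minimal sketch of the second idea. The choice of a Gaussian-process regressor with a Matérn kernel is my assumption (any regression model would fit the scheme), `f` here is a cheap stand-in for the real 40 s black box, and $m$, the kernel, and the number of restarts are illustrative values only.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
d = 80

def f(x):
    """Cheap stand-in for the expensive black-box function."""
    return float(np.exp(-np.sum((x - 0.5) ** 2)))

# 1. Sample m points and evaluate f (the expensive part: m queries).
m = 200
X = rng.uniform(size=(m, d))
y = np.array([f(x) for x in X])

# 2. Fit a surrogate regression model f_hat to the (x_i, f(x_i)) pairs.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# 3. Maximize the cheap surrogate from several random restarts.
best_x, best_val = None, -np.inf
for x0 in rng.uniform(size=(5, d)):
    res = minimize(lambda x: -gp.predict(x.reshape(1, -1))[0],
                   x0, bounds=[(0.0, 1.0)] * d, method="L-BFGS-B")
    if -res.fun > best_val:
        best_x, best_val = res.x, -res.fun
```

This sample-fit-maximize loop is essentially the surrogate-modeling core of Bayesian optimization; in that setting one would iterate, evaluating the true $f$ at the surrogate's proposed maximizer and refitting, rather than doing a single pass.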