
This is what I learned in class about how the posterior is updated for a Gaussian process: [image: GP posterior update equations]
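The image itself is missing; for reference, the standard noisy-observation GP posterior update it presumably showed (with $K_t$ the $t \times t$ kernel matrix over the observed points $x_1,\dots,x_t$, $y_t$ the observed values, and $\sigma^2$ the noise variance) is:

```latex
\mu_t(x) = k_t(x)^\top \left(K_t + \sigma^2 I\right)^{-1} y_t,
\qquad
\sigma_t^2(x) = k(x,x) - k_t(x)^\top \left(K_t + \sigma^2 I\right)^{-1} k_t(x),
```

where $k_t(x) = \big(k(x, x_1), \dots, k(x, x_t)\big)^\top$. The matrix being inverted, $K_t + \sigma^2 I$, is $t \times t$ regardless of the dimension of $x$.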

It seems the computational cost of the matrix inversion scales only with the size of K_t, which depends on the number of data points rather than the dimension of x. Intuitively I can see that in a higher-dimensional space there are more degrees of freedom, so more data points are needed to reduce the uncertainty, and K_t becomes larger. But is there some formula from which we can directly see how the computation scales with dimensionality?
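A minimal NumPy sketch of the point above (not from the original post; the RBF kernel choice, sizes, and jitter are my own assumptions): the kernel matrix K_t is t × t whatever the input dimension d, so the inversion costs O(t^3) independent of d, while only building K_t touches d, at O(t^2 d):

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel matrix; building it is O(t^2 * d)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

t, noise = 50, 1e-2  # number of data points, noise variance (jitter)
for d in (2, 50):    # the input dimension varies ...
    X = np.random.default_rng(0).normal(size=(t, d))
    K = rbf_kernel(X)                             # ... but K_t stays (t, t)
    K_inv = np.linalg.inv(K + noise * np.eye(t))  # O(t^3), independent of d
    print(d, K.shape)
```

So d never enters the inversion cost directly; it enters indirectly through how large t must grow to cover a d-dimensional space.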

Sam
  • Do I understand correctly that your question boils down to "how does the computational cost of matrix inversion depend on the size of the matrix?". Note that the answer can be different for dense and sparse matrices. – paperskilltrees Oct 27 '21 at 05:39
  • I understand that the computational cost of matrix inversion is O(n^3) when there are no structural properties to exploit. My question is more about whether there is a formula that tells us why high-dimensional BO is costly, as opposed to just the 'the higher the dimension, the more data are needed' intuition. – Sam Oct 27 '21 at 05:52

0 Answers