This is what I learned in class about how the posterior is updated for a Gaussian process:

It seems the computational cost of the matrix inversion scales only with the size of K_t, which depends on the number of data points rather than the dimension of x. Intuitively I can see that a higher-dimensional space has more degrees of freedom, so more data points are needed to reduce the uncertainty, and K_t becomes larger. But is there a formula from which we can directly see how the computation scales with dimensionality?
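To make the cost structure I have in mind concrete, here is a minimal sketch (the RBF kernel and all names are my own choices, not from the class): the input dimension d shows up only in the O(n²·d) pairwise kernel evaluations, while the O(n³) factorization/inversion of the n×n matrix K depends only on the number of points n.

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel matrix for rows of X (shape (n, d))."""
    # Pairwise squared distances: O(n^2 * d) work -- the only place d enters.
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-0.5 * d2 / lengthscale**2)

n, d = 200, 50
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))

K = rbf_kernel(X)                              # (n, n) regardless of d
# Cholesky factorization used for the posterior update: O(n^3), no d dependence.
L = np.linalg.cholesky(K + 1e-6 * np.eye(n))   # small jitter for stability
print(K.shape)
```

So on this accounting the total cost is roughly O(n²·d + n³): dimensionality enters linearly through kernel evaluation, and any stronger dependence on d would come indirectly, through how many points n you need to cover a d-dimensional space.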