I would like to estimate the joint probability density function underlying a data set with a large number of samples (50,000+) and a very large number of continuous variables (2,048).
Computational efficiency is somewhat important, so I would like to avoid approaches based on artificial neural networks.
Considering the high-dimensional setting, is kernel density estimation still an appropriate method? Are there any alternatives?
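For concreteness, this is a minimal sketch of the kind of fit I had in mind, using scikit-learn's `KernelDensity`; the synthetic data, reduced sample count, and the fixed bandwidth are placeholders rather than my real setup:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Stand-in for my data: the real set is roughly 50,000 x 2,048,
# here shrunk so the example runs quickly.
rng = np.random.default_rng(0)
X = rng.standard_normal((5_000, 2_048))

# Gaussian KDE with a single isotropic bandwidth (placeholder value).
# My concern is whether this is meaningful at all in 2,048 dimensions,
# and how the bandwidth would be chosen in that regime.
kde = KernelDensity(kernel="gaussian", bandwidth=1.0).fit(X)

# Evaluate the estimated log-density at a few sample points.
log_density = kde.score_samples(X[:10])
print(log_density)
```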