I understand the curse of dimensionality, and in machine learning at least, I have heard the rule of thumb that a minimum of 100-500 samples per class label is needed to train an algorithm effectively (setting aside one-shot learning techniques still in development).
Is there a formula that provides guidance on an acceptable number of dimensions, given the size of the data set and the number of levels of each dimension? I haven't found one, but it seems like such a formula should exist.
I could use such a formula to determine whether dimensionality reduction (DR) is needed, or avoid DR altogether by leaving out features that are uncorrelated with the target variable.