From a theoretical perspective, is the Bayes optimal classifier (BOC) the best possible classifier one can build? Is it better than, say, a neural network or a GBDT?
Suppose we have two distributions $P(X,Y)$ and $P(X',Y')$, and we use the BOC to distinguish between them.
If the BOC performs no better than chance (e.g. AUC $=0.5$), can we conclude that $P(X,Y)\equiv P(X',Y')$? Or could another learning algorithm still achieve better performance?
Note: I am not considering multivariate distribution testing.
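To make the setup concrete, here is a minimal sketch (my own illustration, not part of the question's formal setup) of the "distinguish two distributions with a classifier" idea using two univariate Gaussians, where the Bayes optimal score is available in closed form as the posterior computed from the true densities:

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Two univariate Gaussians; with mu_q == mu_p the two distributions coincide.
mu_p, mu_q, sigma, n = 0.0, 0.0, 1.0, 5000
x_p = rng.normal(mu_p, sigma, n)
x_q = rng.normal(mu_q, sigma, n)

x = np.concatenate([x_p, x_q])
y = np.concatenate([np.zeros(n), np.ones(n)])  # label 1 = drawn from Q

# Bayes optimal score: posterior P(label=1 | x) from the true densities,
# assuming equal class priors.
dens_p = norm.pdf(x, mu_p, sigma)
dens_q = norm.pdf(x, mu_q, sigma)
posterior = dens_q / (dens_p + dens_q)

auc = roc_auc_score(y, posterior)
print(auc)
```

When the two distributions are identical, the posterior is constant at 0.5 for every sample, so the AUC is exactly 0.5; setting `mu_q = 1.0` instead makes the distributions differ and the AUC rises above 0.5.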