O'Brien (1988) has shown that a powerful way to do multivariate testing is to reverse the problem: instead of testing whether the grouping affects the measured values, test whether the measured values predict the grouping. These are logically equivalent formulations, and approaching the problem this way, such as with a logistic regression, has advantages over, say, Hotelling's $T^2$ test.
In that link, Harrell writes:
> If there is not just a difference in means but a difference in variance for a response across the groups, you include a square term in the logistic model for that response. I suppose that if skewness differs you could include a cube term.
The comments about squaring and cubing make sense to me, and I suppose that raising a feature to the $k$-th power corresponds to testing for a difference in the $k$-th moment (though perhaps not the $k$-th central moment).
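To make the variance case concrete, here is a minimal simulation sketch in Python (statsmodels and scipy assumed; the sample size, seed, and effect sizes are arbitrary illustrations, not anything from O'Brien's or Harrell's exposition). Two groups share the same mean but differ in spread, and a likelihood-ratio test for the squared term in the reversed logistic model picks up the difference that the linear term alone would miss:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)
n = 500
# Two groups with identical means but different variances for one response
x = np.concatenate([rng.normal(0.0, 1.0, n),   # group 0: sd = 1
                    rng.normal(0.0, 2.0, n)])  # group 1: sd = 2, same mean
g = np.repeat([0.0, 1.0], n)                   # group label as the outcome

# Reverse the problem: regress the group on the response
X_lin  = sm.add_constant(x.reshape(-1, 1))            # linear (mean) term only
X_quad = sm.add_constant(np.column_stack([x, x**2]))  # add the square term

fit_lin  = sm.Logit(g, X_lin).fit(disp=0)
fit_quad = sm.Logit(g, X_quad).fit(disp=0)

# Likelihood-ratio test for the squared term (1 df): a variance difference
lr = 2 * (fit_quad.llf - fit_lin.llf)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.3g}")
```

With settings like these the squared term should come out highly significant while the linear term alone shows essentially nothing, which matches Harrell's remark.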
Is this thinking correct? What would be the interpretation of other basis functions of the original features, such as $x_1 x_2$ or $x_1^2 x_2^3 x_3$? Can we test for particular differences in copulas by examining interactions like these?
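On the interaction question, a similar hedged sketch (same assumptions as above; this only probes the copula intuition empirically, it does not establish it): both groups have standard-normal marginals, so means, variances, and all higher marginal moments agree, and only the correlation between $x_1$ and $x_2$ differs. If the intuition is right, an $x_1 x_2$ term in the reversed logistic model should detect the difference in dependence structure:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 500

def sample(rho, size):
    # Standard-normal marginals; only the correlation differs between groups
    return rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size)

xy = np.vstack([sample(0.0, n),    # group 0: independent
                sample(0.6, n)])   # group 1: correlated, same marginals
g = np.repeat([0.0, 1.0], n)
x1, x2 = xy[:, 0], xy[:, 1]

X_main = sm.add_constant(np.column_stack([x1, x2]))
X_int  = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))

fit_main = sm.Logit(g, X_main).fit(disp=0)
fit_int  = sm.Logit(g, X_int).fit(disp=0)

# Likelihood-ratio test for the interaction term (1 df)
lr = 2 * (fit_int.llf - fit_main.llf)
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df=1):.3g}")
```

In this Gaussian case the interaction does flag the correlation difference, but that only shows sensitivity to one particular departure; whether products of powers map cleanly onto copula features in general is exactly what I am asking.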
REFERENCE
O'Brien, Peter C. (1988). "Comparing two samples: extensions of the t, rank-sum, and log-rank tests." *Journal of the American Statistical Association*, 83(401), 52–61.