
I usually normalise my input variables for linear models (e.g. apply a log transformation or a rank-based inverse normal transformation). One of the reasons I like using tree-based methods like Random Forests and XGBoost is that I do not need to transform the input features (when using these algorithms for classification).
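For concreteness, here is a minimal sketch of the two transformations I mean, using NumPy/SciPy on a synthetic skewed feature (the variable names are illustrative, not from any particular dataset):

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=1000)  # a skewed, strictly positive feature

# Log transformation (only valid for strictly positive values)
x_log = np.log(x)

# Rank-based inverse normal transformation:
# map the ranks of x onto quantiles of the standard normal
x_int = norm.ppf(rankdata(x) / (len(x) + 1))
```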

I know that it is important to scale input features for neural networks (i.e. subtract the mean and divide by the standard deviation), but is there a need to ensure input features are approximately normal when using neural networks for classification (like I would for linear models)?
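By scaling I mean something like the following scikit-learn sketch (synthetic data for illustration), where the mean and standard deviation are estimated on the training split only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.lognormal(size=(1000, 3))  # skewed features on very different scales
X_train, X_test = train_test_split(X, random_state=0)

# Fit on the training split only, then reuse the same mean/std on the
# test split so no test-set information leaks into the scaling
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```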


1 Answer


is there a need to ensure input features are approximately normal when using neural networks for classification

No. And this isn't required even for linear models. For a related answer in the context of linear regression, see this thread.
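As a quick toy check, a small network fits heavily skewed (log-normal) features that are merely standardised, with no normality transform applied (a minimal sketch with synthetic data):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.lognormal(size=(2000, 2))          # far from normally distributed
y = (X[:, 0] * X[:, 1] > 1.0).astype(int)  # a simple nonlinear target

# Standardise for optimisation stability, but apply no normality transform
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
print(cross_val_score(model, X, y, cv=5).mean())  # well above chance accuracy
```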
