
Of the two best-known techniques for feature scaling in Machine Learning:

  • Normalizing a feature to a $[0, 1]$ range, through $\frac{x - x_{min}}{x_{max} - x_{min}}$

or

  • Standardizing the feature (also referred to as z-score), through $\frac{x - \mu}{\sigma}$, where $\mu$ is the mean and $\sigma$ is the standard deviation.

Is there any reason to prefer one over the other? Does either one outperform the other when used with certain algorithms?
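
For concreteness, here is a minimal NumPy sketch of both scalings applied to a made-up feature vector (purely illustrative; scikit-learn's `MinMaxScaler` and `StandardScaler` implement the same formulas):

```python
import numpy as np

# Toy feature vector (made-up values for illustration)
x = np.array([1.0, 5.0, 10.0, 15.0, 20.0])

# Min-max normalization: rescales the feature to the [0, 1] range
x_minmax = (x - x.min()) / (x.max() - x.min())

# Z-score standardization: zero mean, unit standard deviation
x_zscore = (x - x.mean()) / x.std()

print(x_minmax)  # all values lie in [0, 1]
print(x_zscore)  # mean ~ 0, standard deviation ~ 1
```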
