What is the intuition, geometric or otherwise, for scaling 1D scalar data to zero mean and unit variance? The zero mean part seems straightforward, particularly for ML algorithms, but what is the significance of unit variance? Why not just scale by the max of the data so that the data is bounded to [0, 1]?
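
For concreteness, here is a minimal sketch (assuming NumPy and a made-up data vector) of the two scalings I'm comparing:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # hypothetical 1D data

# Standardization: subtract the mean, divide by the standard deviation
z = (x - x.mean()) / x.std()

# Max scaling: divide by the maximum (bounds nonnegative data to [0, 1])
m = x / x.max()

print(z.mean(), z.std())   # ~0.0, 1.0
print(m.min(), m.max())    # smallest value / max, 1.0
```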