If one has data that is assumed to be normally distributed and wants to use it as input to a machine learning model, why not first standardize the data and then normalize it (min-max scale it between zero and one)?
So, the first transform would be
$$ S = \frac{X - \mu}{\sigma} $$
...and then transform it once more: $$ X_{standardizedAndNormalized} = \frac{S - S_{\min}}{S_{\max}-S_{\min}} $$
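For concreteness, here is a minimal sketch of the two-step transform I have in mind, using scikit-learn's `StandardScaler` and `MinMaxScaler` (the sample data below is just an illustrative normal draw, not real data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

# Illustrative data assumed to come from a normal distribution
rng = np.random.default_rng(0)
X = rng.normal(loc=10.0, scale=2.0, size=(1000, 1))

# Step 1: standardize -> S = (X - mu) / sigma
S = StandardScaler().fit_transform(X)

# Step 2: min-max scale the standardized values into [0, 1]
X_scaled = MinMaxScaler().fit_transform(S)

print(X_scaled.min(), X_scaled.max())  # 0.0 1.0
```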