This is a simple method to transform samples from $\mathcal{N}(0, 1)$ into samples from $\mathcal{N}(\mu, \sigma)$ with arbitrary $(\mu, \sigma)$, without having to re-sample from $\mathcal{N}(\mu, \sigma)$.
My question is: is this mathematically correct? In other words, does the following identity hold (with $\sigma$ denoting the standard deviation)?
$$\mathcal{N}(\mu, \sigma) = \sigma \cdot \mathcal{N}(0, 1) + \mu$$
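For what it's worth, here is a quick numerical sanity check of the transform, a minimal sketch using NumPy; the values $\mu = 3$, $\sigma = 2$ and the sample size are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
mu, sigma = 3.0, 2.0  # arbitrary example parameters

# Draw from N(0, 1), then apply the location-scale transform.
z = rng.standard_normal(1_000_000)
x = sigma * z + mu

# Draw directly from N(mu, sigma) for comparison.
y = rng.normal(loc=mu, scale=sigma, size=1_000_000)

print("transformed:", x.mean(), x.std())  # approximately (3.0, 2.0)
print("direct:     ", y.mean(), y.std())  # approximately (3.0, 2.0)
```

Both sets of samples should show (up to sampling noise) the same mean and standard deviation, which is what the identity above claims.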