
This is a simple method to transform samples from $\mathcal{N}(0, 1)$ into samples from $\mathcal{N}(\mu, \sigma)$ with arbitrary $(\mu, \sigma)$, without having to re-sample from $\mathcal{N}(\mu, \sigma)$.

My question is: is this mathematically true? That is, does the following identity hold?

$\mathcal{N}(\mu, \sigma) = \sigma \cdot \mathcal{N}(0, 1) + \mu$
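
For concreteness, here is a quick empirical check (a minimal NumPy sketch; the particular mean, standard deviation, sample size, and seed are arbitrary choices added for illustration, not part of the original question):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 3.0, 2.0                 # target mean and standard deviation (arbitrary)
z = rng.standard_normal(100_000)     # samples from N(0, 1)
x = sigma * z + mu                   # transformed samples, claimed to be N(mu, sigma)

# Sample mean and standard deviation should land close to 3.0 and 2.0
print(x.mean(), x.std())
```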

Gabriel
  • Yes, it's true; this is likely a duplicate question – jwimberley May 17 '22 at 14:19
  • Couldn't find a duplicate but if you point me to it I'll vote to close this one – Gabriel May 17 '22 at 14:21
  • The duplicate covers the standard deviation component. The rest is demonstrating that adding $\mu$ shifts the distribution mean from 0 to $\mu$. From the linearity of expectation and a constant $c$, $E[X+c] = E[X] + E[c] = E[X] + c$. Proof complete. – Sycorax May 17 '22 at 15:04
  • @Gabriel - Just a minor point: a common notational convention has the second parameter of the $\mathcal{N}$ being $\sigma^2$ rather than $\sigma$. It's fine to define it as $\sigma$, but it can lead to confusion if you're not watching the definition carefully, and many sources rely on the convention without restating it, so it would be easy to miss a clash of notation when working with other sources (e.g. seeing $N(1,4)$ and thinking it meant $\sigma=4$ rather than $\sigma^2=4$). It's important to keep this hazard in mind as you read around. (I often end up writing, say, $N(1,2^2)$ to cue readers.) – Glen_b May 17 '22 at 23:46
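
Putting the comments together, a short sketch of why the identity holds (restating the linearity argument above, plus the variance scaling the proposed duplicate covers): if $X \sim \mathcal{N}(0, 1)$, then

$$\mathbb{E}[\sigma X + \mu] = \sigma\,\mathbb{E}[X] + \mu = \mu, \qquad \operatorname{Var}(\sigma X + \mu) = \sigma^2 \operatorname{Var}(X) = \sigma^2,$$

and since an affine transformation of a normal random variable is again normal, $\sigma X + \mu \sim \mathcal{N}(\mu, \sigma^2)$, i.e. a normal distribution with mean $\mu$ and standard deviation $\sigma$.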

0 Answers