
I'm trying to understand the standard deviation of the sampling distribution that appears in the Central Limit Theorem:

$\bar{X}\rightarrow N\left(\mu ,\frac{\sigma^{2}}{n}\right)$


I can see from a mathematical point of view that increasing $n$ reduces the standard deviation of the sampling distribution, but I can't explain this phenomenon in real-world terms. Is there a more intuitive explanation?

Ivan

1 Answer


When you draw a particular sample from a distribution, each observation will either land exactly at the mean (unlikely), fall to the left of it (less than the mean), or fall to the right of it (greater than the mean).

As you increase the size of your sample, it becomes more likely that an observation to the left of the mean will get cancelled out by some other observation in the sample falling to the right. That is, your samples become more "balanced".

Taking this to the limit, if you took a sample of infinite size, your sample mean would sit exactly at the distribution mean.
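A quick simulation (my own sketch, not part of the original answer; the population mean of 5 and standard deviation of 2 are arbitrary choices) illustrates the balancing effect: the empirical spread of the sample mean tracks the theoretical $\sigma/\sqrt{n}$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 2.0  # population mean and standard deviation (illustrative)

for n in (1, 10, 100, 1000):
    # Draw 100,000 independent samples of size n and take each sample's mean.
    means = rng.normal(loc=mu, scale=sigma, size=(100_000, n)).mean(axis=1)
    print(f"n={n:4d}  sd of sample mean = {means.std():.4f}"
          f"  (theory: {sigma / np.sqrt(n):.4f})")
```

Running this shows the standard deviation of the sample mean shrinking by roughly a factor of $\sqrt{10}$ each time $n$ grows tenfold, matching $\sigma/\sqrt{n}$.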

Hope that helps.

ldinh
  • Thank you for your reply. Following your point, I tried to understand it with dice: when $n=1$, the probability that $\bar{X}=1$ or $6$ is $1/3$, and when $n=2$ it drops to $2/36$. As $n$ grows, the probability that the average equals 1 or 6 approaches 0, which shows that the sample average concentrates around the expected value of the die. Inspiration also came from https://elonen.iki.fi/articles/centrallimit/index.en.html#demo – Ivan Mar 14 '22 at 09:11
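The dice probabilities in the comment can be verified by exact enumeration (my own sketch, not from the thread; `p_mean_extreme` is a hypothetical helper name):

```python
from itertools import product
from fractions import Fraction

def p_mean_extreme(n):
    """Exact probability that the mean of n fair six-sided dice equals 1 or 6."""
    outcomes = list(product(range(1, 7), repeat=n))
    # The mean is 1 only if every die shows 1, and 6 only if every die shows 6.
    hits = sum(1 for roll in outcomes if sum(roll) / n in (1, 6))
    return Fraction(hits, len(outcomes))

print(p_mean_extreme(1))  # 1/3
print(p_mean_extreme(2))  # 1/18, i.e. 2/36
```

For any $n$ the extreme averages require all dice to agree, so the probability is $2/6^{n}$, which vanishes as $n$ grows.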