
I need to come up with a practical example that illustrates consistency and unbiasedness. The example I thought of is the sample mean.

Consistency means the estimator (the sample mean) converges to the population mean as the sample size tends to infinity.

Unbiasedness means the estimator (the sample mean) equals the population mean on average, for any sample size.

Is my understanding correct here?
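To make this concrete, here is a minimal simulation sketch of what I mean (the exponential population with mean 2 is an arbitrary choice):

    import numpy as np

    # Arbitrary population for illustration: exponential with mean mu = 2.
    rng = np.random.default_rng(0)
    mu = 2.0

    for n in [10, 100, 10_000]:
        # Draw 5,000 samples of size n and take the sample mean of each.
        means = rng.exponential(scale=mu, size=(5_000, n)).mean(axis=1)
        # Unbiasedness: the average of the sample means is near mu at every n.
        # Consistency: their spread around mu shrinks as n grows.
        print(f"n={n:6d}  mean of sample means={means.mean():.3f}  "
              f"sd of sample means={means.std():.3f}")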


1 Answer


Let $Y_1,\ldots,Y_n$ be a random sample from some distribution $F_\theta$. An estimator $\hat\theta$ for $\theta$ is called unbiased if and only if the bias

$$b(\theta) = E_\theta(\hat\theta)-\theta,$$

equals zero; otherwise, it is called biased.
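For instance, taking $\hat\theta = \bar Y = \frac{1}{n}\sum_{i=1}^n Y_i$ as an estimator of the mean $\theta = E_\theta(Y_1)$, linearity of expectation gives

$$E_\theta(\bar Y) = \frac{1}{n}\sum_{i=1}^n E_\theta(Y_i) = \theta,$$

so $b(\theta) = 0$ for every $n$: the sample mean is unbiased, exactly as in your example.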

In many cases $b(\theta)$ is not exactly zero but is a function of $n$ such that $\lim_{n\to\infty} b(\theta) = 0$. In this case, the estimator is called asymptotically unbiased.
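A standard example is the plug-in variance estimator $\hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n (Y_i-\bar Y)^2$, for which

$$E_\theta(\hat\sigma^2) = \frac{n-1}{n}\sigma^2, \qquad b(\sigma^2) = -\frac{\sigma^2}{n} \to 0,$$

so it is biased for every finite $n$ but asymptotically unbiased.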

On the other hand, an estimator is called consistent if it converges in probability to $\theta$. That is, if for every $\epsilon>0$,

$$ \lim_{n\to\infty}P_\theta(|\hat\theta -\theta|<\epsilon) = 1. $$

Consistency is related to unbiasedness: indeed, a sufficient (though not necessary) condition for consistency is that

$$ \lim_{n\to\infty} b(\theta) = 0,\text{ and } \lim_{n\to\infty}\text{var}_\theta(\hat\theta)=0. $$
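Applied to your example: assuming $\text{var}_\theta(Y_1)=\sigma^2<\infty$, the sample mean has $b(\theta)=0$ and $\text{var}_\theta(\bar Y)=\sigma^2/n\to 0$, so by Chebyshev's inequality

$$ P_\theta(|\bar Y-\theta|\ge\epsilon) \le \frac{\sigma^2}{n\epsilon^2} \to 0, $$

which is exactly consistency.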

  • It's good not to forget about asymptotically unbiased. – User1865345 Oct 11 '23 at 08:07
  • On the difference between asymptotically unbiased and consistent, also see https://stats.stackexchange.com/questions/280684/intuitive-understanding-of-the-difference-between-consistent-and-asymptotically/ – Glen_b Oct 11 '23 at 23:11
  • Thanks for the pointer @Glen_b. Sadly that post was also closed as a duplicate. – utobi Oct 12 '23 at 06:59