(Please read until the end)
Consider two ways of writing the exponential density, once with the scale $\beta$ and once with the rate $\theta = \frac{1}{\beta}$:
(A) $\frac{1}{\beta} e^{-\frac{x}{\beta}}$ and
(B) $\theta e^{-x\theta}$
If I estimate $\beta$ or $\theta$ from a random sample $\{x_1,x_2,\dots,x_n\}$ via maximum likelihood estimation, I get reciprocal expressions:
(A) $\hat{\beta}_{MLE}= \frac{\sum{x_i}}{n}$, and
(B) $\hat{\theta}_{MLE}= \frac{n}{\sum{x_i}}$
The problem occurs when I calculate their bias:
(A) $\mathbb{E}[\hat{\beta}_{MLE}]= \beta$, so $\hat{\beta}_{MLE}$ is an unbiased estimator, while
(B) $\mathbb{E}[\hat{\theta}_{MLE}] = \frac{n}{n-1}\theta \neq \theta$, so $\hat{\theta}_{MLE}$ is a biased estimator. See Bias of the maximum likelihood estimator of an exponential distribution for why this is so.
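Both bias claims are easy to check numerically. Here is a minimal Monte Carlo sketch using NumPy; the true scale $\beta = 2$ (so $\theta = 0.5$) and the sample size $n = 5$ are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

beta = 2.0          # true scale; true rate theta = 1 / beta = 0.5
n = 5               # small sample size makes the bias clearly visible
reps = 200_000      # number of Monte Carlo repetitions

# Each row is one random sample of size n from Exp(scale=beta).
samples = rng.exponential(scale=beta, size=(reps, n))

beta_hat = samples.mean(axis=1)         # (A) MLE of the scale: sum(x_i) / n
theta_hat = n / samples.sum(axis=1)     # (B) MLE of the rate: n / sum(x_i)

print(beta_hat.mean())   # close to beta = 2.0 (unbiased)
print(theta_hat.mean())  # close to theta * n/(n-1) = 0.625, not 0.5 (biased)
```

With $n = 5$ the average of $\hat{\theta}_{MLE}$ lands near $\frac{5}{4}\theta$, i.e. about 25% above the true rate, while the average of $\hat{\beta}_{MLE}$ sits on top of the true scale.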
My question is: how can estimates of what is essentially the same parameter be biased and unbiased at the same time?
My question is more at a philosophical level. Intuitively, bias should only depend on three things:
- What type of distribution is it?
- What kind of estimation algorithm is it?
- Which of the distribution's parameters is being estimated?
and NOT on how the density happens to be parameterized.
So effectively, it feels like whatever property the bias metric was designed to measure, it fails to measure faithfully.
To answer my question, can someone shed light on an intuitive interpretation of bias? What real-life quantity is the bias metric trying to capture? And is there any metric better than bias that we could substitute for it?