5

Please pardon me for asking a simple question, but I find this field rather hard. Anyway, here it goes:

  1. We are trying to find the MLE for a distribution. For a given distribution, the MLE is a number that is difficult to estimate, so what we do is use the sufficient statistics for that particular distribution to find the MLE. Am I correct here? I would be absolutely grateful for any help.
user1343318
  • 1,341
  • You're asking if you should "use" sufficient statistics to find the MLE over that distribution. What do you mean by use? Which distribution are you dealing with? Why is finding the MLE directly difficult? – Taylor Jan 05 '13 at 04:39

1 Answer

4

The maximum likelihood estimator (MLE) is one type of estimator of a parameter, not of a distribution. You can find the MLE of a parameter so long as the likelihood is sufficiently nice, i.e. the maximum of the likelihood function, viewed as a function of the parameter, exists and is unique. This is independent of whether a nontrivial sufficient statistic for the parameter exists. If you are finding the MLE numerically, I can't see a good reason to worry about sufficient statistics.
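To illustrate the point about numerical maximization, here is a minimal sketch (not from the original answer, and the Cauchy choice is my own assumption): the MLE of the location parameter of a Cauchy distribution has no closed form, so we simply maximize the log-likelihood over a grid. No sufficient statistic is involved at any step.

```python
import math
import random

# Hypothetical example: the Cauchy location MLE has no closed form,
# so we maximize the log-likelihood numerically over a grid.
random.seed(0)
theta_true = 2.0
# Draw Cauchy(theta_true) samples via the inverse CDF:
# x = theta + tan(pi * (u - 1/2)) with u ~ Uniform(0, 1)
data = [theta_true + math.tan(math.pi * (random.random() - 0.5))
        for _ in range(500)]

def log_lik(theta, xs):
    # log f(x | theta) = -log(pi) - log(1 + (x - theta)^2), summed over xs
    return sum(-math.log(math.pi) - math.log(1 + (x - theta) ** 2)
               for x in xs)

# Crude grid search over a plausible range for theta
grid = [i / 100 for i in range(-500, 501)]
mle = max(grid, key=lambda t: log_lik(t, data))
print(mle)  # should land near theta_true = 2.0
```

A finer grid, or a proper optimizer, would sharpen the estimate, but the point stands: the procedure needs only the likelihood function itself.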

However, if you do have a nontrivial sufficient statistic $T(\mathbf{X})$ for a parameter, you might be able to simplify your computation. Suppose you have data $\mathbf{X}=(X_1,\ldots,X_n)$ with $X_i \sim P_{\theta}$ for $i=1,\ldots,n$. By the Factorization Theorem, you can write the PDF of $\mathbf{X}$ as

$$ f(\mathbf{X}\mid\theta) = g(T(\mathbf{X})\mid\theta)\, h(\mathbf{X}) $$

and thus the likelihood function, which is simply $f(\mathbf{X}\mid\theta)$ considered as a function of $\theta$, is proportional to $g(T(\mathbf{X})\mid\theta)$. Then you can maximize $g(T(\mathbf{X})\mid\theta)$ instead of the full likelihood. It seems to me this would only be worthwhile if you have a very complicated likelihood function for which numerical optimization is excessively time consuming.
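As a concrete sketch of this factorization (my own illustrative choice, not from the original answer): for an Exponential sample with rate $\theta$, the statistic $T = \sum_i x_i$ is sufficient, the density factorizes as $\theta^n e^{-\theta T} \cdot 1$ (so $h(\mathbf{x}) = 1$), and maximizing $g$ over a grid agrees with maximizing the full likelihood, both matching the closed form $n/T$.

```python
import math
import random

# Hypothetical sketch: Exponential(rate) likelihood factorizes as
#   f(x | rate) = rate^n * exp(-rate * T) * h(x),  with T = sum(x_i), h = 1,
# so maximizing g(T | rate) gives the same MLE as the full likelihood.
random.seed(1)
rate_true = 3.0
data = [random.expovariate(rate_true) for _ in range(1000)]
n, T = len(data), sum(data)

def log_g(rate):
    # log g(T | rate) = n*log(rate) - rate*T: depends on data only via T
    return n * math.log(rate) - rate * T

def full_log_lik(rate):
    # Full log-likelihood, touching every observation individually
    return sum(math.log(rate) - rate * x for x in data)

grid = [i / 100 for i in range(1, 1001)]
mle_from_T = max(grid, key=log_g)
mle_full = max(grid, key=full_log_lik)
print(mle_from_T, mle_full, n / T)  # all three should essentially agree
```

The computational saving here is trivial, but for a likelihood that is expensive to evaluate per observation, maximizing $g$ with $T$ precomputed can matter.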

caburke
  • 1,422