
Let $X_1, \dots, X_n$ be i.i.d. binary random variables with $P(X_i = 1) = p$. I want to find the MLE for $\theta = p(1-p)$.

The likelihood function should be $$ L(p) = \prod_{i=1}^n p^{x_i} (1-p)^{1-x_i} = p^{\sum_{i=1}^n x_i}(1-p)^{n - \sum_{i=1}^n x_i}. $$ How should I proceed? It seems I cannot just take the derivative with respect to $\theta = p(1-p)$.

Edit: You use the plug-in estimator: find $\hat{p}_{\mathrm{mle}}$ first, then $\hat{\theta} = \hat{p}(1-\hat{p})$.
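For concreteness, a sketch of that derivation (standard log-likelihood calculus, using only the setup above): maximizing $$\ell(p) = \left(\sum_{i=1}^n x_i\right)\log p + \left(n - \sum_{i=1}^n x_i\right)\log(1-p)$$ by setting $\ell'(p) = \frac{\sum_i x_i}{p} - \frac{n - \sum_i x_i}{1-p} = 0$ gives $\hat{p}_{\mathrm{mle}} = \bar{x}$, and the invariance property of the MLE then justifies $\hat{\theta} = \bar{x}(1-\bar{x})$.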

Now suppose I want to find the asymptotic distribution of $\hat{\theta}$. How do I compute the information matrix?
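For reference, a sketch of one standard route (Fisher information plus the delta method): for a single Bernoulli$(p)$ observation, $$I(p) = -\mathbb{E}\left[\frac{\partial^2}{\partial p^2} \log f(X; p)\right] = \frac{1}{p(1-p)},$$ so $\sqrt{n}(\hat{p} - p) \xrightarrow{d} N(0,\, p(1-p))$. With $g(p) = p(1-p)$ and $g'(p) = 1 - 2p$, the delta method gives $$\sqrt{n}(\hat{\theta} - \theta) \xrightarrow{d} N\!\left(0,\, (1-2p)^2\, p(1-p)\right),$$ provided $p \neq 1/2$; at $p = 1/2$ we have $g'(p) = 0$ and a second-order expansion is needed.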

  • If this is part of a homework assignment, please edit your question to add the self-study tag. – microhaus May 26 '23 at 18:23
  • 1. Consider taking the log-likelihood rather than the likelihood. 2. When computing partial derivatives of the log-likelihood and setting them to 0, do so with respect to $p$ and solve for the maximum likelihood estimator $\hat{p}$. 3. Now treat your variance $\theta = f(p)$ as a function of the parameter. You can now plug in $\hat{p}$ so that $\hat{\theta} = f(\hat{p})$. – microhaus May 26 '23 at 18:39
  • Why is this $\hat{p}(1-\hat{p})$ still an MLE? – Mondayisgood May 26 '23 at 18:45
  • Consider what happens if you choose a $\theta^* \neq \hat{p}(1-\hat{p})$. You are, implicitly, choosing an estimate of $p \neq \hat{p}$, i.e., one that is not the MLE. Therefore $\theta^*$ cannot be the MLE of $\theta$. – jbowman May 26 '23 at 18:48
  • @SextusEmpiricus Yes. I have another question: suppose I want to find the asymptotic distribution for $\theta = p(1-p)$, how do I find the information matrix in this case? – Mondayisgood May 26 '23 at 18:50
  • $\theta = p(1-p)$ does not fully specify the likelihood function. How would you define the MLE? – Sextus Empiricus May 26 '23 at 18:56
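A minimal simulation sketch checking the delta-method approximation discussed above (illustrative only; `n`, `p`, and `reps` are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 0.3          # sample size and true success probability (arbitrary)
theta = p * (1 - p)      # true value of theta
reps = 10_000            # number of Monte Carlo replications

# Each replication: xbar is the mean of n Bernoulli(p) draws,
# simulated via a single Binomial(n, p) count.
xbar = rng.binomial(n, p, size=reps) / n
theta_hat = xbar * (1 - xbar)          # plug-in MLE of theta

# Delta method predicts sqrt(n)*(theta_hat - theta) ~ N(0, (1-2p)^2 p(1-p))
empirical_sd = np.std(np.sqrt(n) * (theta_hat - theta))
predicted_sd = abs(1 - 2 * p) * np.sqrt(p * (1 - p))
print(empirical_sd, predicted_sd)      # should agree closely for large n
```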