I have a similar maximum likelihood problem setup, and a follow-up question to the one asked here.

My constraints involve the vector parameters $\vec{w}=\{w_1,w_2,\cdots,w_K\}$ and $\vec{\mu} = \{\mu_1,\cdots,\mu_K\}$, each a $K$-vector with $K>1$. The equality constraints are $\sum_{i=1}^K w_i = 1$ and $\sum_{i=1}^K w_i \mu_i = 0$. There are many other parameters, and also various inequality constraints.

At first, I did maximum likelihood estimation in the reduced parameter space, as suggested by the accepted answer at the link. In other words, I used the two equality constraints to eliminate both $w_K$ and $\mu_K$. Say this reduced problem has $N$ unknown parameters. An MLE optimum was found, and it was interior to the region defined by the various inequality constraints. The $N \times N$ Fisher information matrix at the optimum in the reduced space was inverted to estimate standard errors for the parameters that remain in the reduced space. That's fine, but what are the asymptotic standard errors for $\hat{w}_K$ and $\hat{\mu}_K$?
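To be concrete, the candidate procedure I have in mind for the eliminated parameters is the delta method: write $\hat{w}_K = 1 - \sum_{i=1}^{K-1} \hat{w}_i$ and $\hat{\mu}_K = -\big(\sum_{i=1}^{K-1} \hat{w}_i \hat{\mu}_i\big)/\hat{w}_K$ as functions of the reduced-space estimates, and propagate the inverted Fisher information through the Jacobian of that map. A minimal numerical sketch (the function names and the layout of `theta` are hypothetical, not my actual code):

```python
import numpy as np

def recovered_params(theta, K):
    """Recover (w_K, mu_K) from the reduced parameter vector.

    Hypothetical layout: theta = [w_1..w_{K-1}, mu_1..mu_{K-1}, <other params>].
    """
    w = theta[:K - 1]
    mu = theta[K - 1:2 * (K - 1)]
    w_K = 1.0 - w.sum()                 # from sum_i w_i = 1
    mu_K = -(w @ mu) / w_K              # from sum_i w_i mu_i = 0
    return np.array([w_K, mu_K])

def delta_method_se(theta_hat, cov_reduced, K, eps=1e-6):
    """Delta-method SEs for (w_K, mu_K), given the reduced-space MLE and its
    covariance (the inverted N x N Fisher information).  The Jacobian of the
    recovery map is computed by central finite differences."""
    n = len(theta_hat)
    J = np.zeros((2, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = eps
        J[:, j] = (recovered_params(theta_hat + step, K)
                   - recovered_params(theta_hat - step, K)) / (2 * eps)
    cov_out = J @ cov_reduced @ J.T     # 2 x 2 covariance of (w_K, mu_K)
    return np.sqrt(np.diag(cov_out))

# Purely illustrative numbers, K = 3 and no extra parameters:
theta_hat = np.array([0.3, 0.5, -1.0, 0.4])      # w_1, w_2, mu_1, mu_2
cov_hat = np.diag([1e-4, 1e-4, 1e-3, 1e-3])      # stand-in for inv(Fisher info)
print(delta_method_se(theta_hat, cov_hat, K=3))  # SEs for (w_3, mu_3)
```

With analytic derivatives the finite differencing is unnecessary, but the propagation step would be the same.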

I also solved the problem in the full parameter space, which includes $w_K$ and $\mu_K$, with the equality constraints enforced by the optimizer. The same optimum was found. Inverting the larger, $(N+2) \times (N+2)$ Fisher information matrix yields standard errors for all $N+2$ parameters. But now the question becomes: are those standard errors correct in the presence of the equality constraints?
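My suspicion is that they are not, because a plain inverse ignores the constraints. As far as I understand (this is my reading of standard equality-constrained ML theory of the Aitchison–Silvey type, not something I have verified for my own problem), if $g(\theta) = 0$ collects the two equality constraints and $G = \partial g / \partial \theta^{\top}$ is their $2 \times (N+2)$ Jacobian at the optimum, the asymptotic covariance of the constrained estimator is the projected inverse

$$ V \;=\; I^{-1} \;-\; I^{-1} G^{\top}\!\left( G\, I^{-1} G^{\top} \right)^{-1}\! G\, I^{-1}, $$

where $I$ is the full $(N+2)\times(N+2)$ Fisher information. With the ordering $(w_1,\dots,w_K,\mu_1,\dots,\mu_K,\dots)$, the two rows of $G$ would be $(1,\dots,1,0,\dots,0,0,\dots)$ for $\sum_i w_i - 1 = 0$ and $(\mu_1,\dots,\mu_K,w_1,\dots,w_K,0,\dots)$ for $\sum_i w_i \mu_i = 0$.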

In summary, my question boils down to: what is a good procedure for estimating asymptotic standard errors for $\hat{w}_K$ and $\hat{\mu}_K$?

EDIT: I think I've found my answer here, in the answer by Alecos Papadopoulos.
