Given a probability density, say $f(x;\theta)$, with parameter $\theta$, and a sample of size $n$, say $x_1,\ldots,x_n$, we can compute the MLE, say $\hat\theta_n$, by passing the negative log-likelihood built from $f$, together with $x_1,\ldots,x_n$, to `optim` in R.
By the asymptotic normality of the MLE, we will have that
$\sqrt{n}\,(\hat\theta_n - \theta)$ converges in distribution to $\mathcal{N}(0,V)$, where $V$ is the inverse of the Fisher information; in practice it is estimated by $n$ times the inverse of the Hessian of the negative log-likelihood at $\hat\theta_n$.
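For concreteness, here is a minimal sketch of what I mean, assuming a hypothetical Beta($\theta$, 1) density $f(x;\theta) = \theta x^{\theta-1}$ on $(0,1)$; the density and names such as `negloglik` are just for illustration:

```r
## A minimal sketch, assuming a hypothetical Beta(theta, 1) density,
## f(x; theta) = theta * x^(theta - 1) on (0, 1); names are mine.
set.seed(1)
theta_true <- 0.6
x <- rbeta(500, shape1 = theta_true, shape2 = 1)

## Negative log-likelihood, to be minimized by optim.
negloglik <- function(theta, x)
  -sum(dbeta(x, shape1 = theta, shape2 = 1, log = TRUE))

## One-dimensional problem, so method = "Brent" with wide bounds is convenient.
fit <- optim(par = 0.5, fn = negloglik, x = x, method = "Brent",
             lower = 1e-8, upper = 10, hessian = TRUE)
theta_hat <- fit$par
## The Hessian of the negative log-likelihood at theta_hat estimates
## n * I(theta), so its inverse estimates Var(theta_hat) = V / n.
se_theta <- sqrt(drop(solve(fit$hessian)))
c(theta_hat = theta_hat, se = se_theta)
```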
Now suppose we want to ensure that the parameter $\theta$ satisfies a constraint, say $0 \leq \theta \leq 1$. Then we define
$g(\omega) = \exp(\omega)/(1+\exp(\omega))$ and consider the reparametrized density $f(x; g(\omega))$. Now we pass the negative log-likelihood of $f \circ g$, together with $x_1,\ldots,x_n$, to `optim` and optimize over the unconstrained $\omega$. Can we claim that
$\sqrt{n}\,(\hat\omega_n - \omega)$ will converge in distribution to $\mathcal{N}(0, V_1)$, where $\omega$ is the value satisfying $g(\omega) = \theta$? It is not clear to me whether I can apply the asymptotic normality of the MLE to $f \circ g$. Can someone explain this step in detail? I do realize that the next step will be an application of the delta method, but it is this first step that I need clarity on.
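To show where I am headed, here is a sketch of the reparametrized fit together with the delta-method step I mentioned (same hypothetical Beta($\theta$, 1) setup as above; whether the Hessian-based variance for $\hat\omega_n$ is justified is exactly what I am asking):

```r
## Sketch of the reparametrized fit plus the delta method, under the same
## hypothetical Beta(theta, 1) setup as above; g is the logistic map.
set.seed(1)
x <- rbeta(500, shape1 = 0.6, shape2 = 1)

g <- function(omega) exp(omega) / (1 + exp(omega))  # maps R onto (0, 1)

## Negative log-likelihood of f composed with g; omega is unconstrained.
negloglik_omega <- function(omega, x)
  -sum(dbeta(x, shape1 = g(omega), shape2 = 1, log = TRUE))

fit_w <- optim(par = 0, fn = negloglik_omega, x = x,
               method = "BFGS", hessian = TRUE)
omega_hat <- fit_w$par
var_omega <- drop(solve(fit_w$hessian))  # estimate of V1 / n for omega_hat

## Delta method: Var(g(omega_hat)) ~ g'(omega_hat)^2 * Var(omega_hat),
## where g'(omega) = g(omega) * (1 - g(omega)) for the logistic map.
theta_hat <- g(omega_hat)
se_theta  <- sqrt((theta_hat * (1 - theta_hat))^2 * var_omega)
c(theta_hat = theta_hat, se = se_theta)
```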