As far as I'm concerned, constrained optimization is a less-than-optimal way of avoiding strong fluctuations in the parameter estimates for your independents due to bad model specification. Pretty often a constraint is "needed" when the variance-covariance matrix is ill-conditioned, when there is a lot of (unaccounted-for) correlation between independents, when you have aliasing or near-aliasing in the dataset, when you gave the model too many degrees of freedom, and so on. Basically, every condition that inflates the variance of the parameter estimates will cause an unconstrained method to behave poorly.
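To make that concrete, here is a minimal sketch (all names and numbers are my own, purely illustrative) of how near-collinear independents blow up the variance of unconstrained least-squares estimates:

```python
# Minimal sketch: near-collinear predictors inflate the variance of OLS estimates.
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 100, 2000
true_beta = np.array([1.0, 1.0])

def simulate_betas(corr):
    """Fit OLS on n_sims simulated datasets and return the estimated coefficients."""
    cov = np.array([[1.0, corr], [corr, 1.0]])
    betas = np.empty((n_sims, 2))
    for i in range(n_sims):
        X = rng.multivariate_normal(np.zeros(2), cov, size=n)
        y = X @ true_beta + rng.normal(size=n)
        betas[i], *_ = np.linalg.lstsq(X, y, rcond=None)
    return betas

for corr in (0.0, 0.99):
    sd = simulate_betas(corr).std(axis=0)
    print(f"corr={corr:4.2f}  sd of coefficient estimates: {sd.round(3)}")
# With corr=0.99 the estimates fluctuate wildly around the true values,
# which is exactly the symptom that tempts people to bolt on constraints.
```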
You can look at constrained optimization, but I reckon you should first take a closer look at your model if you believe constrained optimization is necessary, for two reasons:
- There's no way you can still rely on the inference, not even on the estimated variances of your parameters.
- You have no control over the amount of bias you introduce.
So depending on the goal of the analysis, constrained optimization is either a sub-optimal solution (if you purely want to estimate the parameters) or inappropriate (when inference is needed).
On a side note, penalized methods (in this case penalized likelihoods) are specifically designed for these cases: they introduce the bias in a controlled manner, where it is (mostly) accounted for. With these, there is no need for constrained methods, as the classic optimization algorithms will do a pretty good job. And with the correct penalization, inference is still valid in many cases. So I'd rather go for such a method than impose arbitrary constraints that are not backed up by an inferential framework.
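For illustration, a minimal sketch of a ridge-type penalized likelihood handled by an ordinary unconstrained optimizer; the penalty weight `lam` and all other names are my own choices, not a prescription:

```python
# Minimal sketch: a penalized (ridge-type) Gaussian likelihood optimized with a
# standard unconstrained optimizer. The bias enters explicitly through `lam`
# instead of through a hard constraint.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 100
cov = np.array([[1.0, 0.99], [0.99, 1.0]])        # near-collinear predictors
X = rng.multivariate_normal(np.zeros(2), cov, size=n)
y = X @ np.array([1.0, 1.0]) + rng.normal(size=n)

def penalized_nll(beta, lam):
    # Gaussian negative log-likelihood (up to a constant) plus an L2 penalty.
    resid = y - X @ beta
    return 0.5 * resid @ resid + 0.5 * lam * beta @ beta

unpenalized = minimize(penalized_nll, x0=np.zeros(2), args=(0.0,)).x
penalized   = minimize(penalized_nll, x0=np.zeros(2), args=(10.0,)).x
print("unpenalized:", unpenalized.round(3))
print("penalized:  ", penalized.round(3))
# The penalized fit shrinks the estimates toward zero in a controlled way,
# while the optimizer itself stays a plain unconstrained routine (BFGS here).
```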
My 2 cents, YMMV.