Here's an alternative method that gives you more control. The idea is to let scipy's minimize, with autograd supplying gradients, do the heavy lifting for you.
We define our negative log-likelihood, which we wish to minimize. Note that I'm going to optimize the scale parameters in log-space, as this is much easier (take the exp to transform back).
We generate some fake data.
We set up the call to minimize. Note the bounds argument: for each parameter in [mu1, log_sigma1, mu2, log_sigma2, w] there is a corresponding (lower, upper) bound, where None means unbounded in that direction.
Printing results.x gives us our estimates.
from autograd.scipy.stats import norm
from autograd import numpy as np
from autograd import value_and_grad
from scipy.optimize import minimize
from scipy.stats import norm as norm_  # only used to sample the fake data
def negative_log_likelihood(params, obs):
    mu1, log_sigma1, mu2, log_sigma2, w = params
    sigma1 = np.exp(log_sigma1)
    sigma2 = np.exp(log_sigma2)
    return -np.log(w * norm.pdf(obs, mu1, sigma1) + (1 - w) * norm.pdf(obs, mu2, sigma2)).sum()
# generate some fake data; success means recovering these parameters
obs = np.r_[
    norm_(loc=2, scale=1).rvs(200),
    norm_(loc=-0.75, scale=0.5).rvs(500),
]
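# optional sanity check: the objective should evaluate to a finite float
# at the initial guess before we hand it to the optimizer
assert np.isfinite(negative_log_likelihood(np.array([1.0, 0.0, -1.0, 0.0, 0.5]), obs))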
results = minimize(
    value_and_grad(negative_log_likelihood),  # see autograd docs
    x0=np.array([1, 0, -1, 0, 0.5]),          # initial value
    args=(obs,),
    jac=True,
    bounds=(
        (0, None),     # mu1 (you mentioned the constraints on the means)
        (None, None),  # log_sigma1 is unbounded
        (None, 0),     # mu2 (you mentioned the constraints on the means)
        (None, None),  # log_sigma2 is unbounded
        (0, 1),        # the weight param should be between 0 and 1
    ),
)
print(results.x)
output:
[ 2.03682793 -0.01359715 -0.7450283 -0.67540341 0.28439886]
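The ordering matches the parameter vector above: [mu1, log_sigma1, mu2, log_sigma2, w]. Since the scale parameters were optimized in log-space, take the exp to transform them back. A short follow-up on the fit above:

est_mu1, est_log_sigma1, est_mu2, est_log_sigma2, est_w = results.x
print(np.exp(est_log_sigma1))  # ~0.99, true scale was 1
print(np.exp(est_log_sigma2))  # ~0.51, true scale was 0.5

The means (~2.04 and ~-0.75) and the weight (~0.28, versus the true mixing fraction of 200/700 ≈ 0.29) are recovered nicely as well.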