For a given horse race, I have the predicted running-time distribution of each horse, as shown below:
Image taken from http://www.cms.zju.edu.cn/UploadFiles/AttachFiles/20054191750380.ppt
Based on these normal distributions, I want to calculate the probability of each horse winning the race, and I want the resulting probabilities to add up to 1.
The x-axis in the diagram above is the horses' running time, so the horse with the lowest time (the left-most distribution) is the most likely winner.
I know that for two independent normal distributions, the probability P(X1 <= X2) that a value drawn from X1 is less than or equal to a value drawn from X2 is given by
ss.norm.cdf(-(mu1 - mu2) / np.sqrt(sigma1**2 + sigma2**2))
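where mu1, mu2 are the means and sigma1, sigma2 the standard deviations. In symbols, since X1 - X2 is normal with mean mu1 - mu2 and variance sigma1^2 + sigma2^2,

$$P(X_1 \le X_2) = P(X_1 - X_2 \le 0) = \Phi\!\left(\frac{\mu_2 - \mu_1}{\sqrt{\sigma_1^2 + \sigma_2^2}}\right).$$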
Proof:
import numpy as np
from scipy import stats as ss

mu1, mu2, sigma1, sigma2 = 0., 100., 1., 1.
X2 = np.random.normal(mu2, sigma2, size=(10000,))
# Monte Carlo check: average, over fresh draws from X1, the fraction of the
# X2 sample that the X1 draw falls at or below.
est = np.mean([np.mean(np.random.normal(mu1, sigma1) <= X2) for _ in range(10000)])
assert np.isclose(est, ss.norm.cdf(-(mu1 - mu2) / np.sqrt(sigma1**2 + sigma2**2)), atol=1e-3)
So how do I extend the above code to compute the winning probability of every horse, such that the probabilities add up to 1?
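One direction I imagine myself (a rough sketch only; the three means and standard deviations below are made up for illustration) is to draw one time per horse per simulated race and count how often each horse has the minimum time, which makes the estimates sum to 1 by construction:

import numpy as np

mus = np.array([100., 102., 104.])   # hypothetical predicted mean times
sigmas = np.array([3., 2., 4.])      # hypothetical standard deviations
rng = np.random.default_rng(0)
n_sims = 100_000

# One row per simulated race, one column per horse.
times = rng.normal(mus, sigmas, size=(n_sims, len(mus)))
winners = times.argmin(axis=1)       # index of the fastest horse in each race
win_probs = np.bincount(winners, minlength=len(mus)) / n_sims
print(win_probs, win_probs.sum())    # fractions of races won; sum to 1 by construction

But I am not sure whether this simulation is the right (or the most accurate) way to do it.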

R code to illustrate the numerical integration solution. It is far more accurate and efficient than any simulation and works with any number of "horses." – whuber Jan 15 '21 at 22:06
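The R code the comment refers to is not reproduced here. My own rough Python sketch of what I take the numerical-integration idea to be: horse i wins when its time is below everyone else's, so P(i wins) is the integral over t of f_i(t) times the product over j != i of P(X_j > t). The example parameters are hypothetical:

import numpy as np
from scipy import stats as ss
from scipy.integrate import quad

mus = np.array([100., 102., 104.])   # hypothetical mean times
sigmas = np.array([3., 2., 4.])      # hypothetical standard deviations

def win_probability(i):
    # Integrand: density of horse i's time at t, times the probability
    # that every other horse is slower than t.
    def integrand(t):
        p = ss.norm.pdf(t, mus[i], sigmas[i])
        for j in range(len(mus)):
            if j != i:
                p *= ss.norm.sf(t, mus[j], sigmas[j])
        return p
    return quad(integrand, -np.inf, np.inf)[0]

win_probs = np.array([win_probability(i) for i in range(len(mus))])
print(win_probs, win_probs.sum())    # sums to 1 up to integration error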