Since you want to estimate not only each salesperson's effectiveness but also the precision with which that effectiveness is measured, I suggest a logistic regression, which you can fit with either frequentist or Bayesian methods.
Use salesperson as a factor variable (where $m$ is the number of salespersons) and estimate a coefficient for each individual salesperson.
\begin{align}
Y_{i} &\sim \operatorname{Bernoulli}(p_{i}) \\
\log{\left(\frac{p_{i}}{1-p_{i}}\right)} &= \mu + \beta_1 x_{i1} + \beta_2 x_{i2} + \ldots + \beta_m x_{im}
\end{align}
In this setup $Y_{i}$ is the random variable indicating whether customer $i$ makes a purchase, $p_{i}$ is the probability that customer $i$ makes a purchase, $\mu$ is the intercept (on the log-odds scale), $\beta_j$ is a performance coefficient for salesperson $j$, and $x_{ij}$ is $1$ if salesperson $j$ attempted to sell to customer $i$ and $0$ otherwise. If each customer is approached by exactly one salesperson, the full set of $x_{ij}$ is collinear with the intercept, so you will want to omit one of the dummies (common choices are salesperson $1$, salesperson $m$, or whichever has the most sales); $\mu$ is then the log-odds of a purchase for the omitted (reference) salesperson, and each remaining $\beta_j$ gives salesperson $j$'s performance relative to that reference.
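For concreteness, if salesperson $1$ is chosen as the omitted reference, the model becomes
\begin{align}
\log{\left(\frac{p_{i}}{1-p_{i}}\right)} &= \mu + \beta_2 x_{i2} + \ldots + \beta_m x_{im}
\end{align}
so that $\mu$ is the log-odds of a purchase for salesperson $1$, and each $\beta_j$ (for $j = 2, \ldots, m$) is the log odds ratio of salesperson $j$ versus salesperson $1$.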
If you estimate this in a frequentist statistics package you will get standard errors and confidence intervals for the parameters $\mu$ and $\beta_j$. In a Bayesian package you will need to specify vague priors for these parameters, and you will get a posterior distribution from which you can read off posterior SDs and credible intervals. In a simple application there is little to choose between the two approaches, except perhaps that the Bayesian one makes it easier to compute rank probabilities (i.e., the probability that salesperson $j$ is the best, second-best, third-best, and so on).
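To illustrate the rank-probability idea: given posterior draws of the coefficients, the probability that a salesperson is best is just the share of draws in which their coefficient is largest. A minimal sketch in Python, where the "posterior draws" are simulated normals standing in for real MCMC output and the names, means, and SDs are made up for illustration:

```python
import random

random.seed(42)

# Hypothetical posterior draws for each salesperson's coefficient.
# In practice these would come from your Bayesian sampler's MCMC
# output; here they are simulated normals purely for illustration.
names = ["Alice", "Bob", "Charles", "Danny"]
means = [0.0, -0.12, 0.29, -0.41]   # made-up posterior means
sds   = [0.76, 0.91, 1.30, 0.88]    # made-up posterior SDs
n_draws = 10_000

draws = [[random.gauss(m, s) for _ in range(n_draws)]
         for m, s in zip(means, sds)]

# P(salesperson j is best) = share of draws in which j's
# coefficient is the largest.
best_counts = [0] * len(names)
for d in range(n_draws):
    j = max(range(len(names)), key=lambda k: draws[k][d])
    best_counts[j] += 1

p_best = [c / n_draws for c in best_counts]
for name, p in zip(names, p_best):
    print(f"P({name} is best) = {p:.3f}")
```

The same counting trick gives the probability of being second-best, third-best, etc. if you rank all coefficients within each draw instead of taking only the maximum.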
Things get more complicated if there are multiple attempts to sell to the same customer (potentially with visits from multiple salespersons), in which case you will need a mixed-model approach with a random effect for customer.
Here are example fits in Stata and R based on a toy dataset with four salespersons (Alice has 3 sales out of 7 customers, Bob 2 out of 5, Charles 1 out of 2, Danny 2 out of 6).
In Stata:
. logistic sale i.salesperson_id
Logistic regression Number of obs = 20
LR chi2(3) = 0.22
Prob > chi2 = 0.9745
Log likelihood = -13.350794 Pseudo R2 = 0.0081
--------------------------------------------------------------------------------
sale | Odds Ratio Std. Err. z P>|z| [95% Conf. Interval]
---------------+----------------------------------------------------------------
salesperson_id |
Bob | .8888889 1.057989 -0.10 0.921 .0862412 9.161782
Charles | 1.333333 2.143034 0.18 0.858 .0571247 31.12102
Danny | .6666667 .7698004 -0.35 0.725 .0693467 6.40902
|
_cons | .75 .572822 -0.38 0.706 .1678593 3.351021
--------------------------------------------------------------------------------
In R:
> glm.out <- glm(sale ~ salesperson, family=binomial, data=df)
> summary(glm.out)
Call:
glm(formula = sale ~ salesperson, family = binomial, data = df)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.1774 -1.0226 -0.9005 1.3018 1.4823
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.2877 0.7638 -0.377 0.706
salespersonBob -0.1178 1.1902 -0.099 0.921
salespersonCharles 0.2877 1.6073 0.179 0.858
salespersonDanny -0.4055 1.1547 -0.351 0.725
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 26.920 on 19 degrees of freedom
Residual deviance: 26.702 on 16 degrees of freedom
AIC: 34.702
Number of Fisher Scoring iterations: 4
Note that the estimates from Stata are in the form of odds ratios (use logit instead of logistic to change this) which are exponentiated versions of the R output. You can confirm for example that for Bob, Stata gives an odds ratio of 0.8888889 which is equal to exp(-0.1178) from R (up to rounding error).
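Both numbers can also be reproduced by hand from the raw counts, since with a single factor the fitted model just reproduces the observed odds in each group. A quick check in Python:

```python
import math

# Raw counts from the toy dataset above.
alice_sales, alice_n = 3, 7
bob_sales, bob_n = 2, 5

# Odds of a sale for each salesperson: p / (1 - p) = sales / non-sales.
odds_alice = alice_sales / (alice_n - alice_sales)   # 3/4 = 0.75
odds_bob = bob_sales / (bob_n - bob_sales)           # 2/3

# Stata's odds ratio for Bob versus the omitted category (Alice)...
or_bob = odds_bob / odds_alice                        # 8/9 = 0.8888...

# ...equals the exponential of R's log-odds coefficient for Bob.
log_or_bob = math.log(or_bob)                         # approx. -0.1178
```

Note that `odds_alice` also matches Stata's `_cons` of 0.75, and `log(odds_alice)` matches R's intercept of -0.2877.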
You will also see that in both Stata and R the precision of the estimate is less (i.e., SE is greater) for Charles (only 2 customers) than for Bob and Danny.
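In fact, for a single-factor model like this one, each standard error on the log-odds-ratio scale has a closed form: the square root of the sum of reciprocals of the four relevant cell counts (sales and non-sales for the reference salesperson and for salesperson $j$). A quick check in Python against the R standard errors above:

```python
import math

def se_log_or(a, b, c, d):
    """Wald standard error of a log odds ratio computed from a
    2x2 table with cell counts a, b, c, d."""
    return math.sqrt(1/a + 1/b + 1/c + 1/d)

# Cell counts: (reference sales, reference non-sales,
#               salesperson j sales, salesperson j non-sales).
# Reference is Alice: 3 sales, 4 non-sales.
se_bob     = se_log_or(3, 4, 2, 3)  # Bob: 2 sales, 3 non-sales
se_charles = se_log_or(3, 4, 1, 1)  # Charles: 1 sale, 1 non-sale
se_danny   = se_log_or(3, 4, 2, 4)  # Danny: 2 sales, 4 non-sales

print(round(se_bob, 4), round(se_charles, 4), round(se_danny, 4))
```

The small cell counts for Charles (1 and 1) are what inflate his standard error relative to Bob's and Danny's.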
Hope this helps!