Background
I am working with posterior probability distributions for parameters obtained from a Bayesian binomial generalised linear model with a logit link function. The parameters returned by the model are the log-odds intercept (a) and the slope (b, which I will call the logistic rate k below). The logistic equation for these models can thus be written as $f(x) = \frac{1}{1 + e^{kx + a}}$ or, equivalently, $f(x) = \frac{1}{1 + e^{k(x + \mu)}}$, where $\mu = \frac{a}{k}$ is the inflection point of the sigmoid. I prefer the second form because µ is often more meaningful than a.
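For reference, the algebra linking the two forms is just factoring k out of the exponent:

$$kx + a = k\left(x + \tfrac{a}{k}\right) = k(x + \mu), \qquad \text{so } \mu = \tfrac{a}{k}.$$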
Specific problem
Rather than working only with central tendencies, I would like to estimate the entire probability distribution of µ so that I can calculate probability intervals etc. I tried doing this by dividing the posterior draws of a by the posterior draws of k. However, the result is a strange, angular distribution whose mean is not at all similar to the quotient of the means of a and k. Here is a minimal reproducible example (MRE) in R:
set.seed(1) # for reproducibility
a <- rnorm(1e4, 3, 1)
k <- rnorm(1e4, -0.2, 0.1)
µ <- a/k
mean(µ)-mean(a)/mean(k) # means are very different
require(ggplot2)
ggplot() + geom_density(aes(a))
ggplot() + geom_density(aes(k)) # distributions for a and k look fine
ggplot() + geom_density(aes(µ)) + coord_cartesian(xlim = c(-300, 400)) # distribution for µ is angular
I know for a fact that the estimate of µ obtained as the quotient of the means of a and k is correct, while the estimate obtained as the mean of the quotient of a and k is not, because inserting the former into $f(x) = \frac{1}{1 + e^{k(x + \mu)}}$ matches the model prediction in probability space (p) derived from the posteriors of a and k.
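For concreteness, here is a sketch of the comparison I mean, reusing the simulated a and k above as stand-ins for the real posterior draws (the names p_posterior, mu_ratio_of_means etc. are just for this illustration):

x <- seq(-50, 80, length.out = 201)
# posterior prediction in probability space: average f(x) over all draws
p_posterior <- sapply(x, function(xi) mean(1 / (1 + exp(k * xi + a))))
# plug-in predictions from the two candidate point estimates of µ
mu_ratio_of_means <- mean(a) / mean(k)
mu_mean_of_ratio  <- mean(µ)
p_ratio_of_means <- 1 / (1 + exp(mean(k) * (x + mu_ratio_of_means)))
p_mean_of_ratio  <- 1 / (1 + exp(mean(k) * (x + mu_mean_of_ratio)))
ggplot() +
  geom_line(aes(x, p_posterior)) +
  geom_line(aes(x, p_ratio_of_means), linetype = "dashed") +
  geom_line(aes(x, p_mean_of_ratio), linetype = "dotted")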
Question
Why is the quotient of two posteriors angular, and why does it lead to wrong inference? How can the mean of a quotient distribution differ from the quotient of the means of the dividend and divisor distributions? Would specifying µ rather than a as a parameter in the model make any difference?
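Regarding the last question, this is the kind of reparameterised model I have in mind. It is only a sketch: it assumes a brms-style nonlinear formula and a hypothetical data frame dat with columns y (successes), n (trials) and x (predictor), which may not match my actual fitting setup.

library(brms)
# sketch: fit µ (mu) and k directly as nonlinear parameters
fit <- brm(
  bf(y | trials(n) ~ 1 / (1 + exp(k * (x + mu))),
     k + mu ~ 1, nl = TRUE),
  data = dat,
  family = binomial(link = "identity"), # the formula already returns a probability
  prior = c(prior(normal(0, 1), nlpar = "k"),
            prior(normal(0, 50), nlpar = "mu"))
)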
