I know the size, empirical mean and empirical variance of two samples $X_1$ and $X_2$, but I don't have access to the individual values. How can I compute the bounds of a confidence interval for the relative difference of the means, $(m_1-m_2)/m_1$?
When I try a direct calculation, I have to compute the variance of a ratio. A formula based on a Taylor series expansion is available in an online published paper, but the corresponding first-order approximation is not accurate in my case because the variances are quite large compared to the means.
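To make it concrete, here is a rough sketch of the kind of first-order (delta-method) approximation I mean, with made-up summary statistics and assuming the two samples are independent (so the sample means are uncorrelated):

```python
import numpy as np
from scipy import stats

# Placeholder summary statistics (not my real data)
n1, m1, s1 = 50, 10.0, 8.0   # size, mean, std of reference sample X1
n2, m2, s2 = 60,  7.5, 9.0   # size, mean, std of sample X2

# Variances of the sample means
var_m1 = s1**2 / n1
var_m2 = s2**2 / n2

# Point estimate of the relative difference R = (m1 - m2)/m1 = 1 - m2/m1
r_hat = (m1 - m2) / m1

# First-order Taylor (delta-method) variance of the ratio m2/m1,
# with Cov(m1, m2) = 0 under independence:
#   Var(m2/m1) ~ (m2/m1)^2 * (var_m2/m2^2 + var_m1/m1^2)
var_r = (m2 / m1) ** 2 * (var_m2 / m2**2 + var_m1 / m1**2)

# Normal-approximation 95% confidence interval
z = stats.norm.ppf(0.975)
ci = (r_hat - z * np.sqrt(var_r), r_hat + z * np.sqrt(var_r))
print(r_hat, ci)
```

My concern is that this approximation breaks down when $s_1/m_1$ and $s_2/m_2$ are large, which is exactly my situation.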
I thought about an alternative: treating the mean and variance of the reference sample $X_1$ as known theoretical constants instead of random variables. This amounts to comparing $X_2$ to a theoretical distribution rather than comparing two samples. In that case there is no longer a ratio of random quantities, but I am not sure the approach makes sense.
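If I take that route, here is a sketch of what I have in mind (again with placeholder numbers, and not something I have validated): $m_1$ is treated as an exact constant, so the only randomness comes from the estimate of $m_2$, and the confidence interval for $m_2$ is mapped through the transformation $R = (m_1 - m_2)/m_1$, which is monotone in $m_2$ when $m_1 > 0$ is fixed:

```python
import numpy as np
from scipy import stats

# Same placeholder summary statistics as above
m1 = 10.0                     # treated as an exact, known constant
n2, m2, s2 = 60, 7.5, 9.0     # size, mean, std of sample X2

# t-based confidence interval for the mean of X2 alone
alpha = 0.05
t = stats.t.ppf(1 - alpha / 2, df=n2 - 1)
half_width = t * s2 / np.sqrt(n2)
m2_lo, m2_hi = m2 - half_width, m2 + half_width

# R = (m1 - m2)/m1 is decreasing in m2 for fixed m1 > 0,
# so the endpoints of the CI for m2 map to R in reverse order
r_lo = (m1 - m2_hi) / m1
r_hi = (m1 - m2_lo) / m1
print((m1 - m2) / m1, (r_lo, r_hi))
```

Is either approach defensible here, or is there a better way to get a confidence interval for $(m_1 - m_2)/m_1$ from summary statistics alone?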