
I know the size, empirical mean, and empirical variance of two samples $X_1$ and $X_2$, but not the individual values. How can I calculate the bounds of a confidence interval for the relative difference of the means, $(m_1-m_2)/m_1$?

When I try a direct calculation, I have to compute the variance of a ratio. A formula based on a Taylor series is available in a paper published online, but the corresponding first-order approximation is not accurate in my case because the variances are large compared to the mean values.
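
For concreteness, here is a minimal sketch of that first-order (delta-method) calculation from the summary statistics alone, assuming independent samples and approximately normal sample means; the function name and interface are mine, purely for illustration:

```python
from statistics import NormalDist

def delta_ci(n1, mean1, var1, n2, mean2, var2, level=0.95):
    """First-order delta-method CI for (m1 - m2)/m1 = 1 - m2/m1.

    Assumes the two samples are independent and the sample means
    are approximately normal (CLT). A sketch, not an established
    routine; its accuracy degrades when the variances are large
    relative to the means, which is exactly the problem above.
    """
    r_hat = 1.0 - mean2 / mean1
    v1 = var1 / n1  # Var(xbar1)
    v2 = var2 / n2  # Var(xbar2)
    # Gradient of g(a, b) = 1 - b/a at (mean1, mean2):
    #   dg/da = b / a**2,   dg/db = -1 / a
    var_r = (mean2 / mean1**2) ** 2 * v1 + v2 / mean1**2
    se = var_r ** 0.5
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    return r_hat - z * se, r_hat + z * se
```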

I thought about an alternative: treat the mean and variance of the reference sample $X_1$ as fixed theoretical constants instead of random variables. This amounts to comparing $X_2$ to a theoretical distribution rather than comparing two samples. In that case there is no ratio anymore, but I am not sure the approach makes sense.
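
Under that simplification the interval becomes a linear transform of the usual confidence interval for $m_2$. A minimal sketch, assuming $m_1 > 0$ and a normal approximation for $\bar{X}_2$ (again, names and details are mine):

```python
from statistics import NormalDist

def fixed_reference_ci(mu1, n2, mean2, var2, level=0.95):
    """CI for (mu1 - m2)/mu1 when the reference mean mu1 is
    treated as a known constant. The ratio disappears: the
    interval is a linear transform of the usual CI for m2.
    Assumes mu1 > 0; a t quantile with n2 - 1 degrees of
    freedom would be more accurate for small n2.
    """
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    half = z * (var2 / n2) ** 0.5
    lo_m2, hi_m2 = mean2 - half, mean2 + half
    # g(m2) = 1 - m2/mu1 is decreasing in m2 when mu1 > 0,
    # so the bounds swap under the transform.
    return 1.0 - hi_m2 / mu1, 1.0 - lo_m2 / mu1
```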

  • If you need a number, rather than a formula, you might look into bootstrapping. –  Dec 15 '23 at 15:05
  • You need at least some measure of the dependence of the two samples. Or do you know that the samples are independent? –  Dec 15 '23 at 15:20
  • This should get you most of the way there. – dimitriy Dec 16 '23 at 02:29
  • @dimitriy The point of this question is that the Delta method doesn't apply. This problem is insoluble without making strong (parametric) distributional assumptions. – whuber Dec 16 '23 at 16:51
  • @mike For the Bootstrap, you need the individual values... – Michael M Dec 16 '23 at 17:11
  • @Michael You're right. But that's not necessarily the case for a parametric bootstrap. Regardless, because there's no evidence in the question that any parametric assumptions can be made and no evidence of any information about the joint distribution of the two means, one couldn't even get started. – whuber Dec 17 '23 at 16:41
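
Following up on the parametric-bootstrap suggestion in the comments, here is a minimal sketch that draws synthetic samples from normal distributions matching the given summary statistics. The normality and independence assumptions are exactly the strong parametric assumptions the comments warn about, not something established by the question:

```python
import numpy as np

def parametric_bootstrap_ci(n1, mean1, var1, n2, mean2, var2,
                            level=0.95, n_boot=10_000, seed=0):
    """Percentile CI for (m1 - m2)/m1 by parametric bootstrap.

    Assumes both populations are normal and the two samples are
    independent; only the summary statistics are needed.
    """
    rng = np.random.default_rng(seed)
    # Simulate n_boot replicate samples of each group.
    x1 = rng.normal(mean1, var1 ** 0.5, size=(n_boot, n1))
    x2 = rng.normal(mean2, var2 ** 0.5, size=(n_boot, n2))
    m1, m2 = x1.mean(axis=1), x2.mean(axis=1)
    ratios = (m1 - m2) / m1
    alpha = 1.0 - level
    return np.quantile(ratios, [alpha / 2, 1.0 - alpha / 2])
```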

0 Answers