
I have a set of measurements from hypothesized binomial distributions with the same p but with different n values. Given the data, I'd like to assess how confident I should be that the p value is indeed the same for all measurements.

If it helps, the actual data is

  • 314 successes out of 5334 trials
  • 613 successes out of 8219 trials
  • 923 successes out of 9785 trials
  • 822 successes out of 8387 trials
  • 461 successes out of 7628 trials
  • 360 successes out of 4787 trials
  • 454 successes out of 8007 trials
  • 636 successes out of 8258 trials
  • 526 successes out of 9313 trials
Pearson chi-square test on a 9 × 2 contingency table. Of course the best one is the Fisher exact test, but it would take a long time (days?). – user158565 Jun 03 '17 at 20:13

1 Answer

You can't show they are the same; at best you can sometimes show they're different. (What if all the p's differ by minuscule amounts?)

Failure to reject doesn't mean the null is exactly true.

That said, those are obviously not a consistent set of proportions; you can tell just by looking at the numbers: the $x$'s (counts) are many $\sqrt{x}$'s apart (and for small proportions of large total counts, $\sqrt{x}$ is relatively close to the standard deviation of the expected count).
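A quick back-of-envelope version of that check, in Python (a sketch; the variable names are my own), compares each observed count to its expected count under a common pooled proportion:

```python
from math import sqrt

# (successes, trials) for each measurement, from the question
data = [(314, 5334), (613, 8219), (923, 9785), (822, 8387), (461, 7628),
        (360, 4787), (454, 8007), (636, 8258), (526, 9313)]

# Pooled estimate of p under the null of a common proportion
p_pool = sum(x for x, _ in data) / sum(n for _, n in data)

# Approximate z-score of each count against its expected value;
# sqrt(expected) approximates the SD when p is small (near-Poisson counts)
for x, n in data:
    expected = n * p_pool
    z = (x - expected) / sqrt(expected)
    print(f"observed {x:4d}, expected {expected:7.1f}, z = {z:+.1f}")
```

Several of the counts sit many standard deviations from their expected values, which is exactly the "many $\sqrt{x}$'s apart" observation above.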

If you must do a test anyway, I'd suggest a chi-squared test or a G-test for homogeneity of proportions, effectively a test of independence (the two give very similar results). Personally, though, I'd just display the proportions $\pm$ one standard error: a few instances of non-overlap would be a fairly good indication of differences, and that is clearly what will happen here:
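Both tests can be run on the 9 × 2 table of successes and failures; a minimal sketch, assuming SciPy is available:

```python
from scipy.stats import chi2_contingency

# (successes, trials) for each measurement, from the question
counts = [(314, 5334), (613, 8219), (923, 9785), (822, 8387), (461, 7628),
          (360, 4787), (454, 8007), (636, 8258), (526, 9313)]

# 9 x 2 table: each row is (successes, failures) for one measurement
table = [[x, n - x] for x, n in counts]

# Pearson chi-squared test of homogeneity/independence
chi2, p_chi2, dof, _ = chi2_contingency(table)

# G-test (likelihood-ratio): same function with the log-likelihood statistic
g, p_g, _, _ = chi2_contingency(table, lambda_="log-likelihood")

print(f"chi-squared = {chi2:.1f}, G = {g:.1f}, df = {dof}")
print(f"p-values: {p_chi2:.3g} (Pearson), {p_g:.3g} (G-test)")
```

With 9 groups the test has 8 degrees of freedom, and for these data both p-values come out vanishingly small, consistent with the eyeball check above.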

*[Plot of sample proportions $\pm$ one standard error]*

... and we see that's the case. They're fairly similar (all in the same ballpark), but they're not consistent with equality of proportions.
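The no-overlap check behind that plot can also be reproduced numerically; a sketch (comparing the extreme intervals is just one illustrative choice):

```python
from math import sqrt

# (successes, trials) for each measurement, from the question
data = [(314, 5334), (613, 8219), (923, 9785), (822, 8387), (461, 7628),
        (360, 4787), (454, 8007), (636, 8258), (526, 9313)]

# Sample proportion +/- one standard error for each measurement
intervals = []
for x, n in data:
    p_hat = x / n
    se = sqrt(p_hat * (1 - p_hat) / n)
    intervals.append((p_hat - se, p_hat + se))
    print(f"{p_hat:.4f} +/- {se:.4f}")

# The interval with the highest lower bound lies entirely above the
# interval with the lowest lower bound, so the two do not overlap
lo_of_max = max(intervals)[0]
hi_of_min = min(intervals)[1]
print("extreme intervals overlap:", lo_of_max <= hi_of_min)
```

The gap between the extreme intervals is wide (roughly 0.06 vs 0.10), so the conclusion doesn't hinge on using exactly one standard error.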

– Glen_b