
If I take a 95% confidence interval $I_k$, this means that if I sample 100 confidence intervals $\{I_1, \cdots, I_{100}\}$, then 95 of the $I_k$'s are expected to contain the true parameter. Of course, the true parameter value $p$ is fixed and does not move no matter what, so it is our job to estimate what the value of $p$ is. Hence, given a realized confidence interval $I_k$, the probability that $p \in I_k$ is either 0 or 1.

In practice, however, is it wrong to say "with 95% probability, parameter $p \in I_k$"? Following the frequentist mindset, if I pick a confidence interval at random, then there is a 0.95 chance that this interval contains $p$. Therefore, it does not seem incorrect to make the taboo claim: the probability that the CI contains the true parameter $p$ is 0.95. Is there a latent error here that will cause a serious problem with this approach?

James C
    You don't have to fix $p.$ The chance that an exact 95% CI covers its parameter is 95% by definition. You can apply this to all the CIs you ever compute in your lifetime even though they might all be in different circumstances. There's a risk of getting confused by fuzzy language. Some people find it helpful to refer to a confidence interval procedure to avoid the mistakes arising from overloading the meaning of "confidence interval" (as referring both to the procedure and its outcome). – whuber Jan 16 '23 at 20:15
  • @whuber Thank you. I have another question. If I generate 100 of these 95% confidence intervals, then each $I_k$ would have a different length. The expected number of CIs containing the true parameter (which is 95) does not change, right? – James C Jan 16 '23 at 21:00
  • Right. That's because the event "this CI contains the parameter" always has a chance of 95% to occur, by construction, and therefore is perfectly modeled by the flip of an unfair coin with a 95% chance of heads. In other words, the laws of probability don't care about the inner details of how you determine a binary outcome. – whuber Jan 16 '23 at 22:35
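The coin-flip view in the comments can be checked directly by simulation. The sketch below (assuming a hypothetical normal population with true mean $p = 5$ and the standard t-based interval for a mean with unknown variance) draws many samples, builds a 95% CI from each, and counts how often the fixed $p$ is covered; the interval lengths vary from sample to sample, but the coverage frequency stays near 0.95:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: fixed true mean p of a normal population.
p, sigma, n = 5.0, 2.0, 30
num_intervals = 10_000

t_crit = 2.0452  # 97.5th percentile of the t distribution with n-1 = 29 df

covered = 0
lengths = []
for _ in range(num_intervals):
    sample = rng.normal(p, sigma, n)
    xbar = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)  # estimated SE, so lengths vary
    lo, hi = xbar - t_crit * se, xbar + t_crit * se
    lengths.append(hi - lo)
    covered += lo <= p <= hi

print(f"coverage: {covered / num_intervals:.3f}")  # close to 0.95
print(f"interval lengths range from {min(lengths):.2f} to {max(lengths):.2f}")
```

Each trial is effectively one flip of whuber's unfair coin: the binary event "this CI contains $p$" has probability 0.95 by construction, regardless of how wide the particular interval turned out to be.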
