
Please check that my understanding of hypothesis testing, confidence intervals, and their relation to the prior on population mean $\mu$ is correct.

Let $X_i\sim N(\mu, \sigma^2)$ be IID samples for $i = 1, \ldots, n$ (or let the $X_i$ be any IID random variables with mean $\mu$ and variance $\sigma^2$). Suppose we don't know $\mu$ but we do know $\sigma$, and we want to reason about $\mu$.

We know that the sample mean $\bar{X} = \sum_i X_i/n$ follows $N(\mu, \sigma^2/n)$ (or, in the non-normal $X_i$ case, that for large $n$ it does approximately, by the CLT).
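
As a sanity check (my own illustration, with arbitrary values of $\mu$, $\sigma$, and $n$, not part of the question itself), a quick simulation of the sampling distribution of $\bar{X}$:

```python
import numpy as np

# Arbitrary illustrative values, not taken from the question
mu, sigma, n, reps = 2.0, 3.0, 50, 100_000
rng = np.random.default_rng(0)

# Draw `reps` datasets of size n each and compute their sample means
xbars = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

# The empirical mean/std of the sample means should be close to mu and sigma/sqrt(n)
print(xbars.mean(), xbars.std())
print(mu, sigma / np.sqrt(n))
```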

For hypothesis testing, we use this fact to make a statement about $\bar{X}$ conditioned on $\mu$ and $\sigma$. Suppose a null hypothesis, say $\mu = 0$. Then we know $P(\bar{X}\mid\mu,\sigma)$ is $N(\mu, \sigma^2/n)$, so if $\bar{X}$ lies far enough in the tails, we reject the null. The point is that the reasoning for hypothesis testing follows from the distribution of $\bar{X}\mid\mu, \sigma$.
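
Concretely, the rejection rule I have in mind looks something like this sketch (a two-sided $z$-test at level $0.05$ with made-up data; none of these numbers come from a real problem):

```python
import numpy as np
from scipy.stats import norm

sigma, alpha = 3.0, 0.05                          # known sigma, chosen level
x = np.array([0.8, -1.2, 2.5, 0.3, 1.9, -0.4])    # made-up data
n, xbar = len(x), x.mean()

# Under H0: mu = 0, z = xbar / (sigma / sqrt(n)) is standard normal
z = xbar / (sigma / np.sqrt(n))
p_value = 2 * norm.sf(abs(z))                     # two-sided tail area

print(z, p_value, p_value < alpha)                # reject H0 if True
```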

For confidence intervals on $\mu$, the situation is flipped via Bayes' rule, and requires a prior $P(\mu\mid\sigma)$ to be specified up to a constant. To see this, first recall the procedure: e.g., we are 95% confident that $\mu\in [\bar{X} - \beta\,\sigma/\sqrt{n},\; \bar{X} + \beta\,\sigma/\sqrt{n}]$ for $\beta = F^{-1}(1-(1-0.95)/2) = F^{-1}(0.975) \approx 1.96$, where $F$ is the standard normal CDF. So it seems to me that we are finding probabilities of $\mu$ conditioned on $\bar{X}, \sigma$, a flip of the hypothesis-testing framework via Bayes' rule.
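
In code, that interval would be computed like this (again a sketch with made-up data and known $\sigma$):

```python
import numpy as np
from scipy.stats import norm

sigma = 3.0
x = np.array([0.8, -1.2, 2.5, 0.3, 1.9, -0.4])    # made-up data
n, xbar = len(x), x.mean()

beta = norm.ppf(1 - 0.05 / 2)                     # F^{-1}(0.975) ~ 1.96
half_width = beta * sigma / np.sqrt(n)
print(xbar - half_width, xbar + half_width)       # 95% CI endpoints
```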

To write this rigorously, $P(\mu \mid \bar{X}, \sigma) \propto P(\bar{X}\mid\mu, \sigma)\, P(\mu\mid\sigma)$. Since $P(\bar{X}\mid\mu, \sigma)$ is $N(\mu, \sigma^2/n)$, the confidence-interval formulation requires that the prior $P(\mu\mid\sigma)$ be constant (a flat, improper prior).
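
Spelling out the computation I have in mind under the flat prior $P(\mu\mid\sigma)\propto 1$:

$$
P(\mu \mid \bar{X}, \sigma)\;\propto\; P(\bar{X}\mid\mu,\sigma)\,P(\mu\mid\sigma)\;\propto\; \exp\!\left(-\frac{n(\bar{X}-\mu)^2}{2\sigma^2}\right)\cdot 1,
$$

which, read as a function of $\mu$, is the kernel of $N(\bar{X}, \sigma^2/n)$, so the 95% credible interval $\bar{X} \pm 1.96\,\sigma/\sqrt{n}$ coincides with the confidence interval above.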

Is this a correct formulation of the math underpinning confidence intervals of the population mean under normality assumptions?


Related question: Trouble relating the Central Limit Theorem to confidence intervals

The answer in the related question shows a formulation of the CI based only on the normal distribution of $\bar{X}$, without using a prior. This is worth understanding, but it doesn't answer the question above.

travelingbones
  • If I'm not mistaken, in the Bayesian framework we do not have confidence intervals and z-scores; we have credible intervals and Bayes factors. – Fiodor1234 Jun 02 '23 at 16:41
  • Answer is here: https://stats.stackexchange.com/questions/14721/how-is-the-bayesian-framework-better-in-interpretation-when-we-usually-use-uninf – travelingbones Jun 06 '23 at 22:43

0 Answers