5

I googled "uniform prior" and got a link to Prior probability, which uses the term without an explanation or a definition.

Another link is to Quora, which does not give a concrete example.

Can anyone give a concrete example, such as coin flipping or dice tossing, to illustrate the uniform prior?

czlsws

3 Answers

4

Let's say you don't know the probability of heads, $p$, of a coin. You decide to conduct an experiment to estimate it via Bayesian analysis. This requires you to choose a prior, and in general you're free to pick any feasible one. If you don't know, or don't want to assume, anything about $p$, you can say that it is uniformly distributed on $[0,1]$, i.e. $f_P(p)=1$ for $0\leq p\leq 1$ and $0$ otherwise. Informally, this amounts to saying that every value of $p$ in $[0,1]$ is equally likely. This prior distribution is a uniform prior.
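Here is a minimal sketch of the resulting update (`true_p` and the flip count are made-up values used only to simulate data). With a uniform Beta(1, 1) prior and Bernoulli observations, the posterior is the conjugate Beta(1 + heads, 1 + tails):

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)

# Simulate 20 flips of a coin whose true heads probability is hidden
# from the analyst (0.7 is a made-up value, used only to generate data).
true_p = 0.7
flips = rng.random(20) < true_p
heads, tails = int(flips.sum()), int((~flips).sum())

# A uniform prior on p is Beta(1, 1); with Bernoulli data the posterior
# is conjugate: Beta(1 + heads, 1 + tails).
posterior = beta(1 + heads, 1 + tails)
lo, hi = posterior.interval(0.95)
print(f"observed {heads} heads, {tails} tails")
print(f"posterior mean of p: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```

With only 20 flips the credible interval is still wide; as more data arrive, the influence of the uniform prior washes out.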

You can also choose other priors that focus on different regions of $[0,1]$. For example, with the prior $f_P(p)=\frac{3}{2}\left(1-(1-2p)^2\right)$ for $0\leq p\leq 1$, you assume that $p$ is more likely to be around $0.5$ than near the edge cases $p=0$ and $p=1$.
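As a side note, this density simplifies to $6p(1-p)$, which is exactly the Beta(2, 2) density, so it is still conjugate for coin flipping. A quick numerical check (a sketch using NumPy/SciPy):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import beta

p = np.linspace(0.0, 1.0, 1001)
f = 1.5 * (1.0 - (1.0 - 2.0 * p) ** 2)  # the prior from the answer

# It coincides with the Beta(2, 2) density 6 p (1 - p) ...
print(np.allclose(f, beta(2, 2).pdf(p)))                            # True
# ... and it integrates to 1 over [0, 1], so it is a valid density.
print(quad(lambda t: 1.5 * (1.0 - (1.0 - 2.0 * t) ** 2), 0, 1)[0])  # ~1.0
```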

gunes
  • Thank you so much! Does the function $f_P(p)$ represent the random variable, i.e. the Bernoulli random variable in this case? – czlsws Jul 28 '19 at 23:02
  • No, the Bernoulli RV is the outcome of a coin flip here. $f_P(p)$ represents the density of $P$, which is the probability of heads, and it is used as the parameter of the Bernoulli RV. – gunes Jul 29 '19 at 03:55
3

The notion of a uniform prior, understood as a prior with constant density $\pi(\theta)=c$, is not well-defined (or even meaningful), as it depends on both

  1. the dominating measure that determines the density function of the prior (i.e., how one measures volume);
  2. the parameterisation $\theta$ of the sampling model $f(x|\theta)$ for which the prior is constructed, e.g. variance versus precision.

If either entry is modified, the density of the prior changes as well and stops being constant.

In the earlier answer,

  1. the dominating measure may be the Lebesgue measure, $\text{d}p$, which is constant over the unit interval, or the Haldane measure, $[p(1-p)]^{-1}\text{d}p$, which diverges at zero and one. For this latter measure there is no possible uniform prior, as the measure is only $\sigma$-finite, i.e., it does not have finite total mass and so cannot be normalised into a probability distribution; and
  2. the Bernoulli model can be parameterised in $p$, in $q=\sqrt{p}$, or in $r=\log(p)$. A uniform prior on $p$ does not induce a uniform prior on $q$ (or conversely), as the simulation below illustrates, and there is no possible uniform prior on $r$, which varies over $(-\infty,0)$.
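A minimal simulation of the second point (sample size and seed are arbitrary choices): drawing $p$ uniformly on $[0,1]$ and transforming to $q=\sqrt{p}$ gives draws whose density is $f_Q(q)=2q$ by the change-of-variables formula, which is clearly not uniform:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw p uniformly on [0, 1] and transform to q = sqrt(p).
p = rng.random(100_000)
q = np.sqrt(p)

# If q were uniform, each of the ten equal-width bins would hold ~10%
# of the draws; instead the shares grow with q, matching f_Q(q) = 2q
# (the expected mass of bin [a, b) is b^2 - a^2).
counts, edges = np.histogram(q, bins=10, range=(0.0, 1.0))
for a, b, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{a:.1f}, {b:.1f}): {c / len(q):.3f}")
```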
Xi'an
  • +1. Another way to express this, perhaps a little more forcibly, is that all (absolutely continuous) priors are uniform. – whuber Jul 28 '19 at 16:53
0

When the prior distribution $\pi$ of the parameter $\theta$ to be estimated is the uniform distribution, i.e. $\theta\sim U(a,b)$, we refer to $\pi$ as a uniform (or uninformative) prior. I'm not sure what there is to misunderstand here beyond the basics of Bayesian inference and the uniform distribution.

The best way to get a feel for the (discrete) uniform distribution is via a Monte Carlo sample of fair die rolls. The probability of every outcome is equal, $P(X=x_i)=1/6$ for $i=1,\dots,6$, and the histogram of the empirical distribution should approximate a horizontal line (i.e., it is uniform).
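For instance, a minimal simulation (the number of rolls is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# 60,000 rolls of a fair six-sided die: every empirical frequency
# should hover near 1/6, so a histogram of the outcomes is roughly flat.
rolls = rng.integers(1, 7, size=60_000)  # upper bound is exclusive
values, counts = np.unique(rolls, return_counts=True)
for v, c in zip(values, counts):
    print(f"P(X={v}) ≈ {c / len(rolls):.4f}   (exact: {1/6:.4f})")
```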

Digio