
In section 6.1 of the notes Stat 3701 Lecture Notes: Bayesian Inference via Markov Chain Monte Carlo (MCMC) by Charles J. Geyer, the author states

Suppose we have a probability or expectation we want to estimate. Probability is a special case of expectation: if $g$ is a zero-or-one valued function, then $$ E\{ g(X) \} = \Pr\{ g(X) = 1 \} $$ and any probability can be written this way. So we just consider expectations.

I would assume that $g$ in this context is the function $$ g(X) = \begin{cases} 1 & \ \text{if} \ \ X \in A \\ 0 & \ \text{if} \ \ X \notin A \end{cases} $$ such that $$ \Pr\{ g(X) = 1 \} = \Pr\{ X \in A \} $$ where $A$ is some subset of the range (image) of $X$. Is this true for any random variable $X$?
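As a quick numerical sanity check of this reading (my own example, not from Geyer's notes), take $X \sim N(0,1)$ and $A = (1, \infty)$; the sample mean of the indicator $g(X)$ should approach $\Pr\{X \in A\}$:

```python
import math
import random

# Monte Carlo check (a hypothetical example, not from Geyer's notes):
# estimate Pr{X in A} as the sample mean of the indicator g(X),
# with X ~ N(0, 1) and A = (1, infinity).
random.seed(0)

def g(x):
    """Indicator of the event A = {x > 1}: returns 1 if x in A, else 0."""
    return 1 if x > 1 else 0

n = 100_000
estimate = sum(g(random.gauss(0.0, 1.0)) for _ in range(n)) / n

# Exact tail probability Pr{X > 1} for a standard normal.
exact = 0.5 * math.erfc(1.0 / math.sqrt(2.0))

print(estimate, exact)  # the two should agree to roughly two decimal places
```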

mhdadk

2 Answers


Yes, you are correct. You can use an indicator function to define a random variable, say $Y = I_A(X)$, and then $Y$ follows a Bernoulli distribution with "probability of success" $p = \Pr(Y=1) = \Pr(X \in A)$. For a Bernoulli distribution, the expected value equals the probability of success: $E[Y] = 1 \times p + 0 \times (1-p) = p$.
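For a fully discrete illustration of this Bernoulli view (my own example, not part of the answer above), take $X$ to be a fair die and $A = \{2, 4, 6\}$, and compute $E[Y]$ exactly:

```python
from fractions import Fraction

# Discrete sanity check (a hypothetical example): X is a fair six-sided
# die, A = {2, 4, 6}, and Y = I_A(X) is Bernoulli(p) with p = Pr(X in A).
support = [1, 2, 3, 4, 5, 6]
A = {2, 4, 6}

# p = Pr(X in A), computed exactly with rational arithmetic.
p = Fraction(sum(1 for x in support if x in A), len(support))

# E[Y] = 1 * p + 0 * (1 - p), the Bernoulli expectation.
expected_Y = 1 * p + 0 * (1 - p)

print(p, expected_Y)  # both are 1/2
```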

Tim
    Peter Whittle (1927-2021) made this a main theme of his engaging text which passed through two titles, three publishers and four editions between 1970 and 2000. Probability via Expectation was the later title. https://link.springer.com/book/10.1007/978-1-4612-0509-8 – Nick Cox Dec 30 '21 at 13:22
    @NickCox it turns out that another book adopts this approach as well: A User's Guide to Measure Theoretic Probability by David Pollard (2001). Not only that, but Pollard even cites excerpts from Whittle's book to motivate this approach. Pollard calls this notation "de Finetti's notation". – mhdadk Nov 20 '23 at 21:45

The expectation is a Lebesgue integral with respect to some probability measure $\mu$ on $(\Omega,\mathcal{F},\mu)$, $$\int f \, d\mu,$$ where $f$ is an $\mathcal{F}$-measurable function; in probability theory, these are our random variables $X$.

By construction, the Lebesgue integral of an indicator function is the measure of the set associated with that indicator, $$ \int I_A \, d\mu = \mu(A), $$ so under $(\Omega,\mathcal{F},\mathbb{P})$, $$\mathbb{E}[I_{\{X \in A\}}] = \mathbb{P}(X\in A) = \mathbb{P}(X^{-1}(A)).$$
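To make the last equality concrete on a finite measure space (my own construction, not part of the answer above): with a three-point $\Omega$, the integral of the indicator over $\Omega$ equals the measure of the preimage $X^{-1}(A)$:

```python
from fractions import Fraction

# Finite-measure-space illustration (a hypothetical example):
# Omega = {w1, w2, w3} with probability measure P, and X: Omega -> R.
P = {"w1": Fraction(1, 2), "w2": Fraction(1, 4), "w3": Fraction(1, 4)}
X = {"w1": -1.0, "w2": 2.0, "w3": 3.0}

def in_A(x):
    """Indicator condition for A = (0, infinity)."""
    return x > 0

# E[I_{X in A}] as a Lebesgue integral over Omega:
# sum of I_A(X(w)) * P(w), i.e. sum P(w) over w with X(w) in A.
E_indicator = sum(P[w] for w in P if in_A(X[w]))

# P(X^{-1}(A)): the measure of the preimage set {w : X(w) in A}.
preimage = {w for w in P if in_A(X[w])}
measure_preimage = sum(P[w] for w in preimage)

print(E_indicator, measure_preimage)  # both equal 1/2
```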