3

Given two random variables $X_1$ and $X_2$ (defined on the same sample space $\mathcal{X}$) such that

$$\mathbb{E}[X_1]=\int_{\mathcal{X}}xf_1(x)dx > \mathbb{E}[X_2]=\int_{\mathcal{X}}x f_2(x)dx$$

Can we say that $\mathbb{E}[g(X_1)] < \mathbb{E}[g(X_2)]$, given that the function $g(\cdot)$ is decreasing?

I got stuck immediately after applying the law of the unconscious statistician, $\mathbb{E}[g(X)] = \int_{\mathcal{X}}g(x) f_X(x)\,dx$, and don't know how to proceed.

Maybe this is a stupid question, because if $X_1 \sim \mathcal{N}(\mu_1,\sigma_1^2)$ and $X_2 \sim \mathcal{N}(\mu_2,\sigma_2^2)$ with $\mu_1 > \mu_2$, we can choose the variances so that the means of the lognormal variables $-\log(X_1)$ and $-\log(X_2)$, namely $-\exp(\mu_1+0.5\sigma_1^2)$ and $-\exp(\mu_2+0.5\sigma_2^2)$, stand in any relation we like.

Rokai
  • Perhaps you can check stochastic ordering: for an increasing function $h$, $X\prec Y\implies \mathbf{E}[h(X)]\leq \mathbf{E}[h(Y)]$. – User1865345 Mar 17 '24 at 18:38
  • Your example does not work because $X_1$ can take negative values (check the definition of the lognormal distribution again). – Zhanxiong Mar 17 '24 at 19:18
  • Do you really mean "$E[g(X_1)] \color{red}{>} E[g(X_2)]$ given function $g(\cdot)$ is decreasing"? Otherwise just take $g(x) = -x$, then you easily meet the "$<$" requirement. – Zhanxiong Mar 18 '24 at 02:40

2 Answers

7

For the reason I stated in my comment, I assume what you are really interested in is finding an example such that $E[g(X_1)] > E[g(X_2)]$ for some decreasing function $g$ and random variables $X_1$, $X_2$ with $\mu_1 = E[X_1] > \mu_2 = E[X_2]$.

It is very easy to construct such an example using binary random variables: the idea is that you can tweak the probability mass at $0$, the magnitude of the other (non-zero) value, and the "speed" at which $g$ decreases.

For instance, let $X_1$ follow the distribution $P[X_1 = 0] = 0.9, P[X_1 = 10000] = 0.1$ and let $X_2$ follow the distribution $P[X_2 = 0] = P[X_2 = 900] = 0.5$. Clearly $\mu_1 = 1000 > \mu_2 = 450$. Now take $g(x) = -\sqrt{x}$, $x \geq 0$, then $g$ is decreasing on its domain, and \begin{align*} E[g(X_1)] = -100 \times 0.1 = -10 > E[g(X_2)] = -30 \times 0.5 = -15. \end{align*}
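A quick numerical check of this example (a minimal Python sketch; the distributions and $g$ are exactly the ones above):

```python
import numpy as np

# X1: mass 0.9 at 0 and 0.1 at 10000; X2: mass 0.5 each at 0 and 900
x1_vals, x1_probs = np.array([0.0, 10000.0]), np.array([0.9, 0.1])
x2_vals, x2_probs = np.array([0.0, 900.0]), np.array([0.5, 0.5])
g = lambda x: -np.sqrt(x)   # decreasing on [0, inf)

print(x1_vals @ x1_probs, x2_vals @ x2_probs)        # 1000.0 > 450.0
print(g(x1_vals) @ x1_probs, g(x2_vals) @ x2_probs)  # -10.0 > -15.0
```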

After some modification, the example you proposed can also serve as a counterexample. Instead of using $g(x) = -\log(x)$ (which is undefined when $X_i$ is negative), try $g(x) = -e^x$. By definition, if $X_i \sim N(\mu_i, \sigma_i^2)$, then $-g(X_i) = e^{X_i} \sim \text{Lognormal}(\mu_i, \sigma_i^2)$, whence $E[g(X_i)] = -\exp(\mu_i + \sigma_i^2/2)$. Therefore, even when $\mu_1 > \mu_2$, if $\sigma_2^2$ is sufficiently larger than $\sigma_1^2$, one clearly sees $E[g(X_1)] > E[g(X_2)]$.
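Here is a small sketch of that lognormal version, comparing the closed-form means with a Monte Carlo estimate; the particular parameter values ($\mu_1 = 1$, $\sigma_1 = 0.1$, $\mu_2 = 0$, $\sigma_2 = 2$) are my own illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, s1 = 1.0, 0.1   # mu1 > mu2, but sigma1 much smaller than sigma2
mu2, s2 = 0.0, 2.0

# Closed form: E[g(X_i)] = -exp(mu_i + sigma_i^2 / 2) for g(x) = -e^x
Eg1 = -np.exp(mu1 + s1**2 / 2)   # about -2.73
Eg2 = -np.exp(mu2 + s2**2 / 2)   # about -7.39

# Monte Carlo check
x1 = rng.normal(mu1, s1, 1_000_000)
x2 = rng.normal(mu2, s2, 1_000_000)
print(Eg1, np.mean(-np.exp(x1)))   # the two numbers should roughly agree
print(Eg2, np.mean(-np.exp(x2)))
# mu1 > mu2, and yet E[g(X1)] > E[g(X2)]
```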

In conclusion, examples that meet your constraint are fairly trivial to construct. As @User1865345 mentioned in the comments, a more interesting problem is how the stochastic ordering between two random variables is preserved by expectation: that is, $P[X_1 \leq x] \leq P[X_2 \leq x]$ for all $x$ implies that, for any nondecreasing function $g$, $E[g(X_1)] \geq E[g(X_2)]$. A proof of this result can be found in this answer.
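As a quick sanity check of that ordering statement (the shifted-normal pair below is my own illustrative choice, not from the linked answer): if $X_1 = X_2 + 1$, then $P[X_1 \leq x] \leq P[X_2 \leq x]$ for all $x$, and the mean ordering is preserved for any nondecreasing $g$:

```python
import numpy as np

rng = np.random.default_rng(1)
x2 = rng.normal(0.0, 1.0, 1_000_000)
x1 = x2 + 1.0   # X1 = X2 + 1, so P[X1 <= x] <= P[X2 <= x] for all x

# Three nondecreasing choices of g; the mean ordering holds for each
for g in (np.tanh, np.exp, lambda x: np.minimum(x, 0.0)):
    print(np.mean(g(x1)) >= np.mean(g(x2)))   # True, True, True
```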

Zhanxiong
  • The fact that expectation preserves the order is applicable not only to the class of nondecreasing functions, but (more interestingly) also to other general classes of functions. Various papers on this were written from the mid-1970s to the early 1980s. (And yes, +1.) – User1865345 Mar 18 '24 at 06:20
5

A counterexample: $\mathcal{X} = \{0,1,3\}$, $f_1(x) = \{0.5,0,0.5\}$, $f_2(x) = \{0,1,0\}$, and $g(x) = \{5,2,1\}$.

$$\mathbb{E}_1[X] = 1.5 > \mathbb{E}_2[X] = 1$$

$$\mathbb{E}_1[g(X)] = 3 > \mathbb{E}_2[g(X)] = 2$$
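For concreteness, the same arithmetic in a few lines of Python (a minimal sketch, encoding the pmfs and $g$ as dictionaries):

```python
# pmfs and g on the support {0, 1, 3}
f1 = {0: 0.5, 1: 0.0, 3: 0.5}
f2 = {0: 0.0, 1: 1.0, 3: 0.0}
g  = {0: 5.0, 1: 2.0, 3: 1.0}   # decreasing in x

E1_X  = sum(x * p for x, p in f1.items())      # 1.5
E2_X  = sum(x * p for x, p in f2.items())      # 1.0
E1_gX = sum(g[x] * p for x, p in f1.items())   # 3.0
E2_gX = sum(g[x] * p for x, p in f2.items())   # 2.0
print(E1_X, E2_X, E1_gX, E2_gX)
```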

Clearly there is plenty of room for tweaking the example if you don't like the zero probabilities!

jbowman