
Suppose a sequence of random variables $(X_n)_n$ converges in distribution to a law with mean $\bar{\mu}$ and variance $\bar{\sigma}^2$, formally $X_n \stackrel{d}{\to} \mathcal{L}(\bar{\mu}, \bar{\sigma}^2)$. Further assume the moments of the sequence and of the limiting distribution are finite: $\bar{\mu}, \bar{\sigma}^2 < \infty$ and $\mu_n, \sigma_n^2 < \infty$ for all $n$.

Does it then hold that $\mathrm{E}[X_n] = \mu_n \to \bar{\mu}$ and $\mathrm{Var}[X_n] = \sigma_n^2 \to \bar{\sigma}^2$?

My intuition is as follows: for large $n$, $X_n$ is approximately distributed as $\mathcal{L}(\bar{\mu}, \bar{\sigma}^2)$, so when we compute its expectation $\mu_n$ we should find approximately $\bar{\mu}$.

Is there a formal statement that corroborates or invalidates this intuition?
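
To probe this numerically, here is a minimal simulation sketch (numpy; the choice of Exp(1) draws and the replication counts are my own, purely illustrative) of a case where the intuition does hold: by the CLT, $X_n = \sqrt{n}(\bar Y_n - 1)$ for $Y_i \sim \mathrm{Exp}(1)$ converges in distribution to $\mathcal N(0,1)$, and its mean and variance equal the limit's for every $n$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A case where the intuition holds: by the CLT,
# X_n = sqrt(n) * (mean of n Exp(1) draws - 1)  -->d  N(0, 1),
# and here E[X_n] = 0 and Var[X_n] = 1 exactly for every n.
def sample_X_n(n, reps=10_000):
    y = rng.exponential(scale=1.0, size=(reps, n))
    return np.sqrt(n) * (y.mean(axis=1) - 1.0)

for n in (10, 100, 1000):
    x = sample_X_n(n)
    print(f"n={n:5d}  mean ~ {x.mean():+.3f}  var ~ {x.var():.3f}")
```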

NoVariation
  • To help your intuition, let $X$ have the limiting distribution and let $Y$ have infinite expectation. Consider the sequence of variables $(1-1/n)X+(1/n)Y.$ – whuber Mar 04 '21 at 18:22
  • Assume the mean and variance are bounded; I'll update my question. – NoVariation Mar 05 '21 at 08:36
  • Your update doesn't work: all it says is to assume all variances are finite. As a counterexample, let $Y$ have unit variance and consider the sequence of mixture distributions of $X$ and $nY$ with weight $1/n$ on $nY.$ Each of these mixtures has finite variance but their variances diverge. To block that, you need to assume there exist (finite) numbers $N$ and $M$ for which $n\ge N$ implies $\sigma_n\le M.$ By varying this counterexample you can construct sequences $(X_n)$ where none of the moments converges yet $(X_n)$ converges in distribution. – whuber Mar 05 '21 at 14:37
  • Maybe this post will help. Apparently, for convergence in distribution to imply convergence in moments, you need uniform integrability. Maybe you can check this condition for your example. – PaulG Jan 16 '22 at 15:08
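
To make whuber's mixture counterexample from the comments concrete, here is a quick simulation sketch (numpy; taking $X, Y \sim \mathcal N(0,1)$ is my own illustrative choice): each mixture has finite variance $1 - 1/n + n$, yet the variances diverge while the mixture still converges in distribution to $\mathcal N(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(1)

# whuber's mixture: with probability 1 - 1/n draw X ~ N(0,1),
# with probability 1/n draw n*Y with Y ~ N(0,1).
# Exact variance: (1 - 1/n)*1 + (1/n)*n^2 = 1 - 1/n + n, which diverges,
# although the mixture still converges in distribution to N(0, 1).
def sample_mixture(n, reps=200_000):
    contaminated = rng.random(reps) < 1.0 / n
    x = rng.standard_normal(reps)          # the "X" component
    y = n * rng.standard_normal(reps)      # the rare "n*Y" component
    return np.where(contaminated, y, x)

for n in (10, 100, 1000):
    z = sample_mixture(n)
    # the sample variance is noisy: the inflating component is rare but huge
    print(f"n={n:5d}  sample var ~ {z.var():10.1f}  exact var = {1 - 1/n + n:8.1f}")
```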

1 Answer


For a counterexample, let $(X_n)_{n \in \mathbb N_{\geq 1}} \mathrel{:=} (n \cdot \mathbf 1_{[0, n^{-2}]})_{n \in \mathbb N_{\geq 1}}$ be a sequence of random variables, each defined on the probability space $\left([0, 1], \mathcal B(\mathbb R)\big|_{[0,1]}, \mathbb P \right)$ with $\mathbb P \mathrel{:=} \mathop{\mathrm{Unif}}([0,1])$.

Since, for all $\varepsilon \in \mathbb R_{>0}$ and all $n > \varepsilon$,

$$ \mathbb P(|X_n - 0| > \varepsilon) = \mathbb P(X_n > \varepsilon) = \mathbb P(n \cdot \mathbf 1_{[0, n^{-2}]} > \varepsilon) = 1/n^2 \overset{n \to \infty}{\longrightarrow} 0 $$

we have $X_n \overset{p}{\to} 0$ and hence $X_n \overset{d}{\to} 0$, but

$$ \mathop{\mathbb E}\left[|X_n - 0|^r\right] = \mathop{\mathbb E}\left[X_n^r\right] = \mathop{\mathbb E}\left[n^r \cdot \mathbf 1_{[0, n^{-2}]}^r\right] = n^r \cdot 1/n^2 = n^{r-2} $$

for all $r \in \mathbb R_{\geq 1}$. In particular, $\mathop{\mathbb E}[X_n] = n^{-1} \to 0$ while $\mathop{\mathbb E}[X_n^2] = 1$, so $$\mathbb V[X_n] = \mathop{\mathbb E}[X_n^2] - \mathop{\mathbb E}[X_n]^2 = 1 - n^{-2} \overset{n \to \infty}{\longrightarrow} 1 \neq 0,$$ even though the limiting distribution (the point mass at $0$) has variance $0$.
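
A quick Monte Carlo check (a sketch; the seed and sample sizes are arbitrary) reproduces the exact moments $\mathop{\mathbb E}[X_n] = 1/n$ and $\mathbb V[X_n] = 1 - n^{-2}$:

```python
import numpy as np

rng = np.random.default_rng(2)

# X_n = n * 1{U <= 1/n^2} with U ~ Unif[0, 1]:
# exactly E[X_n] = 1/n -> 0 while Var[X_n] = 1 - 1/n^2 -> 1.
def sample_X_n(n, reps=1_000_000):
    u = rng.random(reps)
    return n * (u <= n**-2)

for n in (2, 10, 50):
    x = sample_X_n(n)
    print(f"n={n:3d}  mean ~ {x.mean():.4f} (exact {1/n:.4f})  "
          f"var ~ {x.var():.4f} (exact {1 - n**-2:.4f})")
```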

statmerkur