
Say $X_k$ is a non-negative sequence of random variables and it is known that it converges in mean to zero. It feels like it should also converge almost surely, since the only value a non-negative random variable can take and still average to zero is the value zero.

However, this explanation is kind of hand-wavy. Can this be proven more rigorously or disproven?

2 Answers


Unless I'm making an elementary mistake (entirely possible!), this does not hold, even for discrete random variables with finite support (contrary to another answer). Recall the second Borel–Cantelli lemma:

Second Borel–Cantelli Lemma: Let $A_1, A_2, \ldots$ be independent events. If $\sum_{i = 1}^\infty P(A_i) = \infty$, then $P(A_i \text{ occurs infinitely often}) = 1$.

Let $X_k$ be independent, such that $P(X_k = 1) = 1/k$ and $P(X_k = 0) = 1 - 1/k$. $E|X_k| \to 0$ so $X_k$ converges to $0$ in mean, but $\sum_k P(X_k = 1) = \infty$, and by independence the events $[X_k = 1]$ are independent. Hence with probability $1$, $X_k = 1$ occurs infinitely often; obviously for any $\omega$ such that $X_k(\omega) = 1$ infinitely often we cannot have $X_k(\omega) \to 0$, so the sequence almost surely does not converge to $0$.
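A quick simulation makes this counterexample concrete (a sketch in Python; the horizon of $10^5$ terms and the seed are arbitrary choices of mine):

```python
import random

def simulate_path(n, seed=0):
    """One sample path of independent X_k with P(X_k = 1) = 1/k, k = 1..n."""
    rng = random.Random(seed)
    return [1 if rng.random() < 1.0 / k else 0 for k in range(1, n + 1)]

path = simulate_path(100_000, seed=42)
ones = [k for k, x in enumerate(path, start=1) if x == 1]
# E|X_k| = 1/k -> 0, yet ones keep appearing far out in the sequence,
# just as the second Borel-Cantelli lemma predicts.
print(len(ones), ones[-1])
```

On a typical run the ones are sparse (on the order of $\log n$ of them) but the last one still shows up very late in the sequence; Borel–Cantelli guarantees that with probability one they never stop.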

guy

I thank user @guy for pointing out the mistake in my previous attempt. Instead of just deleting it, I will insert in its place a naive way to showcase what the second Borel–Cantelli lemma tells us, using the case that @guy considers: a sequence of independent Bernoulli$(1/k)$ random variables.

Consider the event $\{X_m = 0, X_{m+1} = 0, \ldots, X_k = 0\}$, i.e. that every variable in this stretch takes the value zero. Since the r.v.'s are independent, the probability of this event is the product of the individual probabilities

$$P\left(\bigcap_{i=m}^k \{X_i = 0\}\right) = \prod_{i=m}^kP(X_i=0)$$

$$=\left(1-\frac 1m\right)\cdot\left(1-\frac 1{m+1}\right)\cdot...\cdot \left(1-\frac 1{k-1}\right)\cdot\left(1-\frac 1{k}\right)$$

$$=\left(\frac {m-1}m\right)\cdot\left(\frac {m}{m+1}\right)\cdot...\cdot\left(\frac {k-2}{k-1}\right)\cdot\left(\frac {k-1}{k}\right) $$

$$=\frac {m-1}{k}$$

As $k\rightarrow \infty$ this probability goes to $0$. But this is the probability that the stretch of the sequence from $m$ to $k$ takes only the value zero. So we have just concluded that the probability of an all-zero tail is zero, no matter where we start that tail (i.e. no matter what or how large the value of $m$ is). Since each $X_i$ takes only the values $0$ and $1$, a realization converges to zero only if it is eventually zero, i.e. only if some tail is all zeros. Hence the sequence does not converge to zero almost surely, even though it converges in mean to zero (and hence also in probability).
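The telescoping identity above is easy to check exactly with rational arithmetic (a sketch in Python; the function name is my own):

```python
from fractions import Fraction

def all_zero_prob(m, k):
    """P(X_m = 0, ..., X_k = 0) for independent X_i with P(X_i = 0) = 1 - 1/i."""
    p = Fraction(1)
    for i in range(m, k + 1):
        p *= Fraction(i - 1, i)  # each factor is (i-1)/i = 1 - 1/i
    return p

# The product telescopes to (m - 1)/k, which vanishes as k grows:
print(all_zero_prob(5, 20))      # (5 - 1)/20 = 1/5
print(all_zero_prob(5, 20_000))  # 4/20000 = 1/5000
```

Using `Fraction` keeps the computation exact, so the telescoping result $(m-1)/k$ is verified with no floating-point error.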

What about some intuition here? I would try this: while the individual probability $P(X_k=1)$ tends to zero (and so the expected value of $X_k$ goes to zero too), it does not go to zero "fast enough": the probabilities $1/k$ are not summable. So when looking at the whole sequence of $X_k$'s, we cannot conclude that the sequence converges to zero with probability one.

  • Let $X_k = 1$ w.p. $1/k$ and $0$ otherwise, with the $X_k$ independent. By the second Borel-Cantelli lemma, $\sum_k P(X_k = 1) = \infty$ so with probability $1$, $X_k = 1$ infinitely often, hence $X_k$ does not tend to $0$ almost surely. But $E|X_k| = 1/k \to 0$. Does this not contradict your result? – guy Oct 27 '14 at 19:54
  • @guy Ah, the harmonic series, that gave Leibniz such torment! Most probably. Let me just review the lot, but I suspect I should delete this. Thanks for the contribution! – Alecos Papadopoulos Oct 27 '14 at 20:16