8

The question:

$X_n\stackrel{d}{\rightarrow}X$ and $Y_n\stackrel{d}{\rightarrow}Y \stackrel{?}{\implies} X_n+Y_n\stackrel{d}{\rightarrow}X+Y$

I know that this does not hold in general; Slutsky's theorem applies only when at least one of the sequences converges in probability to a constant.

However, are there instances in which it does hold?

For instance, does it hold if the sequences $X_n$ and $Y_n$ are independent?

mai
  • 797

3 Answers

5

Yes, independence is sufficient: The antecedent conditions here concern convergence in distribution for the marginal distributions of $\{ X_n \}$ and $\{ Y_n \}$. The reason that the implication does not hold generally is that there is nothing in the antecedent conditions that deals with the statistical dependence between the elements of the two sequences. If you were to impose independence of the sequences then that would be sufficient to ensure convergence in distribution of the sum.

(Alecos has added an excellent answer below that proves this result using characteristic functions. Asymptotic independence is also sufficient for this implication, since the same limiting decomposition of the characteristic functions occurs.)
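
For concreteness, here is a minimal simulation sketch (an illustration added here, not part of the original answer) of the independent case, taking the limits $X$ and $Y$ to be independent as well (see the clarification in the comments below); the particular distributions are arbitrary choices.

```python
# Independent sequences with independent limits:
#   X_n ~ N(1/n, 1 + 1/n) -> X ~ N(0,1),   Y_n ~ Exp(1) + 1/n -> Y ~ Exp(1).
# The empirical CDF of X_n + Y_n should be close to that of X + Y for large n.
import numpy as np

rng = np.random.default_rng(0)
m = 200_000          # Monte Carlo sample size
n = 1_000            # an index far along the sequence

x_n = rng.normal(1 / n, np.sqrt(1 + 1 / n), size=m)
y_n = rng.exponential(1.0, size=m) + 1 / n       # drawn independently of x_n

x = rng.normal(0.0, 1.0, size=m)                 # independent draws from the limits
y = rng.exponential(1.0, size=m)

grid = np.linspace(-4, 8, 25)
F_n = np.array([(x_n + y_n <= t).mean() for t in grid])
F   = np.array([(x + y <= t).mean() for t in grid])
print("max CDF gap:", np.abs(F_n - F).max())     # small, and shrinking as n grows
```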

Ben
  • 124,856
  • 1
    Independence of the sequences may not be sufficient. You also need independence of the limiting $X$ and $Y$. If the sequences are independent but $X = -Y$ you are cooked. – guy Mar 09 '19 at 17:13
  • @guy: I think you'll find (see Alecos' answer below) that the independence of the limiting sequences, plus convergence of the sequences, implies that you cannot have that latter outcome. – Ben Mar 09 '19 at 22:34
  • 1
    The conclusion that $\varphi_X \cdot \varphi_Y$ is the characteristic function of $X + Y$ in @Alecos' answer relies on the fact that $X$ and $Y$ are independent. So it requires $X$ and $Y$ to be independent, if the mode of convergence is $\stackrel{d}{\to}$. Suppose $X_n$ and $Y_n$ are iid $N(0,1)$; then $X_n \stackrel d \to X_1$ and $Y_n \stackrel d \to -X_1$, but $X_n + Y_n \stackrel d \to N(0,2)$ while $X + Y = 0$. – guy Mar 09 '19 at 23:02
  • Okay, I think I see what you're saying. It appears that the OP has not specified that $X$ and $Y$ are to be taken to be independent in his conclusion "$\overset{d}{\rightarrow} X+Y$". I presume that was his intention. You're right that this needs to be specified. – Ben Mar 09 '19 at 23:07
  • @guy If $X_n$ and $Y_n$ are as you describe in your comment, it appears that their limits remain two independent $N(0,1)$, since convergence here is trivial (does not depend on $n$). I don't get how $X_1$ emerges. – Alecos Papadopoulos Mar 09 '19 at 23:14
  • 1
    @Alecos If you agree that they converge to a $N(0,1)$ then you trivially agree that they both converge in distribution to $X_1$ by definition. They also both converge in distribution to $-X_1$, and to all other $N(0,1)$ random variables. Convergence in distribution is not like other modes of convergence: you can converge in distribution to many different random variables; the limiting random variable does not even need to be defined on the same probability space. The only thing which is unique is the marginal distribution. – guy Mar 09 '19 at 23:56
  • 1
    @Alecos put another way, note that the distribution of $X+Y$ is not even well defined just by talking about the sequences being independent. You can have $X_n \to X$ and $Y_n \to Y$ without making any assumption at all about the dependence structure of $X$ and $Y$, even if you make strong assumptions about the dependence of $X_n$ and $Y_n$. All we’ve done is pinned down the marginals of $X$ and $Y$. – guy Mar 10 '19 at 00:03
  • @guy Certainly. What I really wonder about is the following: If we have two sequences $\{X_1,...,X_n\}$ and $\{Y_1,...,Y_n\}$, where the marginals depend on the index, and a) we assume that the pairs $(X_i,Y_i)$ are independent, and b) we assume that each sequence converges to $X$ and $Y$ respectively, is it possible that these limiting random variables are dependent? How could we lose independence at the limit? – Alecos Papadopoulos Mar 10 '19 at 01:10
  • @Alecos My example is of that nature :) – guy Mar 10 '19 at 01:52
  • @guy Ok, I'll try to understand it, but I don't see it yet. At the moment, what I am thinking is that converging to random variables that have the same distribution does not mean that they converge to the same random variable. – Alecos Papadopoulos Mar 10 '19 at 01:55
  • @Alecos the definition of convergence in distribution of random variables is that $X_n \to X$ is defined to mean that $F_n(x) \to F(x)$ (at the continuity points) where $F_n$ is the cdf of $X_n$ and $F$ is the cdf of $X$. Hence, if $X_n \to Z$ where $Z \sim N(0,1)$ then by definition you also have $X_n \to -Z$. So if $\{X_n\}$ and $\{Y_n\}$ are iid $N(0,1)$, the conclusion $X_n + Y_n \to Z + (-Z) = 0$ is false. – guy Mar 10 '19 at 05:03
  • @guy Thanks for insisting, I understand now: it is a statement about convergence of the marginal distribution function, which will be the same for identically distributed RVs irrespective of their probabilistic relation. It is a statement of what is the distribution of the limit, not what is the limiting RV. I have posted a question inspired by our chat here, https://stats.stackexchange.com/q/396617/28746 . Thanks! – Alecos Papadopoulos Mar 10 '19 at 11:13
  • @Ben You may be interested in this thread, https://stats.stackexchange.com/q/396617/28746 – Alecos Papadopoulos Mar 10 '19 at 12:41
  • @guy Your reasoning makes sense, yet the conclusion in your example intuitively seems odd. You are saying that if I have two sequences of iid standard normal random variables, their sum necessarily converges in distribution to the constant zero? – mai Mar 10 '19 at 17:20
  • 1
    Oh; I think I understand. You are saying that I need an added condition on the independence of $X$ and $Y$ for my original statement in the question to hold. Please let me know if I understand correctly. – mai Mar 10 '19 at 17:31
  • 2
    @mai The point is that the sum does not converge to $0$, which shows that the condition that $\{X_n\}$ and $\{Y_n\}$ are independent is not sufficient to guarantee that $X_n + Y_n \to X + Y$; you also need that $X$ and $Y$ are independent. – guy Mar 10 '19 at 20:02
  • @guy Ok, I get it. Thanks for the insightful comments :) – mai Mar 10 '19 at 20:56
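
To make the counterexample discussed in the comments above concrete, here is a small numerical sketch (an editorial illustration, not part of the thread): with $X_n$ and $Y_n$ iid $N(0,1)$, each sequence converges in distribution to $Z \sim N(0,1)$ and equally well to $-Z$, yet the sum is $N(0,2)$ rather than $Z + (-Z) = 0$, because convergence in distribution only pins down the marginal laws of the limits.

```python
# X_n and Y_n iid N(0,1): the sum X_n + Y_n behaves like N(0,2), not the constant 0.
import numpy as np

rng = np.random.default_rng(3)
m = 100_000
x_n = rng.normal(size=m)            # X_n ~ N(0,1)
y_n = rng.normal(size=m)            # Y_n ~ N(0,1), independent of X_n

print("var(X_n + Y_n) =", (x_n + y_n).var())                     # ~2, i.e. N(0,2)
print("is the sum identically 0?", np.allclose(x_n + y_n, 0.0))  # False
```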
5

The Cramer-Wold theorem gives a necessary and sufficient condition:

Let $\{z_n\}$ be a sequence of $R^K$-valued random variables. Then, $$ z_n \to_d z\;\Longleftrightarrow\;\lambda'z_n\to_d \lambda'z\quad\forall\quad \lambda\in R^K\backslash\{0\} $$

To give an example, let $U\sim N(0,1)$ and define $W_n:=U$ as well as $V_n:=(-1)^nU$. We then trivially have $$W_n\to_d U$$ and, due to symmetry of the standard normal distribution, that $$V_n\to_d U.$$ However, $W_n+V_n$ does not converge in distribution, as $$ W_n+V_n=\begin{cases}2U\sim N(0,4)&\text{for}\;n\;\text{even}\\ 0&\text{for}\;n\;\text{odd}\end{cases} $$ This is an application of the Cramer-Wold Device for $\lambda=(1,\;1)'$.
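
A quick simulation sketch of this example (my own illustration, not part of the original answer): each $W_n$ and $V_n$ is exactly $N(0,1)$ for every $n$, but the distribution of the sum alternates with the parity of $n$, so $W_n + V_n$ cannot converge in distribution.

```python
# W_n := U and V_n := (-1)^n * U, with U ~ N(0,1).
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=100_000)                 # one draw of U per replication

for n in (9, 10):                            # one odd and one even index
    s = u + (-1) ** n * u                    # W_n + V_n
    print(f"n = {n}: sample variance of W_n + V_n = {s.var():.3f}")
# n odd  -> variance ~ 0 (the sum is identically 0)
# n even -> variance ~ 4 (the sum is 2U ~ N(0,4))
```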

5

Formalizing @Ben's answer, independence is almost a sufficient condition, because we know that the characteristic function of the sum of two independent RVs is the product of their marginal characteristic functions. Let $Z_n = X_n + Y_n$. Under independence of $X_n$ and $Y_n$,

$$\phi_{Z_n}(t) = \phi_{X_n}(t)\phi_{Y_n}(t)$$

So

$$\lim \phi_{Z_n}(t) =\lim \Big [\phi_{X_n}(t)\phi_{Y_n}(t)\Big]$$

and, since convergence in distribution of $X_n$ and $Y_n$ implies pointwise convergence of their characteristic functions (Lévy's continuity theorem), we have

$$\lim \Big [\phi_{X_n}(t)\phi_{Y_n}(t)\Big] = \lim \phi_{X_n}(t)\cdot \lim \phi_{Y_n}(t) = \phi_{X}(t)\cdot \phi_{Y}(t) $$

which is the characteristic function of $X+Y$... if $X$ and $Y$ are independent. And they will be independent if one of the two has a continuous distribution function (see this post). This is the condition required in addition to independence of the sequences, so that independence is preserved in the limit.

Without independence, the factorization need not hold,

$$\phi_{Z_n}(t) \neq \phi_{X_n}(t)\phi_{Y_n}(t)$$

and no general assertion can be made about the limit.
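
A small numerical sketch of this characteristic-function argument (an added illustration; the distributions are arbitrary choices): under independence the empirical characteristic function of $Z_n$ factorizes, and both sides approach $\phi_X(t)\phi_Y(t)$.

```python
# X_n ~ N(1/n, 1) -> X ~ N(0,1), independent of Y_n ~ U(0, 1 + 1/n) -> Y ~ U(0,1).
import numpy as np

rng = np.random.default_rng(2)
m, n = 200_000, 500
t = 1.3                                            # a fixed argument t

x_n = rng.normal(1 / n, 1.0, size=m)
y_n = rng.uniform(0.0, 1 + 1 / n, size=m)          # drawn independently of x_n

ecf = lambda z: np.exp(1j * t * z).mean()          # empirical characteristic function at t

phi_sum  = ecf(x_n + y_n)                          # phi_{Z_n}(t)
phi_prod = ecf(x_n) * ecf(y_n)                     # phi_{X_n}(t) * phi_{Y_n}(t)
phi_lim  = np.exp(-t**2 / 2) * (np.exp(1j * t) - 1) / (1j * t)   # phi_X(t) * phi_Y(t)

print(abs(phi_sum - phi_prod))                     # ~0: the independence factorization
print(abs(phi_sum - phi_lim))                      # small, shrinking as n grows
```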

  • Great answer (+1). I think with this method it's also worth noting that the weaker assumption $\lim \phi_{Z_n} = \lim \phi_{X_n} \phi_{Y_n}$ (asymptotic independence) goes directly to your second step and so also gives you the result. This shows that asymptotic independence is sufficient for the desired property. – Ben Mar 09 '19 at 22:37