
I am really trying to understand convergence in probability, but I have an example I am struggling with, and perhaps it points toward a deeper issue. Take the following simple case of convergence in probability to a constant using the exponential distribution: let $X_n \sim \text{Exponential}(n)$, so that the rate $\lambda = n$ grows with $n$; the claim is that $X_n \xrightarrow{p} 0$.
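(The check itself is one line of calculus, assuming, as on the linked page, that $X_n$ has rate parameter $n$: for any $\epsilon > 0$,
$$P(|X_n - 0| \ge \epsilon) = P(X_n \ge \epsilon) = e^{-n\epsilon} \to 0 \quad \text{as } n \to \infty,$$
which is exactly the definition of convergence in probability to $0$.)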

I don't understand how this works. Suppose we are running an experiment measuring how much time passes until a new customer enters a shop. If this is the case, then what I don't get is how this sequence of random variables is defined on the same probability space. After all, a random variable is a function from outcomes to the real numbers, but here, to put it simply though inaccurately, the function itself does not change. That is, it is not as if different real values are assigned to the various outcomes as $n$ grows.

Instead, what changes is the rate at which events take place; in other words, it looks like the underlying probability measure changes, not the random variables themselves. If I have a probability measure in hand for the various events, surely the random variable that models this experiment must adopt the rate determined by that measure? If so, how can a sequence like this exist and be defined on the same probability space?

Reference: https://www.probabilitycourse.com/chapter7/7_2_5_convergence_in_probability.php

  • Please tell us what "$X_n$" is. Although you refer to "this sequence of random variables," you haven't told us anything about it! (Even if all the $X_n$ might share a common exponential distribution, that's insufficient -- and your limiting statement is definitely false in that case if they are independent.) – whuber Mar 30 '23 at 17:53
  • Sorry, I have added the original example and also where I got it from. Hope that makes things clearer! – DarkenExcalibur Mar 30 '23 at 18:46
  • Thank you: notice all the $X_n$ have different distributions. This suggests that you explore those distributions with an eye to studying how they change as $n$ increases. Graphing their CDFs is a way that universally works, because every random variable has a CDF. (Another hint: what is the CDF of "the zero random variable"?) For more, have you considered searching our site for similar questions? – whuber Mar 30 '23 at 20:18
  • I understand that as $n \to \infty$ the CDF of each random variable looks more and more like a horizontal line (the CDF of a constant). That is obvious; after all, convergence in probability implies convergence in distribution. But that is not my problem. My issue concerns how these random variables are defined on the same probability space. – DarkenExcalibur Mar 30 '23 at 20:29
  • Okay, I see now. At https://stats.stackexchange.com/a/352897/919 I give a universal construction for finite sequences. You likely can find for yourself how to extend it to countable sequences like this one. – whuber Mar 30 '23 at 20:33
  • My issue is actually a lot simpler than the one you discuss. I will start from afar, and hopefully that will help me resolve the issue for myself. Please tell me whether I am correct in the following: (1) If I have a sequence of random variables defined on the same space, then what changes across each $n$ is the assignment of real numbers to different subsets of the sample space. (2) A probability measure across those subsets is taken as given, and induces changes in the distribution of each random variable in the sequence? – DarkenExcalibur Mar 30 '23 at 20:58
  • I understand. Usually the probability measure is fixed and what changes are the random variables themselves: they are a sequence of functions defined on the space of outcomes. That corresponds to your statement (2) but doesn't seem to align with (1). I'm pretty sure you can find explicit examples in the "similar questions" link I gave above. – whuber Mar 30 '23 at 21:06
  • @DarkenExcalibur convergence in probability does not take account of the joint realization space of the sequence of RVs. It might help to check your understanding against almost sure convergence, the stronger form of convergence which does take account of the joint probability space of the sequence: https://stats.stackexchange.com/questions/2230/convergence-in-probability-vs-almost-sure-convergence – AdamO Mar 30 '23 at 21:12
  • @whuber, with respect to what you said. I agree that it is a sequence of functions and the functions change. What worries me is that I don't see what changes in the exponential case. I can see "visually" that it is the $\lambda$ parameter that changes with each $n$, but I don't understand what that corresponds to at the level of subsets of outcomes in the sample space. – DarkenExcalibur Mar 30 '23 at 21:17
  • @AdamO, I will look at that too. That is the next area of self-study for me. It feels like there is so much I have to learn! – DarkenExcalibur Mar 30 '23 at 21:18
  • Nothing changes at the subset (i.e., sigma-algebra) level! As a specific example of such a sequence, let $X$ be any random variable with an Exponential$(1)$ distribution and set $X_n = X/n.$ (A simulation sketch of this construction appears after the comment thread below.) – whuber Mar 30 '23 at 21:21
  • Alright, I understand that too. That makes sense. But to rephrase: what is it that changes? Suppose that I have the event that a customer arrives after 10 minutes: $P(X>10)$. I know the probability of this event before defining any random variables. The given probability measure tells me what it is: it's fixed. But according to my sequence, this probability is meant to decline with $n$. XD – DarkenExcalibur Mar 30 '23 at 21:27
  • You're looking for a concrete application of a toy theoretical example. That might be frustrating. You will have better luck looking at sequences of estimators, because they do appear in practice: they show up as more and more data are collected, for instance. – whuber Mar 30 '23 at 21:29
  • Oh, I agree with estimators: say, OLS or even the simple sample mean (via the WLLN), this is simple and intuitive. It's just that in the above example, I don't get it. But my problem here is not with its concrete application. It's the theory here I don't get. $(\Omega, \mathcal{F}, P)$ implies that $P(X>10) = \bar{a}$. But then how can a sequence of random variables defined using that same probability space mean that this becomes zero? Or are you telling me that I don't get it because I am thinking in practical terms, i.e. $X$ = minutes/hours, where sequences of random variables do not make much sense? – DarkenExcalibur Mar 30 '23 at 21:39
  • I'm lost here: you ultimately have to draw conclusions about sequences of probabilities. What the probability spaces might be is an irrelevant consideration and is not needed for (nor even particularly useful for) intuition. Such problems come down to basic questions of ordinary Calculus, exactly as shown in the solution. – whuber Mar 31 '23 at 14:07
  • Let me give a simple example. Take a fair 3-sided die that is flipped once, so $\Omega = \{H, T, L\}$. Suppose that $X_1$ is such that a success is H or T and a failure is L, while $X_n$ for all $n > 1$ is such that a success is T or L and a failure is H. Notice that we have a Bernoulli random variable with the probability of success $p$ being determined by (1) how the random variable defines a success and (2) the probability measure that's already there ($2/3$, since the die is fair). My claim for the example above is along similar lines: it appears to me as if the probability measure has to change with $n$? (This die example is written out in code after the thread below.) – DarkenExcalibur Mar 31 '23 at 14:47
  • This is fine in our exponential example because we are converging to a constant. In the Bernoulli example, the measure stays the same but the definitions change, and we converge to a new random variable. The reason I give for the measure changing in the exponential case is that the definition of "minutes of waiting for a customer" does not change, hence it must be that the probability measure changes. This is what leads to the expected value evolving as $n \to \infty$. – DarkenExcalibur Mar 31 '23 at 14:48
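To make whuber's $X_n = X/n$ construction concrete, here is a minimal simulation sketch (assuming Python with NumPy; the names are illustrative, not from the linked page). One probability space is fixed up front, the Exponential$(1)$ law of an outcome $\omega$, and each $X_n$ is merely a different function of the same outcomes:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Fix ONE probability space: outcomes omega drawn under the Exponential(1) law.
# Nothing about this measure changes below.
omega = rng.exponential(scale=1.0, size=100_000)

eps = 0.5
for n in [1, 2, 5, 10, 50]:
    # X_n is a different *function* on the same space: X_n(omega) = omega / n.
    # Its induced distribution is Exponential with rate n.
    x_n = omega / n
    # Empirical tail P(|X_n - 0| > eps) vs. the exact value exp(-n * eps).
    print(n, np.mean(x_n > eps), np.exp(-n * eps))
```

The tail probability $P(|X_n - 0| > \epsilon)$ shrinks to zero even though neither the sample space nor the measure is ever touched; only the functions $X_n$ change, and with them the induced distributions.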

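The die example from the last comments can be written out the same way (a tiny sketch in plain Python; the dictionaries are illustrative). The measure on $\Omega = \{H, T, L\}$ is fixed once, and only the functions, i.e. which outcomes count as a success, differ between $X_1$ and the later $X_n$:

```python
# Fixed sample space and fixed fair measure for the 3-sided die.
omega_space = ["H", "T", "L"]
prob = {"H": 1/3, "T": 1/3, "L": 1/3}  # the measure never changes

# Two different random variables (functions) on the SAME space:
x_1 = {"H": 1, "T": 1, "L": 0}  # success = {H, T}
x_n = {"H": 0, "T": 1, "L": 1}  # success = {T, L}, for every n > 1

# Each induced success probability comes from the fixed measure applied
# to a different event {omega : X(omega) = 1}.
p_1 = sum(prob[w] for w in omega_space if x_1[w] == 1)
p_n = sum(prob[w] for w in omega_space if x_n[w] == 1)
print(p_1, p_n)  # both 2/3, but obtained from different events
```

Here the events $\{X_1 = 1\}$ and $\{X_n = 1\}$ are different subsets of $\Omega$, yet the measure itself is untouched, which is the distinction the comments are circling: the functions change, the measure does not.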
0 Answers