6

Let $X_n$ be a uniformly integrable (UI) sequence of random variables. If we have $$ X_n = \mu + O_p(n^{-1}), $$ then for $0 \le \delta < 1$ this implies $$ X_n = \mu + o_p(n^{-\delta}) \quad \quad \implies \quad \quad n^\delta(X_n - \mu) = o_p(1). $$

Since $X_n$ is UI and $n^\delta(X_n - \mu)$ converges in probability to zero, it converges to zero in expectation, and we get $$ E[n^\delta(X_n - \mu)] = o(1) \quad \quad \implies \quad \quad E[X_n - \mu] = o(n^{-\delta}). $$

So we have converted convergence in probability into convergence in expectation: $$ X_n = \mu + O_p(n^{-1}) \quad \quad \text{to} \quad \quad E[X_n] = \mu + o(n^{-\delta}). $$

But we have lost some precision: we have moved from a Big $O_p$ at the specific rate $n^{-1}$ to a little $o$ at a rate $n^{-\delta}$, where $\delta$ is less than $1$ but can be arbitrarily close to it. What I want to know is: is it possible to go from a Big $O_p$ to a Big $O$ and say $$ X_n = \mu + O_p(n^{-1}) \quad \quad \text{to} \quad \quad E[X_n] = \mu + O(n^{-1})? $$
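To make the chain above concrete, here is a small numerical sketch (purely illustrative, not a proof), using a hypothetical UI sequence $X_n = \mu + Z/n$ with $Z \sim \text{Exponential}(1)$, so that $X_n = \mu + O_p(n^{-1})$:

```python
import numpy as np

# Numerical sketch of the chain above (illustrative only): a hypothetical
# UI sequence X_n = mu + Z/n with Z ~ Exponential(1), so X_n = mu + O_p(1/n).
# For delta < 1 we should see E[n^delta (X_n - mu)] = n^(delta - 1) -> 0,
# i.e. E[X_n] = mu + o(n^{-delta}).
rng = np.random.default_rng(0)
mu, delta, reps = 2.0, 0.9, 10**6

for n in [10, 100, 1_000, 10_000]:
    X_n = mu + rng.exponential(1.0, size=reps) / n
    mc = np.mean(n**delta * (X_n - mu))          # Monte Carlo estimate
    print(f"n={n:>6}  E[n^delta (X_n - mu)] ~ {mc:.4f}  (exact: {n**(delta - 1):.4f})")
```

For this particular sequence the Monte Carlo estimates track the exact value $n^{\delta-1} \to 0$, as the argument above predicts.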

sonicboom
  • 930

1 Answer

6

Here is a counterexample:

$P(X_n = 1) = \frac{1}{\sqrt{n}}$

$P(X_n = 0) = 1 - \frac{1}{\sqrt{n}}$

To show that $X_n = O_p(\frac{1}{n})$: given $\epsilon > 0$, let $M = N > \frac{1}{\epsilon^2}$.

Then for $n > N$, $P(n|X_n| > M) = P(|X_n| > \frac{M}{n}) = P(X_n = 1) = \frac{1}{\sqrt{n}} < \epsilon$ as required (here $\frac{M}{n} < 1$ since $n > N = M$, and $X_n \in \{0, 1\}$).

But $E(X_n) = \frac{1}{\sqrt{n}}$, which is not $O(\frac{1}{n})$.
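A quick simulation sketch of this behaviour (just a numerical sanity check, with an arbitrary choice of $M$, not part of the proof):

```python
import numpy as np

# Simulation sketch of the counterexample above (illustrative only):
# P(X_n = 1) = 1/sqrt(n), P(X_n = 0) = 1 - 1/sqrt(n).
# n*|X_n| exceeds a fixed M only with probability <= 1/sqrt(n) -> 0,
# so X_n = O_p(1/n), yet n * E[X_n] = sqrt(n) diverges: E[X_n] is not O(1/n).
rng = np.random.default_rng(0)
reps = 10**6
M = 10

for n in [100, 1_000, 10_000, 100_000]:
    X_n = (rng.random(reps) < 1 / np.sqrt(n)).astype(float)
    print(f"n={n:>7}  P(n|X_n| > {M}) ~ {np.mean(n * X_n > M):.4f}"
          f"  n*E[X_n] ~ {n * X_n.mean():.1f}")
```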

fblundun
  • 3,959
  • That is not a uniformly integrable sequence. We can go from convergence in probability to convergence in expectation once we have uniform integrability. My question is whether we can go from Big $O_p$ in probability to Big $O$ in expectation, because at the moment all I know is that we can go from Big $O_p$ in probability to little $o$ in expectation. – sonicboom Dec 16 '20 at 14:22
  • 3
    @sonicboom sorry, I didn't notice the uniform integrability condition. I've updated the answer with a uniformly integrable example. – fblundun Dec 16 '20 at 17:14
  • What is the source of the discrepancy with the OP's post, which states that $X_n = \mu + O_p(n^{-1})$ should imply $E[X_n] = \mu + o(n^{-\delta})$ for any $\delta$ less than 1, and thus should exclude $E[X_n] = n^{-0.5}$? – Sextus Empiricus Dec 16 '20 at 17:39
  • 1
    @SextusEmpiricus I believe it's that although $X_n$ is UI, $n^\delta (X_n - \mu)$ might not be UI (and in my example is not for $\delta = 0.5$), so might not converge in expectation to 0. – fblundun Dec 16 '20 at 18:24
  • So a requirement is that $n X_n$ is UI, and not just $X_n$. Would that be a sufficient condition? – Sextus Empiricus Dec 16 '20 at 19:30
  • @fblundun Thank you for the counterexample. The fact that you have shown Big $O_p$ doesn't imply Big $O$ is very interesting because it is in direct conflict with a statement I found in some lecture notes on asymptotic theory. I have asked a question about this conflict here – sonicboom Dec 17 '20 at 11:08
  • 2
    @SextusEmpiricus "$nX_n$ is UI" isn't a necessary condition for OP's conclusion (though it might be sufficient) - for example if $P(Y_n = 1) = n^{-1}$ and $P(Y_n = 0) = 1 - n^{-1}$ then $E(Y_n) = n^{-1}$, but $nY_n$ is not UI. Maybe the necessary condition is "whenever $f(n) = o(n)$, $f(n)X_n$ is UI"? – fblundun Dec 17 '20 at 12:09
  • 1
    @fblundun Your counterexample shows the statement doesn't hold in the general case. What about in the case where $X_n = \sum_{i=1}^n x_i / \sum_{i=1}^n x_i^2$, where $\{x_i\}_{i=1}^n$ are iid with $E[x_i^k] = \mu_k < \infty$? Here $$ \begin{align} X_n &= (n\mu_1 + O_p(n^{1/2}))/(n\mu_2 + O_p(n^{1/2})) \\ &= (O_p(n) + O_p(n^{1/2}))/(O_p(n) + O_p(n^{1/2})) \\ &= O_p(n)/O_p(n) \\ &= O_p(1). \end{align} $$ Since the $\{x_i\}_{i=1}^n$ are iid, the numerator and denominator are very well-behaved as $n$ increases, so I think in this case $X_n = O_p(1) \implies E[X_n] = O(1)$. But I'm not sure. – sonicboom Dec 17 '20 at 18:29
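A simulation sketch of the ratio $X_n = \sum_i x_i / \sum_i x_i^2$ from the last comment (assuming, purely for concreteness, $x_i \sim \text{Exponential}(1)$, so $\mu_1 = 1$ and $\mu_2 = 2$); it suggests $E[X_n]$ settles near $\mu_1/\mu_2 = 1/2$, though simulation alone cannot establish $E[X_n] = O(1)$:

```python
import numpy as np

# Simulation sketch of X_n = sum(x_i) / sum(x_i^2) from the last comment,
# assuming (purely for illustration) x_i ~ Exponential(1): mu_1 = 1, mu_2 = 2.
rng = np.random.default_rng(0)
reps = 10_000

for n in [10, 100, 1_000]:
    x = rng.exponential(1.0, size=(reps, n))
    X_n = x.sum(axis=1) / (x**2).sum(axis=1)
    print(f"n={n:>5}  E[X_n] ~ {X_n.mean():.4f}")   # should approach mu_1/mu_2 = 0.5
```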