What's the distribution of $\bar{X}^{-1}$ with X being a continuous iid random variable that is uniformly distributed? Can I use the CLT here?
1) Uniform on what? 2) Just to double check: $\overline{X^{−1}}$ or $\bar{X}^{−1}$? – Glen_b Apr 14 '13 at 05:56
1) Uniform on an interval of choice. 2) The second. – Majte Apr 14 '13 at 20:34
Are you sure you know exactly what the CLT says, and whether it is at all applicable to the sample mean $\bar{X}$? – Dilip Sarwate Apr 14 '13 at 22:05
I am asking because I am not sure. If I were, I would not have asked. – Majte Apr 14 '13 at 23:00
3 Answers
I'm going to assume you mean "uniformly continuous" and by your reference to the CLT I'm going to assume that you are asking for the asymptotic distribution of $(\bar X)^{-1}$ and also that $\mbox{Var}(X_1) =\sigma^2 < \infty$ with $X_1, X_2, ...$ being an iid sequence from some distribution.
If all of that was what you meant to suppose then you can use the delta method to get the asymptotic distribution of $(\bar X)^{-1}$. Let $g(\mu) = \frac 1 \mu$ where $E(X_1) = \mu \ne 0$. The delta method implies that $\sqrt n (g(\bar X) - g(\mu)) \to \mathcal N(0, g'(\mu)^2 \sigma^2)$ in distribution. So $\sqrt n (\bar X^{-1} - \mu^{-1})$ has an asymptotic $\mathcal N\left(0, \frac{\sigma^2}{\mu^4}\right)$ distribution.
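As a concrete instance, plugging in the moments of a uniform distribution: if $X_1 \sim \mathcal U(a, b)$ with $a + b \neq 0$, then
$$\mu = \frac{a+b}{2}, \qquad \sigma^2 = \frac{(b-a)^2}{12}, \qquad \frac{\sigma^2}{\mu^4} = \frac{4(b-a)^2}{3(a+b)^4},$$
so for $\mathcal U(-2, 1)$ the asymptotic variance is $(3/4)/(-1/2)^4 = 12$.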
Edit: Since I've received skepticism that this applies when $X$ is allowed to get close to $0$ (the delta method supposedly not applying), here is code showing that it does, in fact, work when $X \sim \mathcal U(-2, 1)$. Obviously it won't work if $\mu = 0$, since then $g(\mu)$ is undefined. The delta method makes no moment assumptions about $g(X)$; the only assumptions are that $g$ is differentiable at $\mu$ with $g'(\mu) \ne 0$ and that $\sqrt n (\bar X - \mu)$ obeys a CLT. Maybe it is weird that $g(\bar X)$ can have an asymptotic variance when its exact variance doesn't exist, but such is life.
# Simulate the standardized statistic sqrt(n) * (1/xbar - 1/mu) / sqrt(sigma^2/mu^4)
sigma.sq <- 3^2 / 12  # Var(X) for X ~ U(-2, 1): (b - a)^2 / 12
mu <- -0.5            # E(X) = (a + b) / 2
n <- 100000
Z <- numeric(1000)
for (i in 1:1000) {
  X <- runif(n, -2, 1)
  Y <- 1 / mean(X)
  Z[i] <- sqrt(n) * (Y - 1 / mu) / sqrt(sigma.sq / mu^4)
}
hist(Z)
ks.test(Z, pnorm)
I am pretty sure that when the OP said 'continuous uniform' he did NOT mean 'uniformly continuous' but that $X$ was from a continuous uniform distribution. – Glen_b Apr 15 '13 at 00:15
The delta method will only apply if the distribution of $X$ stays away from zero; the OP has since made it clear that it's not necessarily the case ('on an interval of choice'). – Glen_b Apr 15 '13 at 00:19
@Glen_b Well, I specified that $\mu \ne 0$ and since $X$ is continuous $P(X = 0) = 0$ so I think that is enough. The delta method applies even when the mean of $X^{-1}$ does not exist. I was already pretty sure of this, but I simulated it anyway just to make sure and, indeed, the delta method works when we take $X \sim \mathcal U(-2, 1)$. – guy Apr 15 '13 at 01:28
(+1) Indeed, no moment restrictions are needed and $\mu \neq 0$ is all that is required. For a general example showing that this is not all that strange, consider $Z \sim \mathcal N(0,1)$ and $Y$ standard Cauchy. Form $X_n = Z + n^{-1} Y$. Then $X_n$ converges to a standard normal, but it has no (central) moments of any order. For an introductory analysis analog, consider $x_n = 1/n$. Then, $(x_n)$ has the property that $x_n > 0$ for all $n$, but the limit does not satisfy this property. – cardinal Apr 15 '13 at 02:12
To clarify, the first example in my previous comment relates to convergence in distribution in general, not just as it relates to the delta method. (This is also what I meant when I alluded to "a general example".) – cardinal Apr 15 '13 at 02:33
@cardinal with regard to the edit, I take (say) "$\bar X$ has an asymptotic $N\left(\mu, \frac{\sigma^2} n \right)$ distribution" to mean $\sqrt n(\bar X - \mu) / \sigma \to N(0, 1)$ in distribution. I don't mean $\bar X$ converges to something depending on $n$, since obviously $n$ went to infinity. It's probably better to avoid confusion since I guess this language isn't standard, so your edit is good. – guy Apr 15 '13 at 04:28
The question was about the distribution of $\bar{X}^{-1}$. For $X\sim \mathcal U(a,b)$, $\bar{X}$ takes on values in $(a,b)$ for all $n$. If $a < 0 < b$, the support of the density of $\bar{X}^{-1}$ is $(-\infty, a^{-1}) \cup (b^{-1}, \infty)$ for all $n$. So, what is the distribution of $\bar{X}^{-1}$ converging to? Specifically, for $\mathcal U(-2,1)$, is the density of $\bar{X}^{-1}$ converging to a unit probability mass at $-2$, the reciprocal of $\mu = (-2+1)/2 = -\frac{1}{2}$? – Dilip Sarwate Apr 16 '13 at 02:24
@DilipSarwate $(\bar X)^{-1} \to -2$ a.s. trivially by the SLLN, no ($\bar X \to -\frac 1 2$ on the same set as $\bar X^{-1} \to -2$)? I'm not sure what you are getting at, is there a problem? – guy Apr 16 '13 at 05:05
I am not sure if the conditions for SLLN are satisfied. Does $\bar{X}^{-1}$ have a mean? Since the density of $\bar{X}$ is nonzero and continuous at $0$ for all $n$ (when $a < 0 < b$), I suspect that it does not. – Dilip Sarwate Apr 16 '13 at 09:39
@DilipSarwate: Let's restate what guy says in his previous comment. We'll assume, as guy has, that $\mu := \mathbb E X_1 \neq 0$. Now, the SLLN asserts that there exists a set $\Omega_0 \subset \Omega$ such that $\mathbb P (\Omega_0) = 1$ and such that for all $\omega \in \Omega_0$, $\bar X_n(\omega) \to \mu \neq 0$. Then, certainly, $1 / \bar X_n(\omega) \to 1/\mu$ for each $\omega \in \Omega_0$. Hence, ${\bar X}^{-1} \to \mu^{-1}$ almost surely. Does that clarify this particular point? – cardinal Apr 16 '13 at 12:53
@cardinal Yes, I understand that. But the OP asked two questions: (i) what is the distribution of $\bar{X}^{-1}$? and (ii) can the CLT be used in getting the distribution? I think the answer to the second question is No. The point brought up by guy in comments (not in his answer) and further explicated by you says that the SLLN, not the CLT, shows that $\bar{X}^{-1}$ converges a.s. to $\mu^{-1}$. Indeed, the whole delta method and CLT invites the inference (by casual readers or beginners) that for (fixed) large $n$, $\bar{X}^{-1}$ is approximately normal while we know the density is $0$ – Dilip Sarwate Apr 18 '13 at 17:42
@DilipSarwate $\bar X^{-1}$ is not converging to a point on the boundary of its support. $-2$ is on the interior of the support of $\bar X^{-1}$. $-2$ is at the boundary of the support of $\bar X$, not $\bar X^{-1}$. – guy Apr 18 '13 at 19:02
@guy You are correct. I am deleting my previous comment except for the first few words. – Dilip Sarwate Apr 18 '13 at 20:03
@cardinal (continued) the density is $0$ on $(a^{−1},b^{−1})$ for all $n$. – Dilip Sarwate Apr 18 '13 at 20:04
@DilipSarwate: Please review the three comments immediately preceding my last one. Those are precisely the remarks I was addressing (not anything in the answer) regarding the SLLN. (cont.) – cardinal Apr 18 '13 at 21:21
@DilipSarwate: (cont.) Regarding your most recent remarks: I certainly hope that the answer and comments "invite the inference...that for (fixed) large $n$, ${\bar X}^{-1}$ is approximately normal", for that is true! :-) The precise statement is found in the answer. The fact that the density of ${\bar X}^{-1}$ is zero on the domain you have specified (assuming $a < 0 < b$) is not a problem: ${\bar X}^{-1}$ will eventually be on one side and its distribution is rapidly concentrating around $\mu^{-1}$ as $n$ grows! There is no problem here. :-) – cardinal Apr 18 '13 at 21:22
In the absence of a response to the questions in comments, I'll cover both possibilities for the order of the mean and the reciprocal, and discuss why the endpoints of the uniform's support matter.
Let $X$ have a continuous uniform distribution on $[a,b]\,, b>a>0$
Let $Y = 1/X$.
Then $f_Y(y) = \frac{1}{(b-a)y^2}$ for $\frac{1}{b} \le y \le \frac{1}{a}$.
$\text{E}(Y) = \frac{1}{(b-a)} \int_a^b y^{-1} dy = \frac{\ln(b)-\ln(a)}{(b-a)}$
The mean doesn't exist if $a$ is not bounded above zero, given $b$ is positive. More generally, you need both limits on the same side of zero and both bounded away from it for the mean to exist.
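As a quick numerical sketch of the mean formula above (in Python; the endpoints $a=1$, $b=2$ are an arbitrary choice satisfying $0 < a < b$):

```python
import numpy as np

# Check E(1/X) = (ln b - ln a)/(b - a) for X ~ U(a, b) with 0 < a < b.
# The endpoints below are an arbitrary illustrative choice.
rng = np.random.default_rng(0)
a, b = 1.0, 2.0
empirical = (1 / rng.uniform(a, b, size=1_000_000)).mean()
theoretical = (np.log(b) - np.log(a)) / (b - a)
print(empirical, theoretical)  # both near ln(2) = 0.693...
```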
If the mean doesn't exist, the CLT doesn't apply.
If the mean and variance exist ($b$ and $a$ on the same side of 0 and both bounded away from it), then the CLT should apply to $Y$, and $\sqrt{n}(\overline{Y}-\mu_Y)$ should be asymptotically normal.
But what about $\overline{X}^{-1}$? Note that, again, if $b$ and $a$ are on the same side of 0 and both bounded away from it, then $\overline{X}$ also lies between $a$ and $b$, on the same side of 0 and bounded away from it, and so its reciprocal has a mean and variance. While the CLT applies to $X$ (so $\sqrt{n}(\overline{X}-\mu_X)$ would be asymptotically normal), here you take the reciprocal of the mean. At sufficiently large sample sizes, the reciprocal should also be approximately normal (see the delta method).
However, if $b>0$ and $a = 0$ then the CLT applies to $X$ but the reciprocal has no mean or variance at any finite sample size.
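The delta-method claim above can be checked by simulation; this sketch (in Python, with the arbitrarily chosen endpoints $a=2$, $b=4$, both positive and away from 0) standardizes $\overline{X}^{-1}$ and checks that it looks standard normal:

```python
import numpy as np

# Delta-method check: for X ~ U(2, 4), sqrt(n)*(1/xbar - 1/mu) should be
# approximately N(0, sigma^2/mu^4). Endpoints are an illustrative choice.
rng = np.random.default_rng(1)
a, b = 2.0, 4.0
mu = (a + b) / 2              # E(X) = 3
sigma2 = (b - a) ** 2 / 12    # Var(X) = 1/3
n, reps = 20_000, 1_000
z = np.empty(reps)
for i in range(reps):
    x = rng.uniform(a, b, size=n)
    z[i] = np.sqrt(n) * (1 / x.mean() - 1 / mu) / np.sqrt(sigma2 / mu**4)
# After standardizing, z should have mean near 0 and sd near 1
print(z.mean(), z.std())
```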
Majte, The comments after your question suggest it is confusing. If you wish to ask about the CLT--which does not have any obvious application to this problem--then please edit your question to clarify how you conceive of the CLT being used here. Glen_b, I think it is possible to read Majte's latest comment as a true apology and not as a veiled second insult. I appreciate your efforts and trust your efforts will be rewarded with more upvotes (I applied mine yesterday...). – whuber Apr 15 '13 at 13:13
Let $U \sim \text{Uniform}[0,1]$ and let $Z = 1/U$.
Then $P[Z < z] = P[1/U < z] = P[U > 1/z] = 1 - P[U < 1/z] = 1 - 1/z$, valid for $z > 1$.
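This CDF is easy to verify empirically; a short Python sketch (the checkpoints $z = 2, 4, 10$ are arbitrary):

```python
import numpy as np

# Empirical check: for U ~ Uniform(0,1) and Z = 1/U,
# P[Z < z] = 1 - 1/z whenever z > 1.
rng = np.random.default_rng(2)
z = 1 / rng.uniform(size=1_000_000)
for t in (2.0, 4.0, 10.0):
    print(t, (z < t).mean(), 1 - 1 / t)
```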
On second thought I see this doesn't answer your question. But perhaps some of it can be useful along with your idea of using the CLT.
The CLT applies here if you want to evaluate the CDF with an approximation, which is very handy versus working out the actual CDF for the reciprocal of the average of uniform random variables. I experimented using n = 4 Uniform[0,1] variates. The maximum error as compared to the actual cdf was 0.0074, which is an acceptable error for much of applied work. – soakley Apr 23 '13 at 17:34