
I'm trying without success to solve the following exercise in my econometric textbook:

Show that $\sqrt{N}\left(\widehat{\beta_1} - \beta_1 \right) \xrightarrow{d} \mathcal{N}(0,a^2)$, where $a^2$ is a constant, implies that $\widehat{\beta_1}$ is consistent. (Hint: use Slutsky's theorem.)

My attempt is to consider $Y_N \equiv \frac{1}{\sqrt{N}}$, which converges in probability to the constant $c=0$, and $X_N \equiv \sqrt{N}\left(\widehat{\beta_1} - \beta_1 \right)$. Then by Slutsky's theorem I conclude that $X_NY_N = \widehat{\beta_1} - \beta_1 \xrightarrow{d} \mathcal{N}(0, \frac{a^2}{N})$. Now, intuitively, since $\frac{a^2}{N}$ goes to $0$ as $N \to \infty$ and the mean is $0$, it seems clear that the estimator is consistent... However, I cannot figure out how to connect this result to the definition of convergence in probability.

Noah

3 Answers


Your choices of $Y_N$ and $X_N$ are good, but you applied the theorem incorrectly. It does not make sense to take a limit in $N$ and end up with a result that still depends on $N$, such as $\mathcal{N}(0, a^2/N)$.

Let $X \sim \mathcal{N}(0, a^2)$ so that $X_N \overset{d}{\to}X$. You have $Y_N \overset{p}{\to} c = 0$. Slutsky's theorem implies $$\hat{\beta}_N - \beta = X_N Y_N \overset{d}{\to} cX = 0.$$

Then use the fact that convergence in distribution to a constant implies convergence in probability.
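
To make that last fact explicit (a sketch using the CDF definition of $\overset{d}{\to}$): the limit variable $0$ has CDF $F(x) = \mathbf{1}\{x \geq 0\}$, which is continuous everywhere except at $0$. Writing $F_N$ for the CDF of $\hat{\beta}_N - \beta$, for every $\epsilon > 0$ both $\pm\epsilon$ are continuity points of $F$, so

$$P\left(\left|\hat{\beta}_N - \beta\right| > \epsilon\right) \leq F_N(-\epsilon) + 1 - F_N(\epsilon) \to F(-\epsilon) + 1 - F(\epsilon) = 0 + 1 - 1 = 0,$$

which is precisely the definition of $\hat{\beta}_N \overset{p}{\to} \beta$.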

angryavian

The last step is intuitively appealing, but it is incorrect because it is imprecise. Think critically about $\rightarrow_d$: it is a limit on the distribution function $F$, so why, then, is $N$ still appearing in the denominator? We have an intuitive notion that the normal distribution shrinks to a point mass. You can make this explicit by applying the definition of convergence in probability and using an epsilon-delta-type argument to show that, for any given $\epsilon > 0$, there is an $N$ for which the probabilistic statement is satisfied for each $n \ge N$.
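
To see the "shrinks to a point mass" behavior concretely, here is a minimal simulation sketch (purely illustrative, not part of the proof). It takes the sample mean of i.i.d. $\mathcal{N}(\beta_1, a^2)$ draws as a stand-in for $\widehat{\beta_1}$, so that $\sqrt{N}\left(\widehat{\beta_1} - \beta_1\right) \sim \mathcal{N}(0, a^2)$ holds exactly; the values of $\beta_1$, $a$, and $\epsilon$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, a, eps, reps = 2.0, 3.0, 0.1, 1_000  # arbitrary illustrative values

for N in [10, 100, 1_000, 10_000]:
    # Stand-in estimator: sample mean of N i.i.d. N(beta1, a^2) draws,
    # computed once per replication.
    beta1_hat = rng.normal(beta1, a, size=(reps, N)).mean(axis=1)
    # Empirical P(|beta1_hat - beta1| > eps), which should fall toward 0
    # as N grows -- the epsilon-N statement made concrete.
    print(N, np.mean(np.abs(beta1_hat - beta1) > eps))
```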

AdamO

The answer accepted by the OP is wrong.

Assume that $X_N = \sqrt{N}\left(\widehat{\beta_1} - \beta_1 \right) \xrightarrow{d} X \sim D$, where $D$ is any proper statistical distribution. So we know nothing about whether this distribution has zero mean, or anything else. Then for $Y_N = 1/\sqrt{N}$ we still get

$$X_N Y_N \to_d Xc = X\cdot 0 =0,$$

since $X_N$ is bounded in probability (as is any sequence that converges in distribution).
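
To spell out that boundedness step (a sketch): convergence in distribution implies that $(X_N)$ is bounded in probability, i.e. for every $\eta > 0$ there is an $M$ with $P(|X_N| > M) < \eta$ for all large $N$. Then for any $\epsilon > 0$, once $N$ is large enough that $\epsilon\sqrt{N} > M$,

$$P\left(|X_N Y_N| > \epsilon\right) = P\left(|X_N| > \epsilon\sqrt{N}\right) \leq P\left(|X_N| > M\right) < \eta,$$

so the product indeed collapses to $0$ no matter what $D$ is.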

By itself this does not prove anything about the consistency of the estimator, since, as already said, we do not know anything specific about $D$.

Generally speaking, convergence in distribution does not imply convergence in probability. Some additional properties must hold; these are listed and elaborated in the following thread:

https://stats.stackexchange.com/a/379971/28746
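
A standard counterexample to the general implication: let $Z \sim \mathcal{N}(0,1)$ and set $X_N = Z$ for every $N$. By symmetry $X_N \to_d -Z$, yet $P\left(|X_N - (-Z)| > \epsilon\right) = P(|Z| > \epsilon/2)$ is positive and does not depend on $N$, so $X_N$ does not converge in probability to $-Z$.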

In short, convergence in distribution to a zero-mean random variable implies consistency if

1) For the finite-sample distribution of $\sqrt{N}\left(\widehat{\beta_1} - \beta_1 \right)$, the absolute moment of order $2+\delta$, for some $\delta > 0$, exists and is finite.

2) The sequences of first and second moments of $\sqrt{N}\left(\widehat{\beta_1} - \beta_1 \right)$ each converge to a constant.
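
A sketch of why these two conditions deliver consistency: condition 2 gives, in particular, $E\left[N\left(\widehat{\beta_1} - \beta_1\right)^2\right] \to s^2$ for some finite constant $s^2$ (condition 1 guarantees these moments exist and are well behaved). Then

$$E\left[\left(\widehat{\beta_1} - \beta_1\right)^2\right] = \frac{1}{N}\,E\left[N\left(\widehat{\beta_1} - \beta_1\right)^2\right] = \frac{s^2 + o(1)}{N} \to 0,$$

and Markov's inequality, $P\left(\left|\widehat{\beta_1} - \beta_1\right| > \epsilon\right) \leq \epsilon^{-2}\,E\left[\left(\widehat{\beta_1} - \beta_1\right)^2\right]$, then gives $\widehat{\beta_1} \xrightarrow{p} \beta_1$.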