
I am trying to prove that

There is no UMVUE for $\theta$ for the distribution $\text{Unif}\{\theta-1, \theta, \theta+1\}$, where $\theta$ is an integer.

Here is what I have attempted.

I am trying to use Theorem 1.7 from Lehmann and Casella's book, but I got stuck.

The theorem says

Let $X$ have a distribution $\mathbb{P}_\theta$, $\theta\in\Theta$, let $\delta$ be an estimator such that $\mathbb{E}_\theta\delta^2<\infty$, and let $\mathcal{U}$ denote the set of all unbiased estimators of zero that also satisfy $\mathbb{E}_\theta U^2<\infty$ for all $U\in\mathcal{U}$. Then $\delta$ is a UMVUE for its expectation $g(\theta)=\mathbb{E}_\theta(\delta)$ if and only if $$\mathbb{E}_\theta(\delta U)=0,\quad\forall U\in\mathcal{U}\text{ and }\theta\in\Theta.$$

Suppose $\{X_i\}_{i=1}^n\overset{\text{i.i.d.}}{\sim}\text{Unif}\{\theta-1, \theta, \theta+1\}$. Thus $$\mathbb{P}(X=\theta-1)=\mathbb{P}(X=\theta)=\mathbb{P}(X=\theta+1)=1/3.$$

Every unbiased estimator of $0$ is of the form $$\mathcal{U}=\{a\mathbb{1}_{(X=\theta-1)}+b\mathbb{1}_{(X=\theta)}+c\mathbb{1}_{(X=\theta+1)}: a,b,c\in\mathbb{R},\ a+b+c=0\}.$$ This is because for every $U\in\mathcal{U}$ we have $\mathbb{E}(U)=\frac{1}{3}(a+b+c)=0.$

Also, notice $$\mathbb{E}(X)=\theta$$

I was trying to use the theorem, but I do not know how to proceed further.

Tan
    Your set $\mathcal U$ is not made of estimators since the expressions involve $\theta$. – Xi'an Jan 25 '21 at 19:01
  • Ha, that's right. I probably need $X_{(n)}-X_{(1)}$ to get the unbiased estimator of zero. Thanks. – Tan Jan 25 '21 at 20:11
  • Not sure if this helps but there is no complete sufficient statistic in this model: https://stats.stackexchange.com/q/61990/119261. – StubbornAtom May 05 '21 at 15:55

2 Answers


I think you have made the right attempt, since the zero-estimator criterion is a necessary and sufficient condition for a UMVUE. Here is my proof:

Let's first assume there exists a UMVUE $\hat\theta$ for $\theta$. Then, by the theorem, any unbiased estimator $\eta$ of zero satisfies:

  1. $E_{\theta}\,\eta = 0$ for any $\theta\in\Theta$.
  2. $E_{\theta}[\eta \hat \theta]=0$ for any $\theta\in\Theta$.

So the next step is to find a zero estimator that breaks one of these conditions.

Let's start with the first condition: $$E_{\theta}[\eta]=\frac{1}{3}[\eta(\theta-1) + \eta(\theta) + \eta(\theta+1)] = 0.$$ Then any function that satisfies $\eta(x-1) + \eta(x) + \eta(x+1) = 0$ for every integer $x$ meets the requirement. Such an $\eta$ need not satisfy $E_{\theta}(\eta \hat\theta)= 0$, so there is no UMVUE.
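To see the mechanism concretely, here is a minimal numerical sketch (my own illustration, not part of the original argument): take $\eta$ to depend only on $x \bmod 3$, with values $(1, 0, -1)$ on the residues $(0, 1, 2)$, so that $\eta(x-1)+\eta(x)+\eta(x+1)=0$ for every integer $x$, and test the orthogonality condition against the natural unbiased candidate $\delta(X)=X$:

```python
# Sketch: a genuine unbiased estimator of zero (no theta involved),
# built from the recursion eta(x-1) + eta(x) + eta(x+1) = 0.

def eta(x):
    return (1, 0, -1)[x % 3]           # period-3 pattern; Python's % handles negative x

def expect(f, theta):
    """E_theta[f(X)] for X uniform on {theta-1, theta, theta+1}."""
    return sum(f(x) for x in (theta - 1, theta, theta + 1)) / 3

for theta in range(-5, 6):
    assert expect(eta, theta) == 0     # eta really is an unbiased estimator of 0
    print(theta, expect(lambda x: x * eta(x), theta))
    # E_theta[X * eta(X)] is always 1/3 or -2/3, never 0,
    # so by the theorem the unbiased estimator X is not a UMVUE.
```

Of course, this only rules out the particular candidate $X$; ruling out every candidate is what the general argument (and the answer below) must do.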

PZH

Let's approach this from first principles.

An estimator $t$ assigns some guess of $\theta$ to any possible outcome $X.$ Since the possible outcomes are integers, write this guess as $t_i$ when $X=i$ for any integer $i.$

We are hoping to find estimators that tend to be close to $\theta$ when $\theta$ is the parameter. Clearly, when $i$ is observed the possible values of $\theta$ are limited (for sure) to the set $\{i+1,i,i-1\}.$ Thus, a good estimator is likely to guess $\theta$ is close to the observation $i.$ Let us therefore express $t$ in terms of how far it departs from the observation; namely, let

$$t_i = i + \delta_i.$$

When $\theta$ is the parameter, the outcomes $\theta-1,$ $\theta,$ and $\theta+1$ have equal probabilities of $1/3$ and all other integers have zero probability. Consequently,

  1. The expectation of $t$ when $\theta$ is the parameter is $$E(t\mid \theta=i) = \frac{1}{3}\left(t_{i-1} + t_i + t_{i+1}\right) = i + \frac{1}{3}\left(\delta_{i-1} + \delta_i + \delta_{i+1}\right).$$ Because $t$ must be unbiased, this quantity equals $i$ no matter what $i$ might be, showing that for all $i,$ $$\delta_{i-1} + \delta_{i} + \delta_{i+1}=0.$$ Already this is a huge restriction, because if we specify (say) $\delta_0$ and $\delta_1,$ this relation recursively requires $\delta_{-1} = \delta_2 = -(\delta_0 + \delta_1),$ etc., thereby completely determining the estimator (see the numerical sketch after this list).

  2. The variance of $t$ is $$\operatorname{Var}(t\mid \theta=i) = \frac{1}{3}\left((t_{i-1}-i)^2 + (t_i-i)^2 + (t_{i+1}-i)^2\right) \\= \frac{1}{3}\left((\delta_{i-1}-1)^2 + \delta_i^2 + (\delta_{i+1}+1)^2\right).$$ Among all unbiased estimators, this must have the smallest variance for all $i.$
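Here is a minimal numerical sketch of points $(1)$ and $(2)$ (my own illustration; `make_delta` and `variance` are hypothetical helper names). The recursion $\delta_{i+1} = -(\delta_{i-1} + \delta_i)$ makes the sequence repeat with period $3$, so the pair $(\delta_0, \delta_1)$ pins down the whole estimator:

```python
# Sketch: every unbiased estimator t_i = i + delta_i is determined by
# (delta_0, delta_1) through the period-3 pattern (d0, d1, -(d0 + d1)).

def make_delta(d0, d1):
    """Return delta as a function of the integer i."""
    pattern = (d0, d1, -(d0 + d1))
    return lambda i: pattern[i % 3]    # Python's % maps negative i correctly

def expectation(delta, theta):
    """Point (1): E(t | theta) for t_i = i + delta_i."""
    return sum(x + delta(x) for x in (theta - 1, theta, theta + 1)) / 3

def variance(delta, theta):
    """Point (2): Var(t | theta) in terms of the delta_i."""
    return ((delta(theta - 1) - 1) ** 2
            + delta(theta) ** 2
            + (delta(theta + 1) + 1) ** 2) / 3

sym = make_delta(0, 0)                 # the symmetric choice delta_i = 0, i.e. t = X
assert all(expectation(sym, th) == th for th in range(-4, 5))   # unbiased
print([variance(sym, th) for th in range(3)])   # [2/3, 2/3, 2/3] for every theta
```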

It is a straightforward exercise in algebra (or Calculus, using a Lagrange multiplier) to show that for a specific $i,$ a minimum of $(2)$ can be obtained subject to the constraint $(1)$ and implies $\delta_{i-1}=\delta_i=\delta_{i+1}.$ Since this must hold for all $i,$ clearly the $\delta_i$ are all equal, whence they must all equal $0$ (because $\delta_1 = \delta_2 = -(\delta_0+\delta_1) = -2\delta_1$ has the unique solution $\delta_1=0,$ etc.).

Consequently, if a UMVUE exists, its variance is a constant given by $(2),$ equal to $2/3.$ Unfortunately, there are unbiased estimators that achieve smaller variances for specific values of $\theta.$

For instance, suppose you had a strong belief that $\theta=0.$ You might then adjust your estimator to guess $\theta=0$ whenever an outcome consistent with that guess showed up. That is, you would set $t_0=t_1=t_{-1}=0.$ That is equivalent to $\delta_{-1}=1,$ $\delta_0=0,$ and $\delta_1=-1.$ As we have remarked earlier, these initial conditions determine $t$ completely from the recursion $(1).$ Its variance when $\theta=0$ is zero, because it always guesses the correct value of $\theta.$ You can't do any better than that! Moreover, $0 \ll 2/3$ is a huge improvement. But compensating for that is a larger variance for certain other values of $\theta.$ For instance, since $\delta_2 = \delta_{-1} = 1,$ when $\theta=1$ the possible outcomes are $0,1,2,$ for which $t$ guesses $0,$ $0,$ and $3,$ respectively, for a variance of

$$\frac{1}{3}\left((0-1)^2 + (0-1)^2 + (3-1)^2\right) = 2 \gg \frac{2}{3}.$$
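Continuing the numerical sketch from above (reusing the hypothetical `make_delta` and `variance` helpers), this biased-toward-$0$ estimator corresponds to $(\delta_0, \delta_1) = (0, -1)$, since the recursion then gives $\delta_{-1} = \delta_2 = -(0 + (-1)) = 1$:

```python
# Continuing the sketch above: the estimator that always guesses 0
# on the outcomes {-1, 0, 1} consistent with theta = 0.
biased = make_delta(0, -1)    # delta_{-1} = 1, delta_0 = 0, delta_1 = -1
print(variance(biased, 0))    # 0.0: it guesses theta = 0 with certainty
print(variance(biased, 1))    # 2.0: far worse than the constant 2/3
```

Neither estimator dominates the other across all $\theta$, which is exactly the contradiction stated next.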

This contradiction (obtaining a lower variance for certain values of $\theta$) shows that no UMVUE exists.

You might enjoy re-interpreting $\delta$ as an estimator of 0 ;-).

whuber
  • Setting partial derivatives of the Lagrangian equal to zero gives $2/3(\delta_{i-1}-1)+\lambda = 2/3 \delta_i + \lambda = 2/3(\delta_{i+1} + 1) + \lambda = \delta_{i-1}+\delta_i+\delta_{i+1}=0$. Why does this imply $\delta_{i-1}=\delta_i=\delta_{i+1}$? I thought it implies $\delta_{i-1}-1=\delta_i=\delta_{i+1}+1$, which also satisfies the constraint that they sum to zero. – Daniel Xiang Jul 18 '22 at 18:06
  • @Daniel The equations have to hold for all $i.$ See the text immediately above that result. – whuber Jul 18 '22 at 18:22
  • Not sure I understand- if they hold for all $i$, then they hold for $i=0$. Then the first equation reads $\delta_{-1}-1=\delta_0$, which isn't satisfied by $\delta_{-1}=\delta_0=0$. – Daniel Xiang Jul 18 '22 at 20:35
  • @Daniel The integers include the negative numbers. – whuber Jul 18 '22 at 20:37