
A previous answer gives a result for discrete distributions, referencing Halmos (1946). However, I am looking for a more general result; the references provided in that answer only deal with creating non-negative estimators from already existing unbiased estimators.

Bickel and Lehmann (1969) give a characterization of unbiasedly estimable functionals of distributions that are absolutely continuous with respect to a measure $\mu$. However, the characterization is not constructive. Is there a constructive version of this result (Lemma 4.3, and thus Theorem 4.2)? After a lot of searching, nothing came up.

If there is not, how feasible is deriving a general construction?

User1865345
  • I discuss a similar problem in "The Sampling Problem". Is any of the information I give there helpful? Perhaps the results of Akahira et al., 1992, "Unbiased estimation in sequential binomial sampling", may help you. – Peter O. Jul 25 '23 at 08:54

1 Answer


The following is one general construction that builds an unbiased estimator of $f(E[X])$ given Bernoulli coin flips.

Suppose there is a coin that shows heads (or 1) with unknown probability $\lambda$, where $0\lt\lambda\lt 1$. The goal is to produce random variates whose expected value is $f(\lambda)$, where $f$ is a function on the closed unit interval that need not be continuous. This can be done with the following algorithm (Akahira and Koike 1998; Akahira et al. 1992).

  • First, define a sequence $g_1(\lambda), g_2(\lambda), g_3(\lambda), \ldots$ of polynomials in Bernstein form, where the subscript of $g$ is the degree of the polynomial. The polynomials must converge pointwise to $f(\lambda)$. Denote by $g_n[k]$ the $k$-th Bernstein coefficient of the polynomial $g_n$, where $0\le k\le n$.
  • Second, define probabilities $p_0, p_1, p_2, ...$ that are positive and sum to 1 (except $p_0$ can be 0). An example is $p_n = p(1-p)^n$, where $0\lt p\lt 1$.
  1. Generate at random an integer (which is 0 or greater) that equals $i$ with probability $p_i$. Call the integer $n$.
  2. Flip the input coin $n$ times, then set $k$ to the number of times 1 is returned this way.
  3. If $n$ is 0, define $E(n, k)$ as 0. Otherwise, define $E(n, k)$ as $\left(g_n[k]-\tfrac{k}{n}\, g_{n-1}[k-1] - \tfrac{n-k}{n}\, g_{n-1}[k]\right)/p_n$, letting $g_0[k]=0$ and letting $g_m[j]=0$ whenever $j\lt 0$ or $j\gt m$. (Note: This implies that if $g_n = g_{n-1}$, then $E(n,k)$ is 0.)
  4. Return $E(n, k)$.
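
For concreteness, here is a minimal Python sketch of steps 1-4 (my own illustration; the papers do not give code). It assumes the standard Bernstein coefficients $g_n[k] = f(k/n)$, and it picks $p_n = 1/(n(n+1))$ for $n \ge 1$ with $p_0 = 0$, which the construction allows; both choices are assumptions made for this demo, and a smooth $f$ such as $f(\lambda) = \lambda^2$ keeps the summability condition stated below satisfied.

```python
import random

def g(f, n, k):
    """Bernstein coefficient g_n[k]. This sketch assumes the standard
    Bernstein approximation g_n[k] = f(k/n); the conventions g_0[k] = 0
    and g_m[j] = 0 for j < 0 or j > m are built in."""
    if n == 0 or k < 0 or k > n:
        return 0.0
    return f(k / n)

def estimate(coin, f):
    """One draw of E(n, k), where coin() returns 1 with unknown
    probability lambda; the draw has expected value f(lambda)."""
    # Step 1: draw n with probability p_n = 1/(n*(n+1)), n >= 1.
    # If U is uniform on (0, 1], then floor(1/U) has this distribution.
    u = 1.0 - random.random()            # uniform on (0, 1]
    n = int(1.0 / u)
    p_n = 1.0 / (n * (n + 1))
    # Step 2: flip the input coin n times; k = number of heads.
    k = sum(coin() for _ in range(n))
    # Steps 3-4: compute and return E(n, k).
    return (g(f, n, k)
            - k * g(f, n - 1, k - 1) / n
            - (n - k) * g(f, n - 1, k) / n) / p_n

# Demo: estimate f(lambda) = lambda**2 for a coin with lambda = 0.3.
lam = 0.3
coin = lambda: 1 if random.random() < lam else 0
f = lambda x: x * x
samples = [estimate(coin, f) for _ in range(200_000)]
print(sum(samples) / len(samples))       # should be close to 0.09
```

I used $p_n = 1/(n(n+1))$ rather than the geometric example above because, in my quick check, it keeps the variance finite for this particular $f$; with a geometric $p_n$ the estimator is still unbiased, but its tails are heavier.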

The output returned in step 4 will have expected value $f(\lambda)$ if the following condition is met: the sum over all integers $n\ge 0$ of $$\sum_{k=0}^{n} p_n\,|E(n,k)|\,\binom{n}{k}\lambda^k(1-\lambda)^{n-k}$$ is finite whenever $0\lt\lambda\lt 1$ (Akahira et al. 1992). It can be shown that this condition is the same as: $g_1(\lambda) + |g_2(\lambda) - g_1(\lambda)| + |g_3(\lambda) - g_2(\lambda)| + \cdots$ is finite whenever $0\lt\lambda\lt 1$ (see also Gajek 1995).
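
As a quick illustration of this condition (a worked example of my own, not taken from the references): for $f(\lambda)=\lambda^2$ with the standard Bernstein coefficients $g_n[k]=(k/n)^2$, a direct computation gives $g_n(\lambda)=\lambda^2+\lambda(1-\lambda)/n$, so that $$|g_{n+1}(\lambda)-g_n(\lambda)| = \lambda(1-\lambda)\left(\frac{1}{n}-\frac{1}{n+1}\right) = \frac{\lambda(1-\lambda)}{n(n+1)},$$ and the series is finite for every $0\lt\lambda\lt 1$.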

The general problem of building unbiased estimators from a sequence of random variates is what I call the "sampling problem", which I discuss in "The Sampling Problem". Perhaps some of the information I give there is helpful to you.

References:

Akahira, M., & Koike, K. (1998). On the properties of statistical sequential decision procedures. Sugaku Expositions, 11(2).

Akahira, M., Takeuchi, K., & Koike, K. (1992). Unbiased estimation in sequential binomial sampling. Rep. Stat. Appl. Res., JUSE, 39, 1-13.

Gajek, L. (1995). Note on unbiased estimability of the larger of two mean values. Applicationes Mathematicae, 23(2), 239-245.

Peter O.