The three theorems of complete statistics

The Lehmann-Scheffé theorem

In the last article, we discussed the Rao-Blackwell theorem, which allows us to improve any estimator by averaging on a sufficient statistic (i.e. transforming it into a function of a sufficient statistic). Furthermore, the maximum improvement is attained by averaging on a minimal sufficient statistic, and the resulting estimator cannot be improved by further Rao-Blackwellization -- it is the best estimator "of that class".
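
For reference, here is that result in compact form (standard notation, not fixed by the previous article): if $T$ is sufficient and $\hat{\theta}$ is any estimator of $\theta$, the Rao-Blackwellization

$$\hat{\theta}^{*} = E[\hat{\theta} \mid T]$$

is a genuine statistic (sufficiency means the conditional expectation does not involve $\theta$), has the same bias as $\hat{\theta}$, and satisfies $E_\theta[(\hat{\theta}^{*}-\theta)^2] \le E_\theta[(\hat{\theta}-\theta)^2]$ for every $\theta$.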

One may wonder whether averaging an unbiased estimator on a particular MSS could lead to the best unbiased estimator overall.

Let's call such an MSS a complete sufficient statistic, or CSS. What we really require, then, is the uniqueness of the unbiased Rao-Blackwellization on a CSS -- then no other unbiased estimator could possibly be better, because you can't be better than your own Rao-Blackwellization, and that Rao-Blackwellization must coincide with ours.

We can then "reverse mathematics" out the definition of a complete statistic: $T$ is called a complete statistic if there is at most one unbiased estimator of $\theta$ that can be written as a function of $T$. Equivalently, the only unbiased estimator of 0 that can be written as a function of $T$ is the zero function.

In fact it is enough for this uniqueness to hold up to disagreement on a set of measure zero -- i.e. any two such estimators agree with probability one.
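
In the usual textbook form (which is just the definition above written out), $T$ is complete if the only function of $T$ with mean zero under every $\theta$ is the zero function:

$$E_\theta[g(T)] = 0 \ \text{ for all } \theta \quad\Longrightarrow\quad P_\theta\big(g(T) = 0\big) = 1 \ \text{ for all } \theta.$$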

The statement that an unbiased estimator which is a function of a CSS is the best unbiased estimator overall is known as the Lehmann-Scheffé theorem.
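
Spelled out (in standard notation, for estimating $\theta$ itself): if $T$ is a complete sufficient statistic and $g(T)$ is unbiased for $\theta$, then for every other unbiased estimator $\hat{\theta}$,

$$\mathrm{Var}_\theta\big(g(T)\big) \le \mathrm{Var}_\theta\big(\hat{\theta}\big) \quad \text{for all } \theta,$$

i.e. $g(T)$ is the (essentially unique) uniformly minimum-variance unbiased estimator.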


Basu's theorem

Often the data provides information on stuff other than the parameter of interest. We cannot quite ask for a sufficient statistic that provides information only on $\theta$ (by the law of multiple explanations, this is impossible unless the statistic determines $\theta$ with certainty), but there are various ways we can ask that irrelevant information be "minimal". 

One is in the definition of an MSS, as in the last article. The other has to do with something known as ancillary statistics.

An ancillary statistic is a statistic whose distribution does not depend on $\theta$, so it provides no information on $\theta$ -- e.g. the residuals $X_i-\bar{X}$ provide no information on $\mu$ in a normal model. So we might want to ask for a sufficient statistic such that no nonconstant function of it is ancillary. Well, this is implied by completeness but is still not as strong as completeness -- a function of the statistic can have constant mean without its whole distribution being free of $\theta$.
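
To see why the residuals are ancillary (a standard location-family computation, not spelled out above): writing $X_i = \mu + \epsilon_i$ with the $\epsilon_i$ drawn from a fixed, known distribution,

$$X_i - \bar{X} = (\mu + \epsilon_i) - (\mu + \bar{\epsilon}) = \epsilon_i - \bar{\epsilon},$$

so the joint distribution of the residuals does not involve $\mu$ at all.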

In fact, a stronger result, known as Basu's theorem, is implied by completeness: not only is no nonconstant function of a CSS $T$ ancillary, but $T$ is independent of any ancillary statistic $V$. This follows from the fact that $P(V\in A\mid T)-P(V\in A)$ is itself a function of $T$ for each event $A$, and thus its mean being 0 for every $\theta$ implies its triviality.
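
Slightly more explicitly (my notation): for any event $A$, sufficiency makes $P(V \in A \mid T)$ a genuine function of $T$ (it does not involve $\theta$), and ancillarity makes $P(V \in A)$ a constant in $\theta$, so

$$E_\theta\big[P(V \in A \mid T) - P(V \in A)\big] = P_\theta(V \in A) - P(V \in A) = 0 \quad \text{for all } \theta.$$

Completeness then forces $P(V \in A \mid T) = P(V \in A)$ almost surely, and since $A$ was arbitrary, $V$ is independent of $T$.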


Bahadur's theorem

In the above theorems, CSSs seem like everything we wanted from MSSs. In fact, every CSS is an MSS -- this is Bahadur's theorem.
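
The converse fails, though: an MSS need not be complete. A standard counterexample (not from this article): with $X_1,\dots,X_n \sim N(\theta, \theta^2)$ for $\theta > 0$, the pair $(\bar{X}, S^2)$ is minimal sufficient, yet

$$E_\theta\!\left[\tfrac{n}{n+1}\bar{X}^2 - S^2\right] = \tfrac{n}{n+1}\left(\theta^2 + \tfrac{\theta^2}{n}\right) - \theta^2 = 0 \quad \text{for all } \theta,$$

so $\tfrac{n}{n+1}\bar{X}^2 - S^2$ is a nonzero unbiased estimator of 0 that is a function of the MSS, and the MSS is not complete.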
