Hodges' estimator


In statistics, Hodges’ estimator[1] (or the Hodges–Le Cam estimator[2]), named for Joseph Hodges, is a famous[3] counterexample: an estimator that is "superefficient", i.e. it attains smaller asymptotic variance than regular efficient estimators. The existence of such a counterexample is the reason for the introduction of the notion of regular estimators.

Hodges’ estimator improves upon a regular estimator at a single point. In general, any superefficient estimator can outperform a regular estimator on at most a set of Lebesgue measure zero.[4]

Construction

Suppose ${\displaystyle \scriptstyle {\hat {\theta }}_{n}}$ is a "common" estimator for some parameter θ: it is consistent, and converges to some asymptotic distribution Lθ (usually this is a normal distribution with mean zero and variance which may depend on θ) at the √n-rate:

${\displaystyle {\sqrt {n}}({\hat {\theta }}_{n}-\theta )\ {\xrightarrow {d}}\ L_{\theta }\ .}$

Then Hodges’ estimator ${\displaystyle \scriptstyle {\hat {\theta }}_{n}^{H}}$ is defined as[5]

${\displaystyle {\hat {\theta }}_{n}^{H}={\begin{cases}{\hat {\theta }}_{n},&{\text{if }}|{\hat {\theta }}_{n}|\geq n^{-1/4},{\text{ and}}\\0,&{\text{if }}|{\hat {\theta }}_{n}|<n^{-1/4}.\end{cases}}}$

This estimator is equal to ${\displaystyle \scriptstyle {\hat {\theta }}_{n}}$ everywhere except on the small interval [−n−1/4, n−1/4], where it is equal to zero. It is not difficult to see that this estimator is consistent for θ, and its asymptotic distribution is [6]
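The thresholding rule above can be sketched in a few lines of Python (the function name is hypothetical, not from the source):

```python
def hodges(theta_hat: float, n: int) -> float:
    """Hodges' estimator: keep the base estimate theta_hat unless it falls
    inside the shrinking interval [-n^(-1/4), n^(-1/4)], in which case
    return 0."""
    cutoff = n ** -0.25  # n^(-1/4)
    return theta_hat if abs(theta_hat) >= cutoff else 0.0
```

For example, with n = 16 the cutoff is 16^(−1/4) = 0.5, so an estimate of 0.1 is truncated to zero while an estimate of 0.5 is kept unchanged.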

${\displaystyle {\begin{aligned}&n^{\alpha }({\hat {\theta }}_{n}^{H}-\theta )\ {\xrightarrow {d}}\ 0,\qquad {\text{when }}\theta =0,\\&{\sqrt {n}}({\hat {\theta }}_{n}^{H}-\theta )\ {\xrightarrow {d}}\ L_{\theta },\quad {\text{when }}\theta \neq 0,\end{aligned}}}$

for any α ∈ R. Thus this estimator has the same asymptotic distribution as ${\displaystyle \scriptstyle {\hat {\theta }}_{n}}$ for all θ ≠ 0, whereas for θ = 0 the rate of convergence becomes arbitrarily fast. This estimator is superefficient, as it surpasses the asymptotic behavior of the efficient estimator ${\displaystyle \scriptstyle {\hat {\theta }}_{n}}$ at least at one point, θ = 0. In general, superefficiency may only be attained on a subset of measure zero of the parameter space Θ.

Example

Figure: The mean square error (times n) of Hodges’ estimator. The blue curve corresponds to n = 5, purple to n = 50, and olive to n = 500.[7]

Suppose x1, …, xn is an i.i.d. sample from the normal distribution N(θ, 1) with unknown mean but known variance. Then the common estimator for the population mean θ is the arithmetic mean of all observations: ${\displaystyle \scriptstyle {\bar {x}}}$. The corresponding Hodges’ estimator will be ${\displaystyle \scriptstyle {\hat {\theta }}_{n}^{H}\;=\;{\bar {x}}\cdot \mathbf {1} \{|{\bar {x}}|\,\geq \,n^{-1/4}\}}$, where 1{…} denotes the indicator function.
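For this normal-mean setting, the estimator can be computed directly from a sample; a minimal sketch in Python (the function name and sample values are illustrative assumptions):

```python
import random

def hodges_mean(xs):
    """Hodges' estimator for a normal mean: the sample mean x-bar,
    set to zero whenever |x-bar| < n^(-1/4)."""
    n = len(xs)
    xbar = sum(xs) / n
    return xbar if abs(xbar) >= n ** -0.25 else 0.0

# Sample of size 100 from N(2, 1); 100^(-1/4) ~= 0.316, so the sample
# mean (near 2) is well above the cutoff and is returned unchanged.
random.seed(0)
xs = [random.gauss(2.0, 1.0) for _ in range(100)]
estimate = hodges_mean(xs)
```

When the true mean is far from zero the estimator almost always coincides with the sample mean; when the sample mean falls inside the shrinking window it is snapped to zero.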

The mean square error (scaled by n) associated with the regular estimator ${\displaystyle \scriptstyle {\bar {x}}}$ is constant and equal to 1 for all values of θ. At the same time, the mean square error of Hodges’ estimator ${\displaystyle \scriptstyle {\hat {\theta }}_{n}^{H}}$ behaves erratically in the vicinity of zero, and even becomes unbounded as n → ∞. This demonstrates that Hodges’ estimator is not regular, and its asymptotic properties are not adequately described by limits of the form (θ fixed, n → ∞).
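The behavior of the scaled risk can be checked with a small Monte Carlo sketch (an illustrative simulation, not from the source; function name and replication count are assumptions). It exploits the fact that the sample mean of n draws from N(θ, 1) is distributed as N(θ, 1/n):

```python
import random

def scaled_mse(theta, n, reps=20000, seed=0):
    """Monte Carlo estimate of n * E[(theta_hat_H - theta)^2] for
    Hodges' estimator applied to the mean of an N(theta, 1) sample."""
    rng = random.Random(seed)
    cutoff = n ** -0.25
    total = 0.0
    for _ in range(reps):
        # Draw x-bar directly: x-bar ~ N(theta, 1/n).
        xbar = theta + rng.gauss(0.0, 1.0) / n ** 0.5
        est = xbar if abs(xbar) >= cutoff else 0.0
        total += (est - theta) ** 2
    return n * total / reps
```

At θ = 0 the scaled MSE is far below the regular value 1 (the superefficiency), while for θ near the cutoff n^(−1/4) it rises well above 1, reproducing the spike seen in the figure above.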

Notes

1. ^ Vaart (1998, p. 109)
2. ^ Kale (1985)
3. ^ Bickel (1998, p. 21)
4. ^ Vaart (1998, p. 116)
5. ^ Stoica & Ottersten (1996, p. 135)
6. ^ Vaart (1998, p. 109)
7. ^ Vaart (1998, p. 110)

References

• Bickel, Peter J.; Klaassen, Chris A.J.; Ritov, Ya’acov; Wellner, Jon A. (1998). Efficient and adaptive estimation for semiparametric models. New York: Springer. ISBN 0-387-98473-9.
• Kale, B.K. (1985). "A note on the super efficient estimator". Journal of Statistical Planning and Inference. 12: 259–263. doi:10.1016/0378-3758(85)90074-6.
• Stoica, P.; Ottersten, B. (1996). "The evil of superefficiency". Signal Processing. 55: 133–136. doi:10.1016/S0165-1684(96)00159-4.
• Vaart, A. W. van der (1998). Asymptotic statistics. Cambridge University Press. ISBN 978-0-521-78450-4.