Jackknife resampling

From Wikipedia, the free encyclopedia

In statistics, the jackknife is a resampling technique especially useful for variance and bias estimation. The jackknife predates other common resampling methods such as the bootstrap. The jackknife estimator of a parameter is found by systematically leaving out each observation from the dataset, calculating the estimate on each reduced sample, and then averaging these calculations. Given a sample of size n, the jackknife estimate is thus the aggregate of n estimates, each computed from a subsample of size n − 1.

The jackknife technique was developed by Maurice Quenouille (1949, 1956). John Tukey (1958) expanded on the technique and proposed the name "jackknife" since, like a Boy Scout's jackknife, it is a "rough and ready" tool that can solve a variety of problems even though specific problems may be more efficiently solved with a purpose-designed tool.[1]

The jackknife is a linear approximation of the bootstrap.[1]

Estimation

The jackknife estimate of a parameter is found by estimating the parameter on each of the n subsamples, each of which omits one observation. For example, the estimate based on the subsample omitting the ith observation (say \bar{x}_i, for the sample mean) is[2]

\bar{x}_i =\frac{1}{n-1} \sum_{j \neq i}^n x_j
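The leave-one-out means above can be computed without re-summing each subsample, since omitting x_i leaves a total of (sum − x_i) over n − 1 points. A minimal Python sketch (function and variable names are our own, for illustration only):

```python
def loo_means(x):
    """Return the n leave-one-out means x̄_i, one per omitted observation."""
    n = len(x)
    total = sum(x)
    # Omitting x[i] leaves a sum of (total - x[i]) over n - 1 points.
    return [(total - xi) / (n - 1) for xi in x]

sample = [1.0, 2.0, 3.0, 4.0]
print(loo_means(sample))  # [3.0, 2.666..., 2.333..., 2.0]
```

Each entry is the sample mean with one observation removed; these n values feed both the variance and bias formulas below.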

Variance estimation

An estimate of the variance of an estimator can be calculated using the jackknife technique.

\operatorname{Var}_\mathrm{(jackknife)}=\frac{n-1}{n} \sum_{i=1}^n (\bar{x}_i - \bar{x}_\mathrm{(.)})^2

where \bar{x}_i is the parameter estimate based on leaving out the ith observation, and \bar{x}_\mathrm{(.)} is the mean of these leave-one-out estimates.[3]
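The variance formula above can be sketched directly in Python. This illustrates the jackknife variance of the sample mean, where it reduces to the familiar s²/n (names are illustrative, not from the cited sources):

```python
def jackknife_variance(x):
    """Jackknife estimate of Var(x̄): (n-1)/n * sum_i (x̄_i - x̄_(.))^2."""
    n = len(x)
    total = sum(x)
    loo = [(total - xi) / (n - 1) for xi in x]          # leave-one-out means x̄_i
    loo_mean = sum(loo) / n                             # x̄_(.)
    return (n - 1) / n * sum((m - loo_mean) ** 2 for m in loo)

print(jackknife_variance([1.0, 2.0, 3.0, 4.0]))  # 0.41666... = 5/12 = s^2/n
```

For estimators other than the mean, the same formula applies with x̄_i replaced by the estimator evaluated on each leave-one-out subsample.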

Bias estimation and correction

The jackknife technique can be used to estimate the bias of an estimator calculated over the entire sample. Say \hat{\theta} is the estimate of the parameter of interest calculated from all n observations. Define

\hat{\theta}_\mathrm{(.)}=\frac{1}{n} \sum_{i=1}^n \hat{\theta}_\mathrm{(i)}

where \hat{\theta}_\mathrm{(i)} is the estimate of the parameter of interest based on the sample omitting the ith observation, so that \hat{\theta}_\mathrm{(.)} is the mean of the jackknife estimates. The jackknife estimate of the bias is

\widehat{\text{Bias}}_\mathrm{(\theta)}=(n-1)(\hat{\theta}_\mathrm{(.)} - \hat{\theta})

and subtracting it from \hat{\theta} yields the bias-corrected jackknife estimate,

\hat{\theta}_\mathrm{jack}=n\hat{\theta} - (n-1)\hat{\theta}_\mathrm{(.)}

This removes the bias exactly in the special case that the bias is O(n^{-1}), and reduces it to O(n^{-2}) in other cases.[1]

This provides an estimated correction of bias due to the estimation method. The jackknife does not correct for a biased sample.
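As a worked illustration of the correction n·θ̂ − (n−1)·θ̂_(.), consider the plug-in (maximum-likelihood) variance estimator, whose bias is O(n^{-1}); for this estimator the jackknife correction recovers the unbiased sample variance exactly. A hedged Python sketch (all names are our own):

```python
def biased_var(x):
    """Plug-in variance estimator; biased low by a factor (n-1)/n."""
    n = len(x)
    m = sum(x) / n
    return sum((xi - m) ** 2 for xi in x) / n

def jackknife_bias_corrected(estimator, x):
    """Bias-corrected jackknife estimate: n*theta_hat - (n-1)*theta_(.)."""
    n = len(x)
    theta_hat = estimator(x)
    # Estimator applied to each leave-one-out subsample.
    loo = [estimator(x[:i] + x[i + 1:]) for i in range(n)]
    theta_dot = sum(loo) / n
    return n * theta_hat - (n - 1) * theta_dot

data = [2.0, 4.0, 6.0, 8.0]
print(biased_var(data))                            # 5.0 (plug-in estimate)
print(jackknife_bias_corrected(biased_var, data))  # 6.666... = 20/3, the unbiased variance
```

The correction is generic: any estimator can be passed in, though for estimators with bias not of order O(n^{-1}) it only reduces, rather than removes, the bias.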

Notes

  1. ^ a b c Cameron & Trivedi 2005, p. 375.
  2. ^ Efron 1982, p. 2.
  3. ^ Efron 1982, p. 14.

References