In statistics, the delta method is a result concerning the approximate probability distribution for a function of an asymptotically normal statistical estimator from knowledge of the limiting variance of that estimator.
The delta method was derived from propagation of error, and the idea behind it was already known in the early 19th century. Its statistical application can be traced as far back as 1928 to T. L. Kelley. A formal description of the method was presented by J. L. Doob in 1935. Robert Dorfman also described a version of it in 1938.
Univariate delta method
While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables $X_n$ satisfying

$$\sqrt{n}\,(X_n - \theta) \xrightarrow{D} N(0, \sigma^2),$$

where $\theta$ and $\sigma^2$ are finite-valued constants and $\xrightarrow{D}$ denotes convergence in distribution, then

$$\sqrt{n}\,(g(X_n) - g(\theta)) \xrightarrow{D} N\!\left(0, \sigma^2 [g'(\theta)]^2\right)$$

for any function $g$ satisfying the property that $g'(\theta)$ exists and is non-zero valued.
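As a sanity check, the statement above can be explored by simulation. The sketch below is purely illustrative: the choices of Exp(1) data (so $\theta = \sigma^2 = 1$) and $g(x) = x^2$ are assumptions made here, not taken from the text.

```python
import numpy as np

# Monte Carlo sketch of the univariate delta method (illustrative choices:
# Exp(1) data and g(x) = x^2 are assumptions, not from the text).
rng = np.random.default_rng(0)

n, reps = 2_000, 5_000
theta, sigma2 = 1.0, 1.0            # mean and variance of the Exp(1) distribution

# X_n is the sample mean, so sqrt(n)(X_n - theta) -> N(0, sigma2) by the CLT.
x_bar = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

g = lambda x: x ** 2                # g'(theta) = 2*theta = 2, nonzero at theta = 1
z = np.sqrt(n) * (g(x_bar) - g(theta))

# The delta method predicts Var(z) close to sigma2 * g'(theta)^2 = 4.
print(np.var(z))
```

The empirical variance of the transformed, rescaled statistic should land near the delta-method prediction of 4 for large $n$.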
Proof in the univariate case
Demonstration of this result is fairly straightforward under the assumption that $g'$ is continuous. To begin, we use the mean value theorem (i.e., the first-order approximation of a Taylor series using Taylor's theorem):

$$g(X_n) = g(\theta) + g'(\tilde{\theta})\,(X_n - \theta),$$

where $\tilde{\theta}$ lies between $X_n$ and $\theta$. Note that since $X_n \xrightarrow{P} \theta$ and $|\tilde{\theta} - \theta| < |X_n - \theta|$, it must be that $\tilde{\theta} \xrightarrow{P} \theta$, and since $g'$ is continuous, applying the continuous mapping theorem yields

$$g'(\tilde{\theta}) \xrightarrow{P} g'(\theta),$$

where $\xrightarrow{P}$ denotes convergence in probability.

Rearranging the terms and multiplying by $\sqrt{n}$ gives

$$\sqrt{n}\,[g(X_n) - g(\theta)] = g'(\tilde{\theta})\,\sqrt{n}\,[X_n - \theta].$$

Since $\sqrt{n}\,[X_n - \theta] \xrightarrow{D} N(0, \sigma^2)$ by assumption, it follows immediately from appeal to Slutsky's theorem that

$$\sqrt{n}\,[g(X_n) - g(\theta)] \xrightarrow{D} N\!\left(0, \sigma^2 [g'(\theta)]^2\right).$$

This concludes the proof.
Proof with an explicit order of approximation
Alternatively, one can add one more step at the end to obtain the order of approximation:

$$\sqrt{n}\,[g(X_n) - g(\theta)] = g'(\theta)\,\sqrt{n}\,[X_n - \theta] + \sqrt{n}\,[X_n - \theta]\,\bigl[g'(\tilde{\theta}) - g'(\theta)\bigr] = g'(\theta)\,\sqrt{n}\,[X_n - \theta] + O_p(1)\cdot o_p(1) = g'(\theta)\,\sqrt{n}\,[X_n - \theta] + o_p(1).$$

This suggests that the error in the approximation converges to 0 in probability.
Multivariate delta method
By definition, a consistent estimator $B$ converges in probability to its true value $\beta$, and often a central limit theorem can be applied to obtain asymptotic normality:

$$\sqrt{n}\,(B - \beta) \xrightarrow{D} N(0, \Sigma),$$

where $n$ is the number of observations and $\Sigma$ is a (symmetric positive semi-definite) covariance matrix. Suppose we want to estimate the variance of a scalar-valued function $h$ of the estimator $B$. Keeping only the first two terms of the Taylor series, and using vector notation for the gradient, we can estimate $h(B)$ as

$$h(B) \approx h(\beta) + \nabla h(\beta)^{\mathsf{T}} \cdot (B - \beta),$$

which implies the variance of $h(B)$ is approximately

$$\operatorname{Var}\bigl(h(B)\bigr) \approx \nabla h(\beta)^{\mathsf{T}} \cdot \operatorname{Cov}(B) \cdot \nabla h(\beta) = \nabla h(\beta)^{\mathsf{T}} \cdot \frac{\Sigma}{n} \cdot \nabla h(\beta).$$
One can use the mean value theorem (for real-valued functions of many variables) to see that this does not rely on taking a first-order approximation.
The delta method therefore implies that

$$\sqrt{n}\,\bigl(h(B) - h(\beta)\bigr) \xrightarrow{D} N\!\left(0, \nabla h(\beta)^{\mathsf{T}} \cdot \Sigma \cdot \nabla h(\beta)\right),$$

or in univariate terms,

$$\sqrt{n}\,\bigl(g(X_n) - g(\theta)\bigr) \xrightarrow{D} N\!\left(0, \sigma^2 [g'(\theta)]^2\right).$$
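The variance formula above is straightforward to evaluate numerically. The sketch below applies it to a hypothetical ratio $h(b_1, b_2) = b_1 / b_2$; the parameter values and covariance matrix are made-up illustrative numbers, not from the text.

```python
import numpy as np

# Sketch of the first-order delta-method variance formula for a hypothetical
# ratio h(b1, b2) = b1 / b2; all numbers are made up for illustration.
def delta_var(beta, cov, grad):
    """Approximate Var(h(B)) as grad(beta)^T Cov(B) grad(beta)."""
    g = grad(beta)
    return g @ cov @ g

beta = np.array([2.0, 4.0])                  # assumed true value of B
cov_B = np.array([[0.01, 0.002],             # Cov(B), i.e. Sigma / n
                  [0.002, 0.02]])

# Gradient of h(b1, b2) = b1 / b2 is (1/b2, -b1/b2^2).
grad_h = lambda b: np.array([1.0 / b[1], -b[0] / b[1] ** 2])

print(delta_var(beta, cov_B, grad_h))        # approximately 0.0008125
```

In practice $\beta$ and $\Sigma$ are unknown, so the gradient and covariance are evaluated at their estimates, which preserves the same order of approximation.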
Example: the binomial proportion
Suppose $X_n$ is binomial with parameters $p$ and $n$. Since

$$\sqrt{n}\left(\frac{X_n}{n} - p\right) \xrightarrow{D} N\bigl(0, p(1-p)\bigr),$$

we can apply the delta method with $g(\theta) = \log(\theta)$ to see

$$\sqrt{n}\left(\log\frac{X_n}{n} - \log p\right) \xrightarrow{D} N\!\left(0, \frac{p(1-p)}{p^2}\right) = N\!\left(0, \frac{1-p}{p}\right).$$

Hence, even though for any finite $n$ the variance of $\log(X_n/n)$ does not actually exist (since $X_n$ can be zero), the asymptotic variance of $\log(X_n/n)$ does exist and is equal to

$$\frac{1-p}{np}.$$

Note that since $p > 0$, $\frac{X_n}{n} \xrightarrow{P} p$ as $n \to \infty$, so with probability converging to one, $\log(X_n/n)$ is finite for large $n$.

Moreover, if $\hat{p}$ and $\hat{q}$ are estimates of different group rates from independent samples of sizes $n$ and $m$ respectively, then the logarithm of the estimated relative risk $\hat{p}/\hat{q}$ has asymptotic variance equal to

$$\frac{1-p}{p\,n} + \frac{1-q}{q\,m}.$$
This is useful to construct a hypothesis test or to make a confidence interval for the relative risk.
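A minimal sketch of such a confidence interval, computed on Wald's log scale and exponentiated back; the event counts are made-up illustrative numbers, and the variance formula is the one given above with $p$ and $q$ replaced by their estimates.

```python
import math

# Sketch: Wald confidence interval for a relative risk via the delta method,
# using the asymptotic variance (1 - p)/(p n) + (1 - q)/(q m) from the text.
# The event counts below are made-up illustrative numbers.
def relative_risk_ci(x1, n, x2, m, z=1.96):
    p_hat, q_hat = x1 / n, x2 / m
    log_rr = math.log(p_hat / q_hat)
    se = math.sqrt((1 - p_hat) / (p_hat * n) + (1 - q_hat) / (q_hat * m))
    return math.exp(log_rr - z * se), math.exp(log_rr + z * se)

lo, hi = relative_risk_ci(30, 100, 20, 100)  # estimated RR = 0.3 / 0.2 = 1.5
print(lo, hi)                                # roughly (0.92, 2.46)
```

Working on the log scale keeps the interval positive and is where the normal approximation is more accurate for ratio estimates.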
Alternative form

The delta method is often used in a form that is essentially identical to that above, but without the assumption that $X_n$ or $B$ is asymptotically normal. Often the only context is that the variance is "small". The results then just give approximations to the means and covariances of the transformed quantities. For example, the formulae presented in Klein (1953, p. 258) are:

$$\operatorname{Var}(h_r) = \sum_i \left(\frac{\partial h_r}{\partial B_i}\right)^2 \operatorname{Var}(B_i) + \sum_i \sum_{j \neq i} \left(\frac{\partial h_r}{\partial B_i}\right)\left(\frac{\partial h_r}{\partial B_j}\right) \operatorname{Cov}(B_i, B_j)$$

$$\operatorname{Cov}(h_r, h_s) = \sum_i \left(\frac{\partial h_r}{\partial B_i}\right)\left(\frac{\partial h_s}{\partial B_i}\right) \operatorname{Var}(B_i) + \sum_i \sum_{j \neq i} \left(\frac{\partial h_r}{\partial B_i}\right)\left(\frac{\partial h_s}{\partial B_j}\right) \operatorname{Cov}(B_i, B_j)$$

where $h_r$ is the $r$th element of $h(B)$ and $B_i$ is the $i$th element of $B$.
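These sums are just the components of the matrix product $J \operatorname{Cov}(B)\, J^{\mathsf{T}}$, where $J$ is the Jacobian of $h$. The sketch below evaluates them for a made-up two-component example $h(B) = (B_1 + B_2,\ B_1 B_2)$; all numbers are illustrative assumptions.

```python
import numpy as np

# Sketch of Klein-style small-variance approximations for a made-up example
# h(B) = (B1 + B2, B1 * B2); J holds the partial derivatives dh_r / dB_i.
B = np.array([1.0, 2.0])
cov_B = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

h = lambda b: np.array([b[0] + b[1], b[0] * b[1]])
J = np.array([[1.0, 1.0],          # gradient of h_1 = B1 + B2
              [B[1], B[0]]])       # gradient of h_2 = B1 * B2

mean_h = h(B)                      # E[h_r] is approximately h_r(B) to first order
cov_h = J @ cov_B @ J.T            # matrix form of Klein's Var/Cov sums
print(mean_h)
print(cov_h)
```

The matrix form avoids writing out the double sums by hand and scales to any number of components.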
Second-order delta method
When $g'(\theta) = 0$ the delta method cannot be applied. However, if $g''(\theta)$ exists and is not zero, the second-order delta method can be applied. By the Taylor expansion,

$$n\,[g(X_n) - g(\theta)] = \frac{n}{2}\,[X_n - \theta]^2\, g''(\theta) + o_p(1),$$

so that

$$n\,[g(X_n) - g(\theta)] \xrightarrow{D} \frac{\sigma^2 g''(\theta)}{2}\,\chi^2_1,$$

and the variance of $g(X_n)$ relies on moments of $X_n - \theta$ up to the fourth.
The second-order delta method is also useful in conducting a more accurate approximation of the distribution of $g(X_n)$ when the sample size is small. For example, when $\sqrt{n}\,(X_n - \theta)/\sigma$ follows the standard normal distribution, $g(X_n)$ can be approximated as the weighted sum of a standard normal and a chi-square with one degree of freedom:

$$g(X_n) \approx g(\theta) + g'(\theta)\,\frac{\sigma}{\sqrt{n}}\,Z + \frac{g''(\theta)}{2}\,\frac{\sigma^2}{n}\,Z^2, \qquad Z \sim N(0, 1).$$
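The chi-square limit in the degenerate case $g'(\theta) = 0$ can be checked by simulation. The sketch below uses the assumed setup of $X_n$ being the mean of $n$ standard normals with $g(x) = x^2$, in which case the limit is exactly a $\chi^2_1$ variable.

```python
import numpy as np

# Monte Carlo sketch of the second-order delta method: X_n is the mean of n
# standard normals (theta = 0, sigma^2 = 1) and g(x) = x^2, so g'(0) = 0 and
# g''(0) = 2; the limit (sigma^2 g''(theta) / 2) * chi2_1 is then exactly chi2_1.
rng = np.random.default_rng(1)

n, reps = 500, 20_000
x_bar = rng.standard_normal((reps, n)).mean(axis=1)
z = n * x_bar ** 2                 # n (g(X_n) - g(theta))

print(z.mean(), np.var(z))         # chi2_1 has mean 1 and variance 2
```

The sample mean and variance of the simulated statistic should be close to 1 and 2, the moments of $\chi^2_1$.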
See also

- Taylor expansions for the moments of functions of random variables
- Variance-stabilizing transformation
References

- Portnoy, Stephen (2013). "Letter to the Editor". The American Statistician. 67 (3): 190. doi:10.1080/00031305.2013.820668.
- Kelley, Truman L. (1928). Crossroads in the Mind of Man: A Study of Differentiable Mental Abilities. pp. 49–50. ISBN 978-1-4338-0048-1.
- Doob, J. L. (1935). "The Limiting Distributions of Certain Statistics". Annals of Mathematical Statistics. 6: 160–169. doi:10.1214/aoms/1177732594. JSTOR 2957546.
- Ver Hoef, J. M. (2012). "Who invented the delta method?". The American Statistician. 66 (2): 124–127. doi:10.1080/00031305.2012.687494. JSTOR 23339471.
- Klein, L. R. (1953). A Textbook of Econometrics. p. 258.
- Oehlert, G. W. (1992). "A Note on the Delta Method". The American Statistician. 46 (1): 27–29. doi:10.1080/00031305.1992.10475842. JSTOR 2684406.
- Wolter, Kirk M. (1985). "Taylor Series Methods". Introduction to Variance Estimation. New York: Springer. pp. 221–247. ISBN 0-387-96119-4.
- Asmussen, Søren (2005). "Some Applications of the Delta Method" (PDF). Lecture notes. Aarhus University.
- Feiveson, Alan H. "Explanation of the delta method". Stata Corp.
- Xu, Jun; Long, J. Scott (August 22, 2005). "Using the Delta Method to Construct Confidence Intervals for Predicted Probabilities, Rates, and Discrete Changes" (PDF). Lecture notes. Indiana University.