Mills ratio


In probability theory, the Mills ratio (or Mills's ratio[1]) of a continuous random variable ${\displaystyle X}$ is the function

${\displaystyle m(x):={\frac {{\bar {F}}(x)}{f(x)}},}$

where ${\displaystyle f(x)}$ is the probability density function, and

${\displaystyle {\bar {F}}(x):=\Pr[X>x]=\int _{x}^{+\infty }f(u)\,du}$

is the complementary cumulative distribution function (also called the survival function). The concept is named after John P. Mills.[2] The Mills ratio is related to the hazard rate h(x), which is defined as[3]

${\displaystyle h(x):=\lim _{\delta \to 0}{\frac {1}{\delta }}\Pr[x<X\leq x+\delta \mid X>x]}$

by

${\displaystyle m(x)={\frac {1}{h(x)}}.}$
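For the standard normal case, these definitions can be computed directly from the Python standard library, using the identity ${\bar {F}}(x)=\operatorname{erfc}(x/{\sqrt {2}})/2$. A minimal sketch (function names are illustrative, not from any particular library):

```python
import math

def std_normal_pdf(x: float) -> float:
    """Standard normal density f(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def std_normal_sf(x: float) -> float:
    """Survival function F-bar(x) = Pr[X > x], via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mills_ratio(x: float) -> float:
    """Mills ratio m(x) = F-bar(x) / f(x)."""
    return std_normal_sf(x) / std_normal_pdf(x)

def hazard_rate(x: float) -> float:
    """Hazard rate h(x) = 1 / m(x) = f(x) / F-bar(x)."""
    return 1.0 / mills_ratio(x)
```

At $x=0$ this gives $m(0)={\tfrac {1}{2}}{\sqrt {2\pi }}={\sqrt {\pi /2}}\approx 1.2533$, and by construction $m(x)\,h(x)=1$ for every $x$.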

Upper and lower bounds

When ${\displaystyle X}$ has a standard normal distribution then the following bounds hold for ${\displaystyle x>0}$:

${\displaystyle {\frac {x}{x^{2}+1}}<m(x)<{\frac {1}{x}}.}$[4][5]
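These bounds can be checked numerically for a few positive arguments; the sketch below rebuilds the standard-normal Mills ratio from stdlib functions (names are illustrative):

```python
import math

def mills_ratio(x: float) -> float:
    # m(x) for a standard normal X: survival function over density
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    sf = 0.5 * math.erfc(x / math.sqrt(2.0))
    return sf / pdf

for x in [0.5, 1.0, 2.0, 5.0]:
    m = mills_ratio(x)
    # lower bound x/(x^2 + 1) and upper bound 1/x, valid for x > 0
    assert x / (x * x + 1.0) < m < 1.0 / x
```

Both bounds tighten as $x$ grows, which is consistent with the asymptotic behaviour described in the next section.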

Example

If ${\displaystyle X}$ has standard normal distribution then

${\displaystyle m(x)\sim 1/x,\,}$

where the sign ${\displaystyle \sim }$ means that the quotient of the two functions converges to 1 as ${\displaystyle x\to +\infty }$, see Q-function for details. More precise asymptotics can be given.[6]
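The convergence of $x\,m(x)$ to 1 can be observed numerically with the same stdlib construction as above (function name is illustrative):

```python
import math

def mills_ratio(x: float) -> float:
    # m(x) for a standard normal X
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    sf = 0.5 * math.erfc(x / math.sqrt(2.0))
    return sf / pdf

# x * m(x) approaches 1 as x -> infinity
for x in [2.0, 5.0, 10.0]:
    print(x, x * mills_ratio(x))
```

For example, $2\,m(2)\approx 0.843$ while $10\,m(10)\approx 0.990$, illustrating the convergence.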

Inverse Mills ratio

The inverse Mills ratio is the ratio of the probability density function to the complementary cumulative distribution function of a distribution. Its use is often motivated by the following property of the truncated normal distribution. If X is a random variable having a normal distribution with mean μ and variance σ², then

${\displaystyle {\begin{aligned}&\operatorname {E} [\,X\,|\ X>\alpha \,]=\mu +\sigma {\frac {\phi {\big (}{\tfrac {\alpha -\mu }{\sigma }}{\big )}}{1-\Phi {\big (}{\tfrac {\alpha -\mu }{\sigma }}{\big )}}},\\&\operatorname {E} [\,X\,|\ X<\alpha \,]=\mu -\sigma {\frac {\phi {\big (}{\tfrac {\alpha -\mu }{\sigma }}{\big )}}{\Phi {\big (}{\tfrac {\alpha -\mu }{\sigma }}{\big )}}},\end{aligned}}}$

where ${\displaystyle \alpha }$ is a constant, ${\displaystyle \phi }$ denotes the standard normal density function, and ${\displaystyle \Phi }$ is the standard normal cumulative distribution function. The two fractions are the inverse Mills ratios.[7]
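The first identity can be verified by comparing the closed form against a direct numerical integration of $\int _{\alpha }^{\infty }x\,f(x)\,dx\,/\int _{\alpha }^{\infty }f(x)\,dx$. A stdlib-only sketch (function names and the integration cutoff at $\mu +12\sigma$ are choices made here, not part of the identity):

```python
import math

def phi(z: float) -> float:
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z: float) -> float:
    """Standard normal CDF via erfc."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def truncated_mean_formula(mu: float, sigma: float, alpha: float) -> float:
    """E[X | X > alpha] from the inverse Mills ratio identity."""
    a = (alpha - mu) / sigma
    return mu + sigma * phi(a) / (1.0 - Phi(a))

def truncated_mean_numeric(mu: float, sigma: float, alpha: float, n: int = 100_000) -> float:
    """E[X | X > alpha] by trapezoidal integration over [alpha, mu + 12 sigma]."""
    lo, hi = alpha, mu + 12.0 * sigma
    h = (hi - lo) / n
    num = den = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0      # trapezoid end weights; h cancels in the ratio
        f = phi((x - mu) / sigma) / sigma    # N(mu, sigma^2) density
        num += w * x * f
        den += w * f
    return num / den

# Example values mu = 1, sigma = 2, alpha = 0.5 are arbitrary illustrations
print(truncated_mean_formula(1.0, 2.0, 0.5))
```

For the standard normal truncated at zero, the formula reduces to $\operatorname {E} [X\mid X>0]=\phi (0)/(1-\Phi (0))={\sqrt {2/\pi }}\approx 0.798$.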

Use in regression

A common application of the inverse Mills ratio (sometimes also called the “non-selection hazard”) arises in regression analysis to account for possible selection bias. If the dependent variable is censored (i.e., a positive outcome is not observed for all observations), the censoring causes a concentration of observations at zero. This problem was first acknowledged by Tobin (1958), who showed that if it is not taken into account in the estimation procedure, ordinary least squares will produce biased parameter estimates.[8] With censored dependent variables, the Gauss–Markov assumption of zero correlation between the independent variables and the error term is violated.[9]

James Heckman proposed a two-stage estimation procedure using the inverse Mills ratio to correct for the selection bias.[10][11] In the first step, the probability of observing a positive outcome of the dependent variable is modeled with a probit model. The inverse Mills ratio must be generated from the estimation of a probit model; a logit model cannot be used, because the probit model assumes that the error term follows a standard normal distribution.[10] The estimated parameters are used to calculate the inverse Mills ratio, which is then included as an additional explanatory variable in the second-stage OLS estimation.[12]
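The second-step correction term can be sketched as follows. Assuming the first-stage probit has already been estimated (the fitted indices below are hypothetical placeholders, not real estimates), each selected observation receives $\lambda (z_{i})=\phi (z_{i})/\Phi (z_{i})$ as an extra regressor:

```python
import math

def inverse_mills_ratio(z: float) -> float:
    """lambda(z) = phi(z) / Phi(z), the Heckman correction term for a fitted probit index z."""
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * math.erfc(-z / math.sqrt(2.0))
    return pdf / cdf

# Hypothetical fitted probit indices z_i = w_i' gamma_hat for the selected sample
fitted_indices = [-0.8, 0.1, 1.3]
lambdas = [inverse_mills_ratio(z) for z in fitted_indices]
# Each lambda_i would be appended as an additional column in the second-stage OLS design matrix
print(lambdas)
```

Note that $\lambda (0)={\sqrt {2/\pi }}\approx 0.798$ and that $\lambda$ decreases in $z$: observations that were less likely to be selected receive a larger correction.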

References

1. ^ Grimmett, G.; Stirzaker, D. (2001). Probability and Random Processes (3rd ed.). Oxford University Press. p. 98. ISBN 0-19-857223-9.
2. ^ Mills, John P. (1926). "Table of the Ratio: Area to Bounding Ordinate, for Any Portion of Normal Curve". Biometrika. 18 (3/4): 395–400. doi:10.1093/biomet/18.3-4.395. JSTOR 2331957.
3. ^ Klein, J. P.; Moeschberger, M. L. (2003). Survival Analysis: Techniques for Censored and Truncated Data. New York: Springer. p. 27. ISBN 0-387-95399-X.
4. ^ "Upper & lower bounds for the normal distribution function". www.johndcook.com. 2018-06-02. Retrieved 2023-12-20.
5. ^ Wainwright, M. J. (2019). High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge: Cambridge University Press. doi:10.1017/9781108627771.
6. ^ Small, Christopher G. (2010). Expansions and Asymptotics for Statistics. Monographs on Statistics & Applied Probability. Vol. 115. CRC Press. pp. 48, 50–51, 88–90. ISBN 978-1-4200-1102-9.
7. ^ Greene, W. H. (2003). Econometric Analysis (Fifth ed.). Prentice-Hall. p. 759. ISBN 0-13-066189-9.
8. ^ Tobin, J. (1958). "Estimation of relationships for limited dependent variables" (PDF). Econometrica. 26 (1): 24–36. doi:10.2307/1907382. JSTOR 1907382.
9. ^ Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 366–368. ISBN 0-674-00560-0.
10. ^ a b Heckman, J. J. (1979). "Sample Selection as a Specification Error". Econometrica. 47 (1): 153–161. doi:10.2307/1912352. JSTOR 1912352.
11. ^ Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 368–373. ISBN 0-674-00560-0.
12. ^ Heckman, J. J. (1976). "The common structure of statistical models of truncation, sample selection and limited dependent variables and a simple estimator for such models". Annals of Economic and Social Measurement. 5 (4): 475–492.