Leverage (statistics)


In statistics and in particular in regression analysis, leverage is a measure of how far away the independent variable values of an observation are from those of the other observations.

High-leverage points, if any, are observations made at extreme or outlying values of the independent variables, such that the lack of neighboring observations means that the fitted regression model will pass close to that particular observation.[1]

Modern computer packages for statistical analysis include, as part of their facilities for regression analysis, various quantitative measures for identifying influential observations: among these measures is partial leverage, a measure of how a variable contributes to the leverage of a datum.

Linear regression model

Definition

In the linear regression model, the leverage score for the  i^{th} data unit is defined as:

 h_{ii}=(H)_{ii},

the  i^{th} diagonal element of the hat matrix  H=X(X^{\top}X)^{-1}X^{\top}, where ^{\top} denotes the matrix transpose.

The leverage score is also known as the observation self-sensitivity or self-influence,[2] as shown by

h_{ii} = \frac{\partial\hat{y}_i}{\partial y_i},

where \hat{y}_i and {y}_i are the fitted and measured observations, respectively.
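
The leverage scores can be computed directly from the design matrix. Below is a minimal NumPy sketch (the data and variable names are purely illustrative):

```python
import numpy as np

# Illustrative data: n = 20 observations, design matrix with an intercept column.
rng = np.random.default_rng(0)
x = rng.normal(size=20)
X = np.column_stack([np.ones_like(x), x])   # n x p design matrix

# Hat matrix H = X (X^T X)^{-1} X^T and its diagonal, the leverage scores h_ii.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

print(leverage.min(), leverage.max())       # each h_ii lies in [0, 1]
print(leverage.sum())                       # trace(H) = p, the number of columns of X
```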

Bounds on leverage

 0 \leq h_{ii} \leq 1 .

Proof

First, note that H is an idempotent matrix:  H^2=X(X^{\top}X)^{-1}X^{\top}X(X^{\top}X)^{-1}X^{\top}=XI(X^{\top}X)^{-1}X^{\top}=H . Also, observe that  H is symmetric. So, equating the  i^{th} diagonal element of  H to that of  H^2, we have

 h_{ii}=h_{ii}^2+\sum_{j\neq i}h_{ij}^2 \geq 0

and

 h_{ii} \geq h_{ii}^2 \implies h_{ii} \leq 1 .
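
To illustrate these bounds, in the special case of simple linear regression with an intercept the leverage of the  i^{th} observation reduces to

 h_{ii}=\frac{1}{n}+\frac{(x_i-\bar{x})^2}{\sum_{j=1}^{n}(x_j-\bar{x})^2},

which attains its minimum value 1/n when  x_i equals the sample mean  \bar{x} and approaches 1 as  x_i moves arbitrarily far from the other observations, consistent with the bounds above.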

Effect on residual variance

If we are in an ordinary least squares setting with fixed X, regression errors \epsilon_i, and

 Y=X\beta+\epsilon
\operatorname{Var}(\epsilon)=\sigma^2I

then  \operatorname{Var}(e_i)=(1-h_{ii})\sigma^2 where  e_i=Y_i-\hat{Y}_i (the  i^{th} regression residual).

In other words, if the model errors  \epsilon are homoscedastic, an observation's leverage score determines the degree of noise in the model's misprediction of that observation.

Proof

First, note that  I-H is idempotent and symmetric. This gives,

 \operatorname{Var}(e)=\operatorname{Var}((I-H)Y)=(I-H)\operatorname{Var}(Y)(I-H)^{\top}=\sigma^2(I-H)^2=\sigma^2(I-H).

Thus  \operatorname{Var}(e_i)=(1-h_{ii})\sigma^2 .
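
This relationship can be checked numerically. The sketch below (with arbitrary illustrative parameters) simulates many realizations of homoscedastic errors for a fixed design and compares the empirical variance of each residual with (1-h_{ii})\sigma^2:

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 15, 2.0
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta = np.array([1.0, 0.5])

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

# Simulate Y = X beta + eps with Var(eps) = sigma^2 I, many times,
# and form the residuals e = (I - H) Y for every realization.
reps = 200_000
eps = rng.normal(scale=sigma, size=(reps, n))
Y = X @ beta + eps                     # each row is one realization of Y
E = Y @ (np.eye(n) - H)                # residuals, using the symmetry of I - H

# Empirical Var(e_i) should match (1 - h_ii) * sigma^2 up to Monte Carlo error.
print(np.allclose(E.var(axis=0), (1 - h) * sigma**2, rtol=0.05))
```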

Studentized residuals

The corresponding studentized residual, that is, the residual adjusted for its observation-specific residual variance, is then

t_i = \frac{e_i}{\widehat{\sigma}\sqrt{1-h_{ii}}}

where \widehat{\sigma} is an appropriate estimate of \sigma.
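
A minimal sketch of this computation, using the internally studentized form with \widehat{\sigma}^2=e^{\top}e/(n-p) (data and names are illustrative):

```python
import numpy as np

def studentized_residuals(X, y):
    """Internally studentized residuals t_i = e_i / (sigma_hat * sqrt(1 - h_ii))."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T
    h = np.diag(H)
    e = y - H @ y                           # residuals e = (I - H) y
    sigma_hat = np.sqrt(e @ e / (n - p))    # sigma estimated from the residual sum of squares
    return e / (sigma_hat * np.sqrt(1 - h))

# Illustrative usage on simulated data.
rng = np.random.default_rng(2)
x = rng.normal(size=30)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=30)
print(studentized_residuals(X, y))
```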


References

  1. ^ Everitt, B. S. (2002). Cambridge Dictionary of Statistics. Cambridge University Press. ISBN 0-521-81099-X. 
  2. ^ Cardinali, C. (June 2013). "Data Assimilation: Observation influence diagnostic of a data assimilation system" (PDF).