# Leverage (statistics)

In statistics and in particular in regression analysis, leverage is a measure of how far away the independent variable values of an observation are from those of the other observations. High-leverage points, if any, are outliers with respect to the independent variables. That is, high-leverage points have no neighboring points in ${\displaystyle \mathbb {R} ^{p}}$ space, where ${\displaystyle {p}}$ is the number of independent variables in a regression model. This makes the fitted model likely to pass close to a high-leverage observation.[1] Hence high-leverage points have the potential to cause large changes in the parameter estimates when they are deleted, i.e., to be influential points. Although an influential point will typically have high leverage, a high-leverage point is not necessarily an influential point. Leverage is typically defined as the diagonal elements of the hat matrix.

## Definition and interpretations

Consider the linear regression model ${\displaystyle {y}_{i}={\boldsymbol {x}}_{i}^{\top }{\boldsymbol {\beta }}+{\varepsilon }_{i}}$, ${\displaystyle i=1,\,2,\ldots ,\,n}$. That is, ${\displaystyle {\boldsymbol {y}}=\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }}}$, where ${\displaystyle \mathbf {X} }$ is the ${\displaystyle n\times p}$ design matrix whose rows correspond to the observations and whose columns correspond to the independent or explanatory variables. The leverage score for the ${\displaystyle {i}^{th}}$ independent observation ${\displaystyle {\boldsymbol {x}}_{i}}$ is given as:

${\displaystyle h_{ii}=\left[\mathbf {H} \right]_{ii}={\boldsymbol {x}}_{i}^{\top }\left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}{\boldsymbol {x}}_{i}}$, the ${\displaystyle {i}^{th}}$ diagonal element of the orthogonal projection matrix (a.k.a. the hat matrix) ${\displaystyle \mathbf {H} =\mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }}$.

Thus the ${\displaystyle {i}^{th}}$ leverage score can be viewed as the 'weighted' distance of ${\displaystyle {\boldsymbol {x}}_{i}}$ from the mean of the ${\displaystyle {\boldsymbol {x}}_{i}}$'s (see its relation with Mahalanobis distance). It can also be interpreted as the degree by which the ${\displaystyle {i}^{th}}$ measured (dependent) value (i.e., ${\displaystyle y_{i}}$) influences the ${\displaystyle {i}^{th}}$ fitted (predicted) value (i.e., ${\displaystyle {\widehat {y\,}}_{i}}$): mathematically,

${\displaystyle h_{ii}={\frac {\partial {\widehat {y\,}}_{i}}{\partial y_{i}}}}$.

Hence, the leverage score is also known as the observation self-sensitivity or self-influence.[2] Using the fact that ${\displaystyle {\boldsymbol {\widehat {y}}}={\mathbf {H} }{\boldsymbol {y}}}$ (i.e., the prediction ${\displaystyle {\boldsymbol {\widehat {y}}}}$ is the orthogonal projection of ${\displaystyle {\boldsymbol {y}}}$ onto the range space of ${\displaystyle \mathbf {X} }$) in the above expression, we get ${\displaystyle h_{ii}=\left[\mathbf {H} \right]_{ii}}$. Note that this leverage depends on the values of the explanatory variables ${\displaystyle (\mathbf {X} )}$ of all observations but not on any of the values of the dependent variables ${\displaystyle (y_{i})}$.
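The definition above translates directly into a few lines of NumPy. The following is a minimal sketch on made-up data (the design matrix and its values are hypothetical, chosen only so that one row is visibly far from the others); it computes the leverages both as the diagonal of the hat matrix and row by row via ${\boldsymbol {x}}_{i}^{\top }(\mathbf {X} ^{\top }\mathbf {X} )^{-1}{\boldsymbol {x}}_{i}$.

```python
import numpy as np

# Hypothetical data: n = 5 observations, p = 2 parameters
# (an intercept column plus one explanatory variable; the last
# observation, x = 10, is far from the others).
X = np.column_stack([np.ones(5), np.array([1.0, 2.0, 3.0, 4.0, 10.0])])

# Hat matrix H = X (X^T X)^{-1} X^T; the leverages are its diagonal.
XtX_inv = np.linalg.inv(X.T @ X)
H = X @ XtX_inv @ X.T
leverage = np.diag(H)

# Equivalently, h_ii = x_i^T (X^T X)^{-1} x_i, computed row by row.
leverage_rowwise = np.einsum('ij,jk,ik->i', X, XtX_inv, X)
```

As expected, the extreme observation (the last row) gets the largest leverage, and neither computation uses the dependent variable at all.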

## Properties

1. The leverage  ${\displaystyle h_{ii}}$ is a number between 0 and 1, ${\displaystyle 0\leq h_{ii}\leq 1.}$ Proof: Note that ${\displaystyle \mathbf {H} }$ is an idempotent matrix (${\displaystyle \mathbf {H} ^{2}=\mathbf {H} }$) and symmetric (${\displaystyle h_{ij}=h_{ji}}$). Thus, by using the fact that ${\displaystyle \left[\mathbf {H} ^{2}\right]_{ii}=\left[\mathbf {H} \right]_{ii}}$, we have ${\displaystyle h_{ii}=h_{ii}^{2}+\sum _{j\neq i}h_{ij}^{2}}$. Since we know that ${\displaystyle \sum _{j\neq i}h_{ij}^{2}\geq 0}$, we have ${\displaystyle h_{ii}\geq h_{ii}^{2}\implies 0\leq h_{ii}\leq 1}$.
2. The sum of the leverages is equal to the number of parameters ${\displaystyle (p)}$ in ${\displaystyle {\boldsymbol {\beta }}}$ (including the intercept). Proof: ${\displaystyle \sum _{i=1}^{n}h_{ii}=\operatorname {Tr} (\mathbf {H} )=\operatorname {Tr} \left(\mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\mathbf {X} ^{\top }\right)=\operatorname {Tr} \left(\mathbf {X} ^{\top }\mathbf {X} \left(\mathbf {X} ^{\top }\mathbf {X} \right)^{-1}\right)=\operatorname {Tr} (\mathbf {I} _{p})=p}$.
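Both properties are easy to check numerically. The sketch below uses a randomly generated design matrix (the data is hypothetical) and verifies that every leverage lies in $[0,1]$ and that the leverages sum to $p$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 4
# Hypothetical design matrix: intercept column plus p-1 random covariates.
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

in_unit_interval = np.all((h >= 0) & (h <= 1))  # property 1
trace_equals_p = np.isclose(h.sum(), p)         # property 2
```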

## Determination of outliers in ${\displaystyle \mathbf {X} }$ using leverages

A large leverage ${\displaystyle {h_{ii}}}$ corresponds to an ${\displaystyle {{\boldsymbol {x}}_{i}}}$ that is extreme. A common rule is to identify observations whose leverage value ${\displaystyle {h}_{ii}}$ is more than 2 times larger than the mean leverage ${\displaystyle {\bar {h}}={\dfrac {1}{n}}\sum _{i=1}^{n}h_{ii}={\dfrac {p}{n}}}$ (see property 2 above). That is, if ${\displaystyle h_{ii}>2{\dfrac {p}{n}}}$, then ${\displaystyle {{\boldsymbol {x}}_{i}}}$ may be considered an outlier. Some statisticians prefer the threshold ${\displaystyle 3p/{n}}$ instead of ${\displaystyle 2p/{n}}$.
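The $2p/n$ rule can be sketched as follows on made-up data with one deliberately extreme covariate row (the data and the planted outlier are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
X[0, 1:] = 8.0  # plant one extreme covariate row

# Leverages h_ii = x_i^T (X^T X)^{-1} x_i.
h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)

# Flag observations whose leverage exceeds twice the mean leverage p/n.
flagged = np.flatnonzero(h > 2 * p / n)
```

The planted row is flagged because its covariates sit far from the bulk of the data, regardless of its response value.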

## Relation to Mahalanobis distance

Leverage is closely related to the Mahalanobis distance[3] (proof[4]). Specifically, for some ${\displaystyle n\times p}$ matrix ${\displaystyle \mathbf {X} }$, the squared Mahalanobis distance of ${\displaystyle {{\boldsymbol {x}}_{i}}}$ (where ${\displaystyle {\boldsymbol {x}}_{i}^{\top }}$ is the ${\displaystyle {i}^{th}}$ row of ${\displaystyle \mathbf {X} }$) from the mean vector ${\displaystyle {\widehat {\boldsymbol {\mu }}}={\dfrac {1}{n}}\sum _{i=1}^{n}{\boldsymbol {x}}_{i}}$ of length ${\displaystyle p}$ is ${\displaystyle D^{2}({\boldsymbol {x}}_{i})=({\boldsymbol {x}}_{i}-{\widehat {\boldsymbol {\mu }}})^{\top }\mathbf {S} ^{-1}({\boldsymbol {x}}_{i}-{\widehat {\boldsymbol {\mu }}})}$, where ${\displaystyle \mathbf {S} ={\dfrac {1}{n-1}}\sum _{i=1}^{n}({\boldsymbol {x}}_{i}-{\widehat {\boldsymbol {\mu }}})({\boldsymbol {x}}_{i}-{\widehat {\boldsymbol {\mu }}})^{\top }}$ is the sample covariance matrix of the ${\displaystyle {{\boldsymbol {x}}_{i}}}$'s. This is related to the leverage ${\displaystyle h_{ii}}$ of the hat matrix of ${\displaystyle \mathbf {X} }$ after appending a column vector of 1's to it. The relationship between the two is:

${\displaystyle D^{2}({\boldsymbol {x}}_{i})=(n-1)(h_{ii}-{\tfrac {1}{n}})}$

This relationship enables us to decompose leverage into meaningful components so that some sources of high leverage can be investigated analytically.[5]
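The identity can be verified numerically. In this sketch (hypothetical random data), the Mahalanobis distances use the sample covariance of the covariates, and the leverages come from the design matrix with a column of ones appended:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 3
X = rng.normal(size=(n, p))   # explanatory variables only, no intercept

# Squared Mahalanobis distance from the sample mean,
# using the sample covariance (1/(n-1) normalization).
mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)
Xc = X - mu
D2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S), Xc)

# Leverages of [1 | X], i.e. X with a column of 1's appended.
X1 = np.column_stack([np.ones(n), X])
h = np.einsum('ij,jk,ik->i', X1, np.linalg.inv(X1.T @ X1), X1)

# D^2(x_i) = (n - 1)(h_ii - 1/n) holds element-wise.
lhs, rhs = D2, (n - 1) * (h - 1 / n)
```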

## Relation to influence functions

In a regression context, we combine leverage and influence functions to compute the degree to which estimated coefficients would change if we removed a single data point. Denoting the regression residuals as ${\displaystyle {\widehat {e}}_{i}=y_{i}-{\boldsymbol {x}}_{i}^{\top }{\widehat {\boldsymbol {\beta }}}}$ , one can compare the estimated coefficient ${\displaystyle {\widehat {\boldsymbol {\beta }}}}$ to the leave-one-out estimated coefficient ${\displaystyle {\widehat {\boldsymbol {\beta }}}^{(-i)}}$ using the formula [6][7]

${\displaystyle {\widehat {\boldsymbol {\beta }}}-{\widehat {\boldsymbol {\beta }}}^{(-i)}={\frac {(\mathbf {X} ^{\top }\mathbf {X} )^{-1}{\boldsymbol {x}}_{i}{\widehat {e}}_{i}}{1-h_{ii}}}}$

Young (2019) uses a version of this formula after residualizing controls.[8] To gain intuition for this formula, note that ${\displaystyle {\frac {\partial {\hat {\beta }}}{\partial y_{i}}}=(\mathbf {X} ^{\top }\mathbf {X} )^{-1}{\boldsymbol {x}}_{i}}$ captures the potential for an observation to affect the regression parameters, and therefore ${\displaystyle (\mathbf {X} ^{\top }\mathbf {X} )^{-1}{\boldsymbol {x}}_{i}{\widehat {e}}_{i}}$ captures the actual influence of that observation's deviation from its fitted value on the regression parameters. The formula then divides by ${\displaystyle (1-h_{ii})}$ to account for the fact that we remove the observation rather than adjusting its value, reflecting the fact that removal changes the distribution of covariates more when applied to high-leverage observations (i.e. with outlier covariate values). Similar formulas arise when applying general formulas for statistical influence functions in the regression context.[9][10]
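The leave-one-out formula can be checked against an explicit refit. The sketch below (hypothetical random data, with hypothetical true coefficients chosen only to generate a response) compares the closed-form change in ${\widehat {\boldsymbol {\beta }}}$ from dropping one observation to the coefficients obtained by actually refitting without that row:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 25, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

# Full-sample OLS fit, residuals, and leverages.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)

i = 0
# Closed-form change from dropping observation i:
# (X^T X)^{-1} x_i e_i / (1 - h_ii).
delta = XtX_inv @ X[i] * resid[i] / (1 - h[i])

# Explicit refit without row i.
mask = np.arange(n) != i
beta_loo = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
```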

## Effect on residual variance

If we are in an ordinary least squares setting with fixed ${\displaystyle \mathbf {X} }$ and homoscedastic regression errors ${\displaystyle \varepsilon _{i},}$ ${\displaystyle {\boldsymbol {y}}=\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }};\ \ \operatorname {Var} ({\boldsymbol {\varepsilon }})=\sigma ^{2}\mathbf {I} }$, then the ${\displaystyle {i}^{th}}$ regression residual, ${\displaystyle e_{i}=y_{i}-{\widehat {y}}_{i}}$ has variance

${\displaystyle \operatorname {Var} (e_{i})=(1-h_{ii})\sigma ^{2}}$.

In other words, an observation's leverage score determines the degree of noise in the model's misprediction of that observation, with higher leverage leading to less noise. This follows from the fact that ${\displaystyle \mathbf {I} -\mathbf {H} }$ is idempotent and symmetric and ${\displaystyle {\widehat {\boldsymbol {y}}}=\mathbf {H} {\boldsymbol {y}}}$, hence, ${\displaystyle \operatorname {Var} ({\boldsymbol {e}})=\operatorname {Var} ((\mathbf {I} -\mathbf {H} ){\boldsymbol {y}})=(\mathbf {I} -\mathbf {H} )\operatorname {Var} ({\boldsymbol {y}})(\mathbf {I} -\mathbf {H} )^{\top }=\sigma ^{2}(\mathbf {I} -\mathbf {H} )^{2}=\sigma ^{2}(\mathbf {I} -\mathbf {H} )}$.

The corresponding studentized residual—the residual adjusted for its observation-specific estimated residual variance—is then

${\displaystyle t_{i}={e_{i} \over {\widehat {\sigma }}{\sqrt {1-h_{ii}\ }}}}$

where ${\displaystyle {\widehat {\sigma }}}$ is an appropriate estimate of ${\displaystyle \sigma }$.
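Putting the last two displays together, a minimal sketch on hypothetical simulated data (using the usual unbiased estimate ${\widehat {\sigma }}^{2}=\sum e_{i}^{2}/(n-p)$ as the "appropriate estimate" of $\sigma^2$) might look like:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 0.5 * X[:, 1] + rng.normal(size=n)

# OLS fit, residuals, and leverages.
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta
h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)

# Residual variance is (1 - h_ii) sigma^2, so divide each residual
# by its estimated standard deviation to studentize it.
sigma_hat = np.sqrt(e @ e / (n - p))
t = e / (sigma_hat * np.sqrt(1 - h))
```

High-leverage observations get their (systematically smaller) residuals inflated back to a comparable scale, which is why studentized residuals rather than raw residuals are used for outlier diagnostics.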

## Partial leverage

Partial leverage (PL) is a measure of the contribution of the individual independent variables to the total leverage of each observation. That is, PL is a measure of how ${\displaystyle h_{ii}}$ changes as a variable is added to the regression model. It is computed as:

${\displaystyle \left(\mathrm {PL} _{j}\right)_{i}={\frac {\left(\mathbf {X} _{j\bullet [j]}\right)_{i}^{2}}{\sum _{k=1}^{n}\left(\mathbf {X} _{j\bullet [j]}\right)_{k}^{2}}}}$

where ${\displaystyle j}$ is the index of the independent variable, ${\displaystyle i}$ is the index of the observation, and ${\displaystyle \mathbf {X} _{j\bullet [j]}}$ are the residuals from regressing ${\displaystyle \mathbf {X} _{j}}$ against the remaining independent variables. Note that the partial leverage is the leverage of the ${\displaystyle {i}^{th}}$ point in the partial regression plot for the ${\displaystyle {j}^{th}}$ variable. Data points with large partial leverage for an independent variable can exert undue influence on the selection of that variable in automatic regression model building procedures.
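The computation can be sketched as follows on hypothetical data: residualize the chosen column against the remaining columns, then normalize the squared residuals.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 30
# Hypothetical design matrix: intercept plus two covariates.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])

j = 2  # column whose partial leverage we want
others = np.delete(np.arange(X.shape[1]), j)

# Residualize column j against the remaining independent variables.
coef = np.linalg.lstsq(X[:, others], X[:, j], rcond=None)[0]
r = X[:, j] - X[:, others] @ coef

# (PL_j)_i = r_i^2 / sum_k r_k^2.
partial_leverage = r**2 / (r**2).sum()
```

By construction the partial leverages for a given variable are nonnegative and sum to 1 across observations.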

## Software implementations

Many programs and statistics packages, such as R and Python, include implementations of leverage.

| Language/Program | Function | Notes |
| --- | --- | --- |
| R | `hat(x, intercept = TRUE)` or `hatvalues(model, ...)` | See [1] |