Hotelling's T-squared distribution


In statistics, Hotelling's T-squared distribution (T²) is a multivariate probability distribution that is proportional to the F-distribution and arises as the distribution of a set of sample statistics that are natural generalizations of the statistics underlying Student's t-distribution. Hotelling's t-squared statistic (t²) is a generalization of Student's t-statistic that is used in multivariate hypothesis testing.[1]

Distribution

Motivation

The distribution arises in multivariate statistics in undertaking tests of the differences between the (multivariate) means of different populations, where tests for univariate problems would make use of a t-test. The distribution is named for Harold Hotelling, who developed it as a generalization of Student's t-distribution.[1]

Definition

If the vector ${\displaystyle d}$ is Gaussian multivariate-distributed with zero mean and unit covariance matrix ${\displaystyle N(\mathbf {0} _{p},\mathbf {I} _{p})}$, and ${\displaystyle \mathbf {M} }$ is a ${\displaystyle p\times p}$ matrix with a Wishart distribution ${\displaystyle W(\mathbf {I} _{p},m)}$ with unit scale matrix and ${\displaystyle m}$ degrees of freedom, then the quadratic form ${\displaystyle X=md^{T}\mathbf {M} ^{-1}d}$ has a Hotelling ${\displaystyle T^{2}(p,m)}$ distribution with dimensionality parameter ${\displaystyle p}$ and ${\displaystyle m}$ degrees of freedom.[2]

If a random variable X has Hotelling's T-squared distribution, ${\displaystyle X\sim T_{p,m}^{2}}$, then:[1]

${\displaystyle {\frac {m-p+1}{pm}}X\sim F_{p,m-p+1}}$

where ${\displaystyle F_{p,m-p+1}}$ is the F-distribution with parameters p and m−p+1.
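As a plausibility check, this relationship can be verified numerically. The following sketch is not part of the original article; it assumes NumPy and SciPy are available, and the dimensions, degrees of freedom, and seed are arbitrary. It draws the quadratic form ${\displaystyle md^{T}\mathbf {M} ^{-1}d}$ with ${\displaystyle d\sim N(\mathbf {0} _{p},\mathbf {I} _{p})}$ and ${\displaystyle \mathbf {M} \sim W(\mathbf {I} _{p},m)}$, rescales it by (m − p + 1)/(pm), and compares the result with the F(p, m − p + 1) distribution.

```python
# Monte Carlo sketch (illustrative, with assumed parameter values):
# check that ((m - p + 1) / (p * m)) * X matches F(p, m - p + 1)
# when X = m * d' M^{-1} d, d ~ N_p(0, I_p), M ~ Wishart(I_p, m).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p, m, n_sim = 3, 20, 20000
wish = stats.wishart(df=m, scale=np.eye(p))

draws = np.empty(n_sim)
for i in range(n_sim):
    d = rng.standard_normal(p)                      # d ~ N_p(0, I_p)
    M = wish.rvs(random_state=rng)                  # M ~ W(I_p, m)
    draws[i] = m * d @ np.linalg.solve(M, d)        # X ~ T^2(p, m)

scaled = (m - p + 1) / (p * m) * draws              # should be ~ F(p, m - p + 1)
print(stats.kstest(scaled, stats.f(p, m - p + 1).cdf))   # large p-value expected
```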

Statistic

The definition of this multivariate sample statistic follows after it is motivated using a simpler problem.

Motivation

Let ${\displaystyle {\mathcal {N}}_{p}({\boldsymbol {\mu }},{\mathbf {\Sigma } })}$ denote a p-variate normal distribution with location ${\displaystyle {\boldsymbol {\mu }}}$ and known covariance ${\displaystyle {\mathbf {\Sigma } }}$. Let

${\displaystyle {\mathbf {x} }_{1},\dots ,{\mathbf {x} }_{n}\sim {\mathcal {N}}_{p}({\boldsymbol {\mu }},{\mathbf {\Sigma } })}$

be n independent identically distributed (iid) random variables, which may be represented as ${\displaystyle p\times 1}$ column vectors of real numbers. Define

${\displaystyle {\overline {\mathbf {x} }}={\frac {\mathbf {x} _{1}+\cdots +\mathbf {x} _{n}}{n}}}$

to be the sample mean with covariance ${\displaystyle {\mathbf {\Sigma } }_{\bar {\mathbf {x} }}={\mathbf {\Sigma } }/n}$. It can be shown that

${\displaystyle ({\bar {\mathbf {x} }}-{\boldsymbol {\mu }})'{\mathbf {\Sigma } }_{\bar {\mathbf {x} }}^{-1}({\bar {\mathbf {x} }}-{\boldsymbol {\mathbf {\mu } }})\sim \chi _{p}^{2},}$

where ${\displaystyle \chi _{p}^{2}}$ is the chi-squared distribution with p degrees of freedom.

Proof —

To show this, use the fact that ${\displaystyle {\overline {\mathbf {x} }}\sim {\mathcal {N}}_{p}({\boldsymbol {\mu }},{\mathbf {\Sigma } }_{\bar {\mathbf {x} }})}$ and derive the characteristic function of the random variable ${\displaystyle \mathbf {y} =n({\bar {\mathbf {x} }}-{\boldsymbol {\mu }})'{\mathbf {\Sigma } }^{-1}({\bar {\mathbf {x} }}-{\boldsymbol {\mathbf {\mu } }})}$. This is done below:

{\displaystyle {\begin{aligned}&\varphi _{\mathbf {y} }(\theta )=\operatorname {E} e^{i\theta \mathbf {y} },\\[5pt]={}&\operatorname {E} e^{i\theta n({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})'{\mathbf {\Sigma } }^{-1}({\overline {\mathbf {x} }}-{\boldsymbol {\mathbf {\mu } }})}\\[5pt]={}&\int e^{i\theta n({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})'{\mathbf {\Sigma } }^{-1}({\overline {\mathbf {x} }}-{\boldsymbol {\mathbf {\mu } }})}(2\pi )^{-p/2}|{\boldsymbol {\Sigma }}/n|^{-1/2}\,e^{-(1/2)n({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})'{\boldsymbol {\Sigma }}^{-1}({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})}\,dx_{1}\cdots dx_{p}\\[5pt]={}&\int (2\pi )^{-p/2}|{\boldsymbol {\Sigma }}/n|^{-1/2}\,e^{-(1/2)n({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})'({\boldsymbol {\Sigma }}^{-1}-2i\theta {\boldsymbol {\Sigma }}^{-1})({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})}\,dx_{1}\cdots dx_{p},\\[5pt]={}&|({\boldsymbol {\Sigma }}^{-1}-2i\theta {\boldsymbol {\Sigma }}^{-1})^{-1}/n|^{1/2}|{\boldsymbol {\Sigma }}/n|^{-1/2}\int (2\pi )^{-p/2}|({\boldsymbol {\Sigma }}^{-1}-2i\theta {\boldsymbol {\Sigma }}^{-1})^{-1}/n|^{-1/2}\,e^{-(1/2)n({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})'({\boldsymbol {\Sigma }}^{-1}-2i\theta {\boldsymbol {\Sigma }}^{-1})({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})}\,dx_{1}\cdots dx_{p}.\end{aligned}}}

The remaining integral equals 1, since the integrand is a multivariate normal density with covariance matrix ${\displaystyle ({\boldsymbol {\Sigma }}^{-1}-2i\theta {\boldsymbol {\Sigma }}^{-1})^{-1}/n}$, and the ratio of determinants simplifies, giving

{\displaystyle {\begin{aligned}\varphi _{\mathbf {y} }(\theta )&=|(\mathbf {I} _{p}-2i\theta \mathbf {I} _{p})|^{-1/2}\\[5pt]&=(1-2i\theta )^{-p/2},\end{aligned}}}

which is the characteristic function of the ${\displaystyle \chi _{p}^{2}}$ distribution. ${\displaystyle \blacksquare }$
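The chi-squared result can also be illustrated by simulation. The following sketch (an illustration with assumed parameter values, not from the article) repeatedly draws n observations from a p-variate normal with known covariance, forms the quadratic form using ${\displaystyle {\mathbf {\Sigma } }_{\bar {\mathbf {x} }}={\mathbf {\Sigma } }/n}$, and compares its distribution with ${\displaystyle \chi _{p}^{2}}$.

```python
# Simulation sketch (assumed setup): with known covariance Sigma, the quadratic
# form (xbar - mu)' (Sigma/n)^{-1} (xbar - mu) should follow chi-squared with p df.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
p, n, n_sim = 4, 50, 10000
mu = np.array([1.0, -2.0, 0.5, 3.0])
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)          # an arbitrary positive-definite covariance

q = np.empty(n_sim)
for i in range(n_sim):
    x = rng.multivariate_normal(mu, Sigma, size=n)
    diff = x.mean(axis=0) - mu
    q[i] = n * diff @ np.linalg.solve(Sigma, diff)   # (xbar-mu)' (Sigma/n)^{-1} (xbar-mu)

print(stats.kstest(q, stats.chi2(p).cdf))            # large p-value expected
```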

Definition

The covariance matrix ${\displaystyle {\mathbf {\Sigma } }}$ used above is often unknown. Here we use instead the sample covariance:

${\displaystyle {\hat {\mathbf {\Sigma } }}={\frac {1}{n-1}}\sum _{i=1}^{n}(\mathbf {x} _{i}-{\overline {\mathbf {x} }})(\mathbf {x} _{i}-{\overline {\mathbf {x} }})'}$

where we denote transpose by an apostrophe. It can be shown that ${\displaystyle {\hat {\mathbf {\Sigma } }}}$ is a positive (semi-)definite matrix and that ${\displaystyle (n-1){\hat {\mathbf {\Sigma } }}}$ follows a p-variate Wishart distribution with n−1 degrees of freedom.[3] The sample covariance matrix of the mean is ${\displaystyle {\hat {\mathbf {\Sigma } }}_{\overline {\mathbf {x} }}={\hat {\mathbf {\Sigma } }}/n}$.

The Hotelling's t-squared statistic is then defined as:[4]

${\displaystyle t^{2}=({\overline {\mathbf {x} }}-{\boldsymbol {\mu }})'{\hat {\mathbf {\Sigma } }}_{\overline {\mathbf {x} }}^{-1}({\overline {\mathbf {x} }}-{\boldsymbol {\mathbf {\mu } }})}$

Moreover, this statistic follows the Hotelling T-squared distribution defined above:

${\displaystyle t^{2}\sim T_{p,n-1}^{2}={\frac {p(n-1)}{n-p}}F_{p,n-p},}$

where ${\displaystyle F_{p,n-p}}$ is the F-distribution with parameters p and n − p. To calculate a p-value (which is unrelated to the dimensionality parameter p here), observe that the distribution of ${\displaystyle t^{2}}$ equivalently implies that

${\displaystyle {\frac {n-p}{p(n-1)}}t^{2}\sim F_{p,n-p}.}$

Then, use the quantity on the left hand side to evaluate the p-value corresponding to the sample, which comes from the F-distribution.
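A minimal one-sample implementation of this procedure, assuming NumPy and SciPy, might look as follows; the function name and interface are illustrative rather than taken from any standard library.

```python
# Sketch of a one-sample Hotelling's t-squared test following the formulas above.
import numpy as np
from scipy import stats

def hotelling_one_sample(x, mu0):
    """x: (n, p) data matrix; mu0: length-p hypothesized mean vector."""
    x = np.asarray(x, dtype=float)
    n, p = x.shape
    diff = x.mean(axis=0) - np.asarray(mu0, dtype=float)
    S = np.cov(x, rowvar=False, ddof=1)            # sample covariance (denominator n-1)
    t2 = n * diff @ np.linalg.solve(S, diff)       # t^2 = (xbar-mu0)' (S/n)^{-1} (xbar-mu0)
    f_stat = (n - p) / (p * (n - 1)) * t2          # ~ F(p, n-p) under H0
    p_value = stats.f.sf(f_stat, p, n - p)
    return t2, f_stat, p_value

rng = np.random.default_rng(2)
data = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3), size=30)
print(hotelling_one_sample(data, [0.0, 0.0, 0.0]))
```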

Two-sample statistic

If ${\displaystyle {\mathbf {x} }_{1},\dots ,{\mathbf {x} }_{n_{x}}\sim N_{p}({\boldsymbol {\mu }},{\mathbf {V} })}$ and ${\displaystyle {\mathbf {y} }_{1},\dots ,{\mathbf {y} }_{n_{y}}\sim N_{p}({\boldsymbol {\mu }},{\mathbf {V} })}$, with the samples independently drawn from two multivariate normal distributions with the same mean and covariance, and we define

${\displaystyle {\overline {\mathbf {x} }}={\frac {1}{n_{x}}}\sum _{i=1}^{n_{x}}\mathbf {x} _{i}\qquad {\overline {\mathbf {y} }}={\frac {1}{n_{y}}}\sum _{i=1}^{n_{y}}\mathbf {y} _{i}}$

as the sample means, and

${\displaystyle {\hat {\mathbf {\Sigma } }}_{\mathbf {x} }={\frac {1}{n_{x}-1}}\sum _{i=1}^{n_{x}}(\mathbf {x} _{i}-{\overline {\mathbf {x} }})(\mathbf {x} _{i}-{\overline {\mathbf {x} }})'}$
${\displaystyle {\hat {\mathbf {\Sigma } }}_{\mathbf {y} }={\frac {1}{n_{y}-1}}\sum _{i=1}^{n_{y}}(\mathbf {y} _{i}-{\overline {\mathbf {y} }})(\mathbf {y} _{i}-{\overline {\mathbf {y} }})'}$

as the respective sample covariance matrices. Then

${\displaystyle {\hat {\mathbf {\Sigma } }}={\frac {(n_{x}-1){\hat {\mathbf {\Sigma } }}_{\mathbf {x} }+(n_{y}-1){\hat {\mathbf {\Sigma } }}_{\mathbf {y} }}{n_{x}+n_{y}-2}}}$

is the unbiased pooled covariance matrix estimate (an extension of pooled variance).

Finally, the Hotelling's two-sample t-squared statistic is

${\displaystyle t^{2}={\frac {n_{x}n_{y}}{n_{x}+n_{y}}}({\overline {\mathbf {x} }}-{\overline {\mathbf {y} }})'{\hat {\mathbf {\Sigma } }}^{-1}({\overline {\mathbf {x} }}-{\overline {\mathbf {y} }})\sim T^{2}(p,n_{x}+n_{y}-2)}$

Related concepts

It can be related to the F-distribution by[3]

${\displaystyle {\frac {n_{x}+n_{y}-p-1}{(n_{x}+n_{y}-2)p}}t^{2}\sim F(p,n_{x}+n_{y}-1-p).}$
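Putting the two-sample statistic and this F relation together, a hedged sketch of a two-sample test (again assuming NumPy/SciPy, with an illustrative function name and arbitrary example data) is:

```python
# Sketch of the two-sample statistic with the pooled covariance defined above.
import numpy as np
from scipy import stats

def hotelling_two_sample(x, y):
    """x: (n_x, p) and y: (n_y, p) data matrices from the two samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, p = x.shape
    ny = y.shape[0]
    diff = x.mean(axis=0) - y.mean(axis=0)
    S_pooled = ((nx - 1) * np.cov(x, rowvar=False) +
                (ny - 1) * np.cov(y, rowvar=False)) / (nx + ny - 2)
    t2 = (nx * ny) / (nx + ny) * diff @ np.linalg.solve(S_pooled, diff)
    f_stat = (nx + ny - p - 1) / ((nx + ny - 2) * p) * t2    # ~ F(p, nx+ny-p-1) under H0
    p_value = stats.f.sf(f_stat, p, nx + ny - p - 1)
    return t2, f_stat, p_value

rng = np.random.default_rng(3)
x = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=25)
y = rng.multivariate_normal([0.5, 0.0], np.eye(2), size=30)
print(hotelling_two_sample(x, y))
```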

The non-null distribution of this statistic is the noncentral F-distribution (the ratio of a noncentral chi-squared random variable to an independent central chi-squared random variable, each divided by its degrees of freedom)

${\displaystyle {\frac {n_{x}+n_{y}-p-1}{(n_{x}+n_{y}-2)p}}t^{2}\sim F(p,n_{x}+n_{y}-1-p;\delta ),}$

with

${\displaystyle \delta ={\frac {n_{x}n_{y}}{n_{x}+n_{y}}}{\boldsymbol {\nu }}'\mathbf {V} ^{-1}{\boldsymbol {\nu }},}$

where ${\displaystyle {\boldsymbol {\nu }}={\boldsymbol {\mu }}_{x}-{\boldsymbol {\mu }}_{y}}$ is the difference vector between the population means.
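Because the non-null distribution is a noncentral F with noncentrality parameter ${\displaystyle \delta }$, the power of the two-sample test can be computed directly. The following sketch assumes illustrative values for the mean difference ${\displaystyle {\boldsymbol {\nu }}}$, the common covariance ${\displaystyle \mathbf {V} }$, and the sample sizes.

```python
# Power calculation sketch for the two-sample test via the noncentral F-distribution.
import numpy as np
from scipy import stats

p, nx, ny, alpha = 2, 25, 30, 0.05
nu = np.array([0.5, 0.3])                   # assumed true mean difference mu_x - mu_y
V = np.array([[1.0, 0.3], [0.3, 1.0]])      # assumed common covariance
delta = (nx * ny) / (nx + ny) * nu @ np.linalg.solve(V, nu)   # noncentrality parameter

dfn, dfd = p, nx + ny - 1 - p
f_crit = stats.f.ppf(1 - alpha, dfn, dfd)             # rejection threshold under H0
power = stats.ncf.sf(f_crit, dfn, dfd, delta)         # P(reject) under the alternative
print(delta, power)
```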

In the two-variable case, the formula simplifies nicely, making it easy to see how the correlation, ${\displaystyle \rho }$, between the variables affects ${\displaystyle t^{2}}$. If we define

${\displaystyle d_{1}={\overline {x}}_{1}-{\overline {y}}_{1},\qquad d_{2}={\overline {x}}_{2}-{\overline {y}}_{2}}$

and

${\displaystyle s_{1}={\sqrt {{\hat {\Sigma }}_{11}}}\qquad s_{2}={\sqrt {{\hat {\Sigma }}_{22}}}\qquad \rho ={\hat {\Sigma }}_{12}/(s_{1}s_{2})={\hat {\Sigma }}_{21}/(s_{1}s_{2})}$

in terms of the entries of the pooled covariance matrix estimate ${\displaystyle {\hat {\mathbf {\Sigma } }}}$,

then

${\displaystyle t^{2}={\frac {n_{x}n_{y}}{(n_{x}+n_{y})(1-\rho ^{2})}}\left[\left({\frac {d_{1}}{s_{1}}}\right)^{2}+\left({\frac {d_{2}}{s_{2}}}\right)^{2}-2\rho \left({\frac {d_{1}}{s_{1}}}\right)\left({\frac {d_{2}}{s_{2}}}\right)\right]}$

Thus, if the two components of the difference vector ${\displaystyle ({\overline {\mathbf {x} }}-{\overline {\mathbf {y} }})}$ have the same sign, then, in general, ${\displaystyle t^{2}}$ becomes smaller as ${\displaystyle \rho }$ becomes more positive. If the differences are of opposite sign, ${\displaystyle t^{2}}$ becomes larger as ${\displaystyle \rho }$ becomes more positive.
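This effect can be seen numerically. The following small sketch (with arbitrary assumed values for the standardized differences and sample sizes) evaluates the two-variable formula for several values of ${\displaystyle \rho }$.

```python
# Numerical illustration of how rho changes t^2 in the two-variable formula.
import numpy as np

def t2_bivariate(d1, d2, s1, s2, rho, nx, ny):
    z1, z2 = d1 / s1, d2 / s2
    return (nx * ny) / ((nx + ny) * (1 - rho**2)) * (z1**2 + z2**2 - 2 * rho * z1 * z2)

nx = ny = 20
for rho in (-0.5, 0.0, 0.5, 0.9):
    same = t2_bivariate(0.4, 0.3, 1.0, 1.0, rho, nx, ny)       # differences of the same sign
    opposite = t2_bivariate(0.4, -0.3, 1.0, 1.0, rho, nx, ny)  # differences of opposite sign
    print(f"rho={rho:+.1f}  same-sign t2={same:6.2f}  opposite-sign t2={opposite:6.2f}")
```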

A univariate special case can be found in Welch's t-test.

More robust and powerful tests than Hotelling's two-sample test have been proposed in the literature; see, for example, the interpoint distance based tests, which can also be applied when the number of variables is comparable with, or even larger than, the number of subjects.[5][6]

References

1. Hotelling, H. (1931). "The generalization of Student's ratio". Annals of Mathematical Statistics. 2 (3): 360–378. doi:10.1214/aoms/1177732979.
2. Weisstein, Eric W. "Hotelling T-Squared Distribution". MathWorld.
3. Mardia, K. V.; Kent, J. T.; Bibby, J. M. (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8.
4. ^
5. Marozzi, M. (2016). "Multivariate tests based on interpoint distances with application to magnetic resonance imaging". Statistical Methods in Medical Research. 25 (6): 2593–2610. doi:10.1177/0962280214529104. PMID 24740998.
6. Marozzi, M. (2015). "Multivariate multidistance tests for high-dimensional low sample size case-control studies". Statistics in Medicine. 34 (9): 1511–1526. doi:10.1002/sim.6418. PMID 25630579.