# Welch's t-test

In statistics, Welch's t-test, or unequal variances t-test, is a two-sample location test used to test the hypothesis that two populations have equal means. It is named for its creator, Bernard Lewis Welch, and is an adaptation of Student's t-test that is more reliable when the two samples have unequal variances and/or unequal sample sizes. These tests are often referred to as "unpaired" or "independent samples" t-tests, as they are typically applied when the statistical units underlying the two samples being compared are non-overlapping. Given that Welch's t-test has been less popular than Student's t-test and may be less familiar to readers, a more informative name is "Welch's unequal variances t-test" — or "unequal variances t-test" for brevity.

## Assumptions

Student's t-test assumes that the sample means being compared for two populations are normally distributed, and that the populations have equal variances. Welch's t-test is designed for unequal population variances, but the assumption of normality is maintained. Welch's t-test is an approximate solution to the Behrens–Fisher problem.

## Calculations

Welch's t-test defines the statistic t by the following formula:

$t = {\frac {\Delta {\overline {X}}}{s_{\Delta {\bar {X}}}}} = {\frac {{\overline {X}}_{1}-{\overline {X}}_{2}}{\sqrt {s_{{\bar {X}}_{1}}^{2}+s_{{\bar {X}}_{2}}^{2}}}}$

where

$s_{{\bar {X}}_{i}} = {\frac {s_{i}}{\sqrt {N_{i}}}}$

Here ${\overline {X}}_{i}$ and $s_{{\bar {X}}_{i}}$ are the $i^{\text{th}}$ sample mean and its standard error, for a given sample standard deviation $s_{i}$ and sample size $N_{i}$. Unlike in Student's t-test, the denominator is not based on a pooled variance estimate.
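As an illustration of this formula, the statistic can be computed directly from two samples using only standard-library functions (a minimal Python sketch; the function name and data are illustrative, not from the source):

```python
import math
import statistics

def welch_t(sample1, sample2):
    """Welch's t statistic: the difference of sample means divided by
    the unpooled standard error sqrt(s1^2/N1 + s2^2/N2)."""
    n1, n2 = len(sample1), len(sample2)
    mean1, mean2 = statistics.mean(sample1), statistics.mean(sample2)
    # statistics.variance uses the (N - 1) denominator, i.e. s_i^2
    var1, var2 = statistics.variance(sample1), statistics.variance(sample2)
    std_err = math.sqrt(var1 / n1 + var2 / n2)
    return (mean1 - mean2) / std_err

t = welch_t([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

Note that each sample contributes its own variance term to the denominator, rather than a single pooled estimate as in Student's t-test.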

The degrees of freedom $\nu$ associated with this variance estimate is approximated using the Welch–Satterthwaite equation:

$\nu \approx {\frac {\left({\frac {s_{1}^{2}}{N_{1}}}+{\frac {s_{2}^{2}}{N_{2}}}\right)^{2}}{{\frac {s_{1}^{4}}{N_{1}^{2}\nu _{1}}}+{\frac {s_{2}^{4}}{N_{2}^{2}\nu _{2}}}}}.$

This expression can be simplified when $N_{1}=N_{2}$:

$\nu \approx {\frac {s_{\Delta {\bar {X}}}^{4}}{\nu _{1}^{-1}s_{{\bar {X}}_{1}}^{4}+\nu _{2}^{-1}s_{{\bar {X}}_{2}}^{4}}}.$

Here, $\nu _{i}=N_{i}-1$ is the degrees of freedom associated with the $i$-th variance estimate.

The statistic approximately follows a t-distribution, since the variance estimate in the denominator is approximately chi-square distributed. The approximation improves when both $N_{1}$ and $N_{2}$ are larger than 5.
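The Welch–Satterthwaite approximation above can be sketched in a few lines (Python, standard library only; the function name is illustrative):

```python
import statistics

def welch_satterthwaite_df(sample1, sample2):
    """Approximate degrees of freedom nu for Welch's t-test."""
    n1, n2 = len(sample1), len(sample2)
    # Squared standard errors of each sample mean: s_i^2 / N_i
    a = statistics.variance(sample1) / n1
    b = statistics.variance(sample2) / n2
    # Welch-Satterthwaite: (a + b)^2 / (a^2/nu_1 + b^2/nu_2), nu_i = N_i - 1
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))
```

When the two samples have equal sizes and equal variances, this reduces to $\nu = 2(N-1)$, the same degrees of freedom as Student's pooled test.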

## Statistical test

Once t and $\nu$ have been computed, these statistics can be used with the t-distribution to test one of two possible null hypotheses:

• that the two population means are equal, in which case a two-tailed test is applied; or
• that one of the population means is greater than or equal to the other, in which case a one-tailed test is applied.

The approximate degrees of freedom, being real-valued, are sometimes rounded down to the nearest integer.

Welch's t-test is more robust than Student's t-test and maintains type I error rates close to nominal for unequal variances and for unequal sample sizes under normality. Furthermore, the power of Welch's t-test comes close to that of Student's t-test, even when the population variances are equal and the sample sizes are balanced. Welch's t-test can be generalized to more than two samples, and this generalization is more robust than one-way analysis of variance (ANOVA).

It is not recommended to pre-test for equal variances and then choose between Student's t-test and Welch's t-test. Rather, Welch's t-test can be applied directly, without any substantial disadvantage relative to Student's t-test, as noted above. Welch's t-test remains robust for skewed distributions and large sample sizes. Reliability decreases for skewed distributions and smaller samples, where one could possibly perform Welch's t-test on ranked data.
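To make the whole procedure concrete, here is a self-contained sketch of the test (Python, standard library only; all names are illustrative). Since the standard library has no t-distribution CDF, the two-tailed p-value is obtained by numerically integrating the t density with Simpson's rule:

```python
import math
import statistics

def t_density(x, nu):
    """Density of Student's t-distribution with nu degrees of freedom."""
    # lgamma avoids overflow of gamma() for large nu
    log_c = math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2) - 0.5 * math.log(nu * math.pi)
    return math.exp(log_c) * (1 + x * x / nu) ** (-(nu + 1) / 2)

def two_tailed_p(t, nu, steps=2000):
    """P(|T| >= |t|), via Simpson's rule on [0, |t|] plus symmetry.

    steps must be even for Simpson's rule.
    """
    b = abs(t)
    h = b / steps
    s = t_density(0.0, nu) + t_density(b, nu)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_density(i * h, nu)
    central = s * h / 3  # P(0 <= T <= |t|)
    return 2 * (0.5 - central)

def welch_ttest(sample1, sample2):
    """Return (t, nu, two-tailed p) for Welch's unequal variances t-test."""
    n1, n2 = len(sample1), len(sample2)
    a = statistics.variance(sample1) / n1  # s1^2 / N1
    b = statistics.variance(sample2) / n2  # s2^2 / N2
    t = (statistics.mean(sample1) - statistics.mean(sample2)) / math.sqrt(a + b)
    nu = (a + b) ** 2 / (a * a / (n1 - 1) + b * b / (n2 - 1))
    return t, nu, two_tailed_p(t, nu)
```

In practice one would use a library routine (such as those listed in the software table below) rather than hand-rolled numerical integration; the sketch is only meant to show how t, $\nu$, and the p-value fit together.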

## Examples

The following three examples compare Welch's t-test and Student's t-test. Samples are from random normal distributions using the R programming language.

For all three examples, the population means were $\mu _{1}=20$ and $\mu _{2}=22$ .

The first example is for moderately unequal variances ($\sigma _{1}^{2}=7.9$, $\sigma _{2}^{2}=3.8$) and equal sample sizes ($N_{1}=N_{2}=15$). Let A1 and A2 denote two random samples:

$A_{1}=\{27.5,21.0,19.0,23.6,17.0,17.9,16.9,20.1,21.9,22.6,23.1,19.6,19.0,21.7,21.4\}$

$A_{2}=\{27.1,22.0,20.8,23.4,23.4,23.5,25.8,22.0,24.8,20.2,21.9,22.1,22.9,20.5,24.4\}$

The second example is for unequal variances ($\sigma _{1}^{2}=9.0$, $\sigma _{2}^{2}=0.9$) and unequal sample sizes ($N_{1}=10$, $N_{2}=20$). The smaller sample has the larger variance:

$A_{1}=\{17.2,20.9,22.6,18.1,21.7,21.4,23.5,24.2,14.7,21.8\}$

$A_{2}=\{21.5,22.8,21.0,23.0,21.6,23.6,22.5,20.7,23.4,21.8,20.7,21.7,21.5,22.5,23.6,21.5,22.5,23.5,21.5,21.8\}$

The third example is for unequal variances ($\sigma _{1}^{2}=1.4$, $\sigma _{2}^{2}=17.1$) and unequal sample sizes ($N_{1}=10$, $N_{2}=20$). The larger sample has the larger variance:

$A_{1}=\{19.8,20.4,19.6,17.8,18.5,18.9,18.3,18.9,19.5,22.0\}$

$A_{2}=\{28.2,26.6,20.1,23.3,25.2,22.1,17.7,27.6,20.6,13.7,23.2,17.5,20.6,18.0,23.9,21.6,24.3,20.4,24.0,13.2\}$

Reference p-values were obtained by simulating the distributions of the t statistics under the null hypothesis of equal population means ($\mu _{1}-\mu _{2}=0$). Results are summarised in the table below, with two-tailed p-values:

| Example | $N_{1}$ | ${\overline {X}}_{1}$ | $s_{1}^{2}$ | $N_{2}$ | ${\overline {X}}_{2}$ | $s_{2}^{2}$ | Student's $t$ | $\nu$ | $P$ | $P_{\mathrm {sim}}$ | Welch's $t$ | $\nu$ | $P$ | $P_{\mathrm {sim}}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 15 | 20.8 | 7.9 | 15 | 23.0 | 3.8 | −2.46 | 28 | 0.021 | 0.021 | −2.46 | 24.9 | 0.021 | 0.017 |
| 2 | 10 | 20.6 | 9.0 | 20 | 22.1 | 0.9 | −2.10 | 28 | 0.045 | 0.150 | −1.57 | 9.9 | 0.149 | 0.144 |
| 3 | 10 | 19.4 | 1.4 | 20 | 21.6 | 17.1 | −1.64 | 28 | 0.110 | 0.036 | −2.22 | 24.5 | 0.036 | 0.042 |

Welch's t-test and Student's t-test gave practically identical results in Example 1: with equal sample sizes the two t statistics coincide, and here the p-values also agreed. But note that if data are sampled from populations with identical variances, the sample variances will still differ, as will the results of the two t-tests. So with actual data, the two tests will almost always give somewhat different results.

For unequal variances, Student's t-test gave a low p-value when the smaller sample had a larger variance (Example 2) and a high p-value when the larger sample had a larger variance (Example 3). For unequal variances, Welch's t-test gave p-values close to simulated p-values.
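As a check on the table above, the Welch statistics for Example 2 can be recomputed from the listed samples (a Python sketch using only the standard library):

```python
import math
import statistics

# Samples from Example 2, copied from the text above
A1 = [17.2, 20.9, 22.6, 18.1, 21.7, 21.4, 23.5, 24.2, 14.7, 21.8]
A2 = [21.5, 22.8, 21.0, 23.0, 21.6, 23.6, 22.5, 20.7, 23.4, 21.8,
      20.7, 21.7, 21.5, 22.5, 23.6, 21.5, 22.5, 23.5, 21.5, 21.8]

a = statistics.variance(A1) / len(A1)  # s1^2 / N1
b = statistics.variance(A2) / len(A2)  # s2^2 / N2
t = (statistics.mean(A1) - statistics.mean(A2)) / math.sqrt(a + b)
nu = (a + b) ** 2 / (a * a / (len(A1) - 1) + b * b / (len(A2) - 1))
# t comes out near -1.57 and nu near 9.9, matching the Welch
# columns for Example 2 in the table
```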

## Software implementations

| Language/Program | Function |
|---|---|
| LibreOffice | `TTEST(Data1; Data2; Mode; Type)` |
| MATLAB | `ttest2(data1, data2, 'Vartype', 'unequal')` |
| Microsoft Excel pre 2010 (Student's T Test) | `TTEST(array1, array2, tails, type)` |
| Microsoft Excel 2010 and later (Student's T Test) | `T.TEST(array1, array2, tails, type)` |
| SAS (Software) | Default output from `proc ttest` (labeled "Satterthwaite") |
| Python (through 3rd-party library SciPy) | `scipy.stats.ttest_ind(a, b, equal_var=False)` |
| R | `t.test(data1, data2, alternative="two.sided", var.equal=FALSE)` |
| Haskell | `Statistics.Test.StudentT.welchTTest SamplesDiffer data1 data2` |
| JMP | `Oneway( Y( YColumn ), X( XColumn ), Unequal Variances( 1 ) );` |
| Julia | `UnequalVarianceTTest(data1, data2)` |
| Stata | `ttest varname1 == varname2, welch` |
| Google Sheets | `TTEST(range1, range2, tails, type)` |
| GNU Octave | `welch_test(x, y)` |