Truncated mean

From Wikipedia, the free encyclopedia

A truncated mean or trimmed mean is a statistical measure of central tendency, much like the mean and median. It involves calculating the mean after discarding given parts of a probability distribution or sample at the high and low end, typically discarding an equal amount at each end. The number of points to discard is usually given as a percentage of the total number of points, but may also be given as a fixed number of points.

For most statistical applications, 5 to 25 percent of the ends are discarded. For example, given a set of 8 points, trimming by 12.5% discards the smallest and largest values in the sample, and the mean of the remaining 6 points is computed. The 25% trimmed mean (discarding the lowest 25% and the highest 25%) is known as the interquartile mean.
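As an illustrative sketch (in Python; the `trimmed_mean` helper and the sample data are hypothetical, not from any library), the 12.5% trim of an 8-point sample can be computed as:

```python
from statistics import mean

def trimmed_mean(data, proportion):
    """Mean after discarding int(proportion * n) points from each end.
    `proportion` is the fraction trimmed per tail (a sketch, not a
    reference implementation)."""
    x = sorted(data)
    g = int(proportion * len(x))   # points discarded at each end
    return mean(x[g:len(x) - g] if g else x)

# 8 hypothetical points; 12.5% trims one point from each end,
# discarding the minimum (1) and the maximum (100)
sample = [1, 3, 4, 5, 6, 7, 9, 100]
trimmed_mean(sample, 0.125)   # mean of [3, 4, 5, 6, 7, 9]
```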

The median can be regarded as a fully truncated mean and is the most robust. As with other trimmed estimators, the main advantage of the trimmed mean is robustness and higher efficiency for mixed and heavy-tailed distributions (such as the Cauchy distribution), at the cost of lower efficiency for some less heavily tailed distributions (such as the normal distribution). For intermediate distributions the differences between the efficiency of the mean and the median are not very large; for example, for the Student's t-distribution with 2 degrees of freedom the variances of the mean and the median are nearly equal.


Terminology

In some regions of Central Europe it is also known as a Windsor mean,[citation needed] but this name should not be confused with the Winsorized mean: in the latter, the observations that the trimmed mean would discard are instead replaced by the largest and smallest of the remaining values.

Discarding only the maximum and minimum is known as the modified mean, particularly in management statistics.[1] This is also known as the Olympic average (for example in US agriculture, like the Average Crop Revenue Election), due to its use in Olympic events, such as the ISU Judging System in figure skating, to make the score robust to a single outlier judge.[2]


Interpolation

When the percentage of points to discard does not yield a whole number, the trimmed mean may be defined by interpolation, generally linear interpolation, between the nearest whole numbers. For example, the 15% trimmed mean of a sample of 10 entries would strictly mean discarding 1 point from each end (equivalent to the 10% trimmed mean). With interpolation, one instead computes the 10% trimmed mean (discarding 1 point from each end) and the 20% trimmed mean (discarding 2 points from each end) and then interpolates, in this case averaging the two values. Similarly, the interpolated 12% trimmed mean is the weighted average of the 10% trimmed mean with weight 0.8 and the 20% trimmed mean with weight 0.2.
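The interpolation rule above can be sketched as follows (Python; the function names and the example data are illustrative assumptions):

```python
import math
from statistics import mean

def trimmed_mean_g(data, g):
    """Mean after discarding g points from each end."""
    x = sorted(data)
    return mean(x[g:len(x) - g] if g else x)

def interpolated_trimmed_mean(data, proportion):
    """Linear interpolation between the two nearest whole-number trims."""
    n = len(data)
    lo = math.floor(proportion * n)   # e.g. 12% of 10 points -> 1 point
    hi = math.ceil(proportion * n)    #                       -> 2 points
    if lo == hi:
        return trimmed_mean_g(data, lo)
    frac = proportion * n - lo        # 0.2 for the 12% example
    return (1 - frac) * trimmed_mean_g(data, lo) + frac * trimmed_mean_g(data, hi)

# 12% trimmed mean of 10 hypothetical points:
# weights the 10% trim by 0.8 and the 20% trim by 0.2
data = [0, 1, 2, 3, 4, 5, 6, 7, 50, 100]
interpolated_trimmed_mean(data, 0.12)
```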


Advantages

The truncated mean is a useful estimator because it is less sensitive to outliers than the mean but still gives a reasonable estimate of central tendency or mean for many statistical models. In this regard it is referred to as a robust estimator. For example, in its use in Olympic judging, truncating the maximum and minimum prevents a single judge from raising or lowering the overall score by giving an exceptionally high or low mark.

One situation in which it can be advantageous to use a truncated mean is when estimating the location parameter of a Cauchy distribution, a bell-shaped probability distribution with (much) fatter tails than a normal distribution. It can be shown that the truncated mean of the middle 24% of the sample order statistics (i.e., truncating the sample by 38% at each end) produces an estimate of the population location parameter that is more efficient than either the sample median or the full sample mean.[3][4] However, due to the fat tails of the Cauchy distribution, the efficiency of the estimator decreases as more of the sample is used in the estimate.[3][4] Note that for the Cauchy distribution, none of the truncated mean, the full sample mean, or the sample median is a maximum likelihood estimator, nor is any asymptotically as efficient as the maximum likelihood estimator; however, the maximum likelihood estimate is more difficult to compute, leaving the truncated mean as a useful alternative.[4][5]
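A quick Monte Carlo sketch of this estimator (Python; the sample size, replication count, and seed are arbitrary choices for illustration, not from the cited sources) draws standard Cauchy samples via the inverse CDF and takes the mean of the middle 24% order statistics:

```python
import math
import random
from statistics import mean, median

random.seed(0)

def cauchy_sample(n, loc=0.0):
    # Standard Cauchy draw via the inverse CDF: loc + tan(pi * (U - 1/2))
    return [loc + math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

def mid24_mean(x):
    """Truncated mean of the middle 24% order statistics
    (38% trimmed from each end, as described in the text)."""
    x = sorted(x)
    g = int(0.38 * len(x))
    return mean(x[g:len(x) - g])

# Repeated sampling: both estimators centre on the true location (here 0),
# while the full sample mean would not converge at all for a Cauchy sample
trunc_ests = [mid24_mean(cauchy_sample(100)) for _ in range(500)]
med_ests = [median(cauchy_sample(100)) for _ in range(500)]
```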

Statistical tests

It is possible to perform a Student's t-test based on the truncated mean, known as Yuen's t-test,[6][7] which has several implementations in R.[8][9]
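As a rough illustration of the statistic itself (following the standard formulation with Winsorized variances; a stdlib-only Python sketch, not a vetted implementation, and the function name is hypothetical):

```python
import math
from statistics import mean, variance

def yuen_t(a, b, trim=0.2):
    """Yuen's t statistic and Welch-style degrees of freedom for two
    independent samples (illustrative sketch)."""
    def summary(sample):
        x = sorted(sample)
        n = len(x)
        g = int(trim * n)            # points trimmed from each end
        h = n - 2 * g                # effective sample size
        tmean = mean(x[g:n - g])     # trimmed mean
        # Winsorized sample: trimmed points replaced by the nearest kept value
        w = [x[g]] * g + x[g:n - g] + [x[n - g - 1]] * g
        d = (n - 1) * variance(w) / (h * (h - 1))
        return tmean, d, h
    m1, d1, h1 = summary(a)
    m2, d2, h2 = summary(b)
    t = (m1 - m2) / math.sqrt(d1 + d2)
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    return t, df
```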


Examples

The scoring method used in many sports that are evaluated by a panel of judges is a truncated mean: discard the lowest and the highest scores, and calculate the mean of the remaining scores.[10]

The Libor benchmark interest rate is calculated as a trimmed mean: given 18 responses, the top 4 and bottom 4 are discarded, and the remaining 10 are averaged (a trim factor of 4/18 ≈ 22%).[11]
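The rule described for the 18-member panel can be sketched as follows (Python; the submissions below are made up for illustration):

```python
from statistics import mean

def libor_style_mean(quotes, k=4):
    """Average the submissions after discarding the k highest and k lowest
    (k = 4 for an 18-member panel, a trim factor of 4/18 ≈ 22%)."""
    q = sorted(quotes)
    return mean(q[k:len(q) - k])

# 18 hypothetical submissions (percent); the 4 lowest and 4 highest are dropped
quotes = [0.50, 0.52, 0.53, 0.54, 0.55, 0.55, 0.56, 0.56, 0.57,
          0.57, 0.58, 0.58, 0.59, 0.60, 0.61, 0.62, 0.65, 0.70]
libor_style_mean(quotes)
```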

Consider the data set consisting of:

{92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, −40, 101, 86, 85, 15, 89, 89, 28, −5, 41}       (N = 20, mean = 101.5)

The 5th percentile (−6.75) lies between −40 and −5, while the 95th percentile (148.6) lies between 101 and 1053. A 5% trimmed mean therefore discards the lowest value (−40) and the highest value (1053), resulting in the following:

{92, 19, 101, 58, 91, 26, 78, 10, 13, 101, 86, 85, 15, 89, 89, 28, −5, 41}       (N = 18, mean = 56.5)

This example can be compared with the one using the Winsorizing procedure.
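The computation above can be replicated with a short sketch (Python; `trimmed_mean` is an illustrative helper, not a library function):

```python
from statistics import mean

data = [92, 19, 101, 58, 1053, 91, 26, 78, 10, 13,
        -40, 101, 86, 85, 15, 89, 89, 28, -5, 41]

def trimmed_mean(x, proportion):
    """Mean after discarding int(proportion * n) points from each end."""
    x = sorted(x)
    g = int(proportion * len(x))   # 5% of 20 points -> 1 point per end
    return mean(x[g:len(x) - g] if g else x)

mean(data)                 # 101.5: dominated by the outlier 1053
trimmed_mean(data, 0.05)   # 56.5: −40 and 1053 discarded
```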

References


  1. ^ Arulmozhi, G. (2009). Statistics for Management (2nd ed.). Tata McGraw-Hill Education. p. 458.
  2. ^ Paul E. Peterson (August 3, 2012). "Lessons from LIBOR". Once the quotes are compiled, LIBOR uses a trimmed mean process, in which the highest and lowest values are thrown out and the remaining values are averaged. This is sometimes called an "Olympic average" from its use in the Olympics to eliminate the impact of a biased judge on an athlete's final score.
  3. ^ a b Rothenberg, Thomas J.; Fisher, Franklin M.; Tilanus, C. B. (1964). "A note on estimation from a Cauchy sample". Journal of the American Statistical Association. 59 (306): 460–463. doi:10.1080/01621459.1964.10482170.
  4. ^ a b c Bloch, Daniel (1966). "A note on the estimation of the location parameters of the Cauchy distribution". Journal of the American Statistical Association. 61 (316): 852–855. doi:10.1080/01621459.1966.10480912. JSTOR 2282794.
  5. ^ Ferguson, Thomas S. (1978). "Maximum Likelihood Estimates of the Parameters of the Cauchy Distribution for Samples of Size 3 and 4". Journal of the American Statistical Association. 73 (361): 211–213. doi:10.1080/01621459.1978.10480031. JSTOR 2286549.
  6. ^ Yuen, K. K. (1974). "The two-sample trimmed t for unequal population variances". Biometrika. 61: 165–170.
  7. ^ Wilcox, R.R. (2005). Introduction to robust estimation and hypothesis testing. Academic Press.
  8. ^ "WRS2: A Collection of Robust Statistical Methods". 20 July 2021.
  9. ^ "DescTools: Tools for Descriptive Statistics". 9 September 2021.
  10. ^ Bialik, Carl (27 July 2012). "Removing Judges' Bias Is Olympic-Size Challenge". The Wall Street Journal. Retrieved 7 September 2014.
  11. ^ "bbalibor: The Basics". The British Bankers' Association.