# Harmonic mean

*A three-dimensional plot showing the values of the harmonic mean between two numbers.*

In mathematics, the harmonic mean (sometimes called the subcontrary mean) is one of several kinds of mean and hence one of several kinds of average. Typically, it is appropriate for situations when the average of rates is desired.

It is the special case ( $M^{-1}$ ) of the power mean. As it tends strongly toward the least elements of the list, it may (compared to the arithmetic mean) mitigate the influence of large outliers and increase the influence of small values.

The harmonic mean is one of the Pythagorean means, along with the arithmetic mean and the geometric mean, and is no greater than either of them.

## Definition

### Discrete distribution

The harmonic mean H of the positive real numbers x1, x2, ..., xn is defined to be the reciprocal of the arithmetic mean of the reciprocals of x1, x2, ..., xn:

$H = \left(\frac{1}{n} \cdot \sum_{ i = 1 }^n x_i^{-1} \right)^{-1} = \frac{1}{\frac{1}{n} \cdot \left(\frac{ 1 }{ x_1 } + \frac{ 1 }{ x_2 } + \cdots + \frac{ 1 }{ x_n }\right)} = \frac{ n }{ \frac{ 1 }{ x_1 } + \frac{ 1 }{ x_2 } + \cdots + \frac{ 1 }{ x_n } }.$
#### Example

The harmonic mean of 1, 2, and 4 is

$\frac{ 3 }{ \frac{ 1 }{ 1 } + \frac{ 1 }{ 2 } + \frac{ 1 }{ 4 } } = \frac{ 1 } { \frac{ 1 }{ 3 }( \frac{ 1 }{ 1 } + \frac{ 1 }{ 2 } + \frac{ 1 }{ 4 } ) } = \frac{ 12 }{ 7 } = 1.\overline{714285}.$
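This computation is easy to check in code. A minimal Python sketch (the helper name `harmonic_mean` is ours, though the standard `statistics` module provides an equivalent function of the same name):

```python
def harmonic_mean(values):
    # reciprocal of the arithmetic mean of the reciprocals
    n = len(values)
    return n / sum(1 / v for v in values)

print(harmonic_mean([1, 2, 4]))  # 1.714285... = 12/7
```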

### Continuous distribution

For a continuous distribution the harmonic mean is

$H = \frac{ 1 }{ \int \frac{ 1 }{ x } f( x ) \,dx }.$

### Weighted harmonic mean

If a set of weights $w_1, \dotsc, w_n$ is associated with the dataset $x_1, \dotsc, x_n$, the weighted harmonic mean is defined by

$\frac{ \sum_{ i = 1 }^n w_i }{ \sum_{ i = 1 }^n \frac{ w_i }{ x_i} }.$

The unweighted harmonic mean is the special case in which all of the weights are equal to 1; equivalently, any weighted harmonic mean in which all the weights are equal reduces to the unweighted harmonic mean.
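A sketch of the weighted form (the helper name is ours); with equal weights it reduces to the plain harmonic mean:

```python
def weighted_harmonic_mean(values, weights):
    # sum of weights divided by the weighted sum of reciprocals
    return sum(weights) / sum(w / x for x, w in zip(values, weights))

# equal weights recover the plain harmonic mean of 1, 2, 4
print(weighted_harmonic_mean([1, 2, 4], [1, 1, 1]))  # 1.714285... = 12/7
```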

### Recursive calculation

It is possible to recursively calculate the harmonic mean (H) of n variates. This method may be of use in computations.

$H( x_1, x_2, x_3, \dotsc ) = \frac{ n }{ \sum \frac{ 1 } { x_i} } = \left( \frac{ 1 }{ n }x_1^{ -1 } + \frac{ n - 1 }{ n } H( x_2, x_3, \dotsc)^{ -1 } \right)^{ -1 }$
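The recursion transcribes directly into code; this sketch (function name ours) peels off one variate per call:

```python
def harmonic_mean_recursive(xs):
    # base case: the harmonic mean of a single value is the value itself
    n = len(xs)
    if n == 1:
        return xs[0]
    # 1/H_n = (1/n)(1/x_1) + ((n-1)/n)(1/H_{n-1})
    tail = harmonic_mean_recursive(xs[1:])
    return 1 / ((1 / n) / xs[0] + ((n - 1) / n) / tail)

print(harmonic_mean_recursive([1, 2, 4]))  # 1.714285... = 12/7
```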

## Harmonic mean of two numbers

For the special case of just two numbers $x_1$ and $x_2$, the harmonic mean can be written

$H = \frac{ 2 x_1 x_2 }{ x_1 + x_2 }.$

In this special case, the harmonic mean is related to the arithmetic mean $A = \frac{x_1 + x_2}{2}$ and the geometric mean $G = \sqrt{x_1 x_2}$ by

$H = \frac{ G^2 } { A }.$

So

$G = \sqrt{ A H }$

meaning the two numbers' geometric mean equals the geometric mean of their arithmetic and harmonic means.
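These identities can be verified numerically for an arbitrary pair, say 4 and 9 (values chosen only for illustration):

```python
import math

x1, x2 = 4.0, 9.0
A = (x1 + x2) / 2            # arithmetic mean: 6.5
G = math.sqrt(x1 * x2)       # geometric mean: 6.0
H = 2 * x1 * x2 / (x1 + x2)  # harmonic mean: 72/13

print(abs(H - G ** 2 / A))        # ~0, confirming H = G^2 / A
print(abs(G - math.sqrt(A * H)))  # ~0, confirming G = sqrt(A H)
```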

This relationship between the three Pythagorean means can be generalized beyond n = 2. For n = 1 all of the means coincide trivially, and for n = 2 we have the identity $H = G^2 / A$ above. For arbitrary n ≥ 2 the identity generalizes by reinterpreting the third expression for the harmonic mean: the arithmetic mean in the denominator is applied not to the numbers themselves but to their products taken n − 1 at a time.

The general formula, which can be derived from the third formula for the harmonic mean by this reinterpretation, is

$H( x_1, \ldots , x_n ) = \frac{ ( G( x_1, \ldots , x_n ) )^n }{ A( x_2x_3 \cdots x_n, x_1x_3 \cdots x_n, \ldots , x_1x_2 \cdots x_{ n - 1 })} = \frac{ ( G( x_1, \ldots , x_n ))^n }{ A \left( \frac{ \prod_{ i = 1 }^n x_i }{ x_1 }, \frac{ \prod_{ i = 1 }^n x_i }{ x_2 }, \ldots , \frac{ \prod_{ i = 1 }^n x_i }{ x_n } \right ) }.$

For n = 2,

$H( x_1, x_2 )= \frac{ ( G( x_1, x_2 ) )^2 }{ A( x_2, x_1 ) } = \frac{ ( G( x_2, x_1 ) )^2}{ A( x_2, x_1 ) }$

where we use the fact that the arithmetic mean evaluates to the same number regardless of the order of its arguments. Reinterpreting this result in terms of the mean operators themselves yields the symbolic equation

$H = \frac { G^2 } { A }$

because each function was evaluated at $( x_1, x_2 )$.

## Relationship with other means

*A geometric construction of the three Pythagorean means of two numbers, a and b. The harmonic mean is denoted by H in purple; Q denotes a fourth mean, the quadratic mean.*

If a set of non-identical numbers is subjected to a mean-preserving spread — that is, two or more elements of the set are "spread apart" from each other while leaving the arithmetic mean unchanged — then the harmonic mean always decreases.[1]

Let r be a nonzero real number and let the rth power mean ( $M^r$ ) of a series of real variables ( a1, a2, a3, ... ) be defined as

$M^r( a_1, a_2, a_3, \dotsc ) = \left( \frac{ 1 }{ n } \sum ( a_i )^r \right)^{ \frac{ 1 } { r } }.$

For r = −1, 1 and 2 we have the harmonic, the arithmetic and the quadratic means respectively. The cases r = 0, −∞ and +∞ are defined, as limits, to be the geometric mean, the minimum of the variates and the maximum of the variates respectively. Then for any two real numbers s and t such that s < t we have

$M^s( a_1, a_2, a_3, \dotsc ) \le M^t( a_1, a_2, a_3, \dotsc ),$

with equality only if all the ai are equal.
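The ordering of the power means can be spot-checked numerically; in this sketch a `power_mean` helper (name ours) treats r = 0 as the geometric-mean limit:

```python
import math

def power_mean(xs, r):
    n = len(xs)
    if r == 0:  # limiting case r -> 0 is the geometric mean
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** r for x in xs) / n) ** (1 / r)

data = [1.0, 2.0, 4.0]
# r = -1, 0, 1, 2: harmonic, geometric, arithmetic, quadratic means
means = [power_mean(data, r) for r in (-1, 0, 1, 2)]
print(means)  # a nondecreasing sequence
```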

Let R be the quadratic mean (or root mean square). Then[2]

$\frac{ 2 R + H }{ 3 } \le A .$

## Inequalities

For a set of positive real numbers lying within the interval [ m, M ], it has been shown that

$A - H \ge \frac{ s^2 } { 2M }$

where A is the arithmetic mean, H is the harmonic mean, M is the maximum of the interval and $s^2$ is the variance of the set.[3]

Several other inequalities are also known:[4]

$\frac{ m ( A - m ) ( A - H ) }{ H - m } \le s^2 \le \frac{ M ( A - H ) ( M - A ) }{ M - H }$
$\frac{ ( M - s )^2 } { M ( M - 2s ) } \le \frac{ A } { H } \le \frac{ ( m + s )^2 }{ m ( m + 2s ) }$
$\frac{ ( M - m ) s^2 } { M ( M - m ) - s^2 } \le A - H \le \frac{ ( M - m ) s^2 } { m ( M - m ) + s^2 }$

## Examples

### Geometry

In any triangle, the radius of the incircle is one-third the harmonic mean of the altitudes.

For any point P on the minor arc BC of the circumcircle of an equilateral triangle ABC, with distances q and t from B and C respectively, and with the intersection of PA and BC being at a distance y from point P, we have that y is half the harmonic mean of q and t.[5]

In a right triangle with legs a and b and altitude h from the hypotenuse to the right angle, $h^2$ is half the harmonic mean of $a^2$ and $b^2$.[6][7]

Let t and s (t > s) be the sides of the two inscribed squares in a right triangle with hypotenuse c. Then $s^2$ equals half the harmonic mean of $c^2$ and $t^2$.

Let a trapezoid have vertices A, B, C, and D in sequence and have parallel sides AB and CD. Let E be the intersection of the diagonals, and let F be on side DA and G be on side BC such that FEG is parallel to AB and CD. Then FG is the harmonic mean of AB and DC. (This is provable using similar triangles.)

*Crossed ladders: h is half the harmonic mean of A and B.*

In the crossed ladders problem, two ladders lie oppositely across an alley, each with feet at the base of one sidewall, with one leaning against a wall at height A and the other leaning against the opposite wall at height B, as shown. The ladders cross at a height of h above the alley floor. Then h is half the harmonic mean of A and B. This result still holds if the walls are slanted but still parallel and the "heights" A, B, and h are measured as distances from the floor along lines parallel to the walls.

In an ellipse, the semi-latus rectum (the distance from a focus to the ellipse along a line parallel to the minor axis) is the harmonic mean of the maximum and minimum distances of the ellipse from a focus.

### Trigonometry

In the case of the double-angle tangent identity, if the tangent of an angle A is given as a / b then the tangent of 2A is the product of

• (1) the harmonic mean of the numerator and denominator of tan A; and
• (2) the reciprocal of the denominator less the numerator of tan A.

In symbols if a and b are real numbers and

$\tan A = \frac{ a }{ b }$

the double angle formula for the tangent can be written as

$\tan 2A = H( a, b ) \cdot \frac{ 1 }{ b - a } = \frac{ 2 a b }{ a + b } \cdot \frac{ 1 }{ b - a }$

where H( a, b ) is the harmonic mean of a and b.

#### Example

Let

$\tan A = \frac{ 3 }{ 7 }$

The harmonic mean of 3 and 7 is

$H( 3,7 ) = \frac{ 42 }{ 10 } = 4.2$

The most familiar form of the double angle formula is

$\tan 2A = \frac{ 2 \cdot \frac{ 3 }{ 7 }}{ 1 - ( \frac{ 3 }{ 7 } )^2 }= \frac{ 21 }{ 20 } = 1.05.$

The double angle formula can also be written as

$\frac{ 2 \cdot 3 \cdot 7 }{ 3 + 7 } \cdot \frac{ 1 }{ 7 - 3 } = \frac{ 42 }{ 10 } \cdot \frac{ 1 }{ 4 } = \frac{ 21 }{ 20 } = 1.05$
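Both forms of the identity agree numerically; a quick check for a = 3, b = 7:

```python
import math

a, b = 3, 7
A = math.atan(a / b)     # the angle whose tangent is 3/7
H = 2 * a * b / (a + b)  # harmonic mean of 3 and 7: 4.2

standard = math.tan(2 * A)  # 2 tan A / (1 - tan^2 A)
via_mean = H / (b - a)      # 4.2 / 4 = 1.05

print(standard, via_mean)  # both 1.05
```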

### Algebra

The harmonic mean also features in elementary algebra when considering problems of working in parallel.

For example, if a gas powered pump can drain a pool in 4 hours and a battery powered pump can drain the same pool in 6 hours, then it will take both pumps

$( 6 \cdot 4 ) / ( 6 + 4 ) = \frac {1}{2} H( 4, 6 ) = 2.4$

hours to drain the pool working together.

Another example involves calculating the average speed for a number of fixed-distance trips. For example, if the speed for going from point A to B was 60 km/h, and the speed for returning from B to A was 40 km/h, then the average speed is given by

$\frac{2}{1/60+1/40}=48.$
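Both examples in this section reduce to the same computation; a minimal Python sketch (helper name ours):

```python
def harmonic_mean(values):
    return len(values) / sum(1 / v for v in values)

# two pumps draining one pool: rates add, so the joint time is
# half the harmonic mean of the solo times
print(harmonic_mean([4, 6]) / 2)  # 2.4 hours

# fixed-distance round trip: the average speed is the harmonic mean
print(harmonic_mean([60, 40]))    # 48.0 km/h
```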

### Physics

In certain situations, especially many situations involving rates and ratios, the harmonic mean provides the truest average. For instance, if a vehicle travels a certain distance at a speed x (e.g. 60 kilometres per hour) and then the same distance again at a speed y (e.g. 40 kilometres per hour), then its average speed is the harmonic mean of x and y (48 kilometres per hour), and its total travel time is the same as if it had traveled the whole distance at that average speed. However, if the vehicle travels for a certain amount of time at a speed x and then the same amount of time at a speed y, then its average speed is the arithmetic mean of x and y, which in the above example is 50 kilometres per hour. The same principle applies to more than two segments: given a series of sub-trips at different speeds, if each sub-trip covers the same distance, then the average speed is the harmonic mean of all the sub-trip speeds, and if each sub-trip takes the same amount of time, then the average speed is the arithmetic mean of all the sub-trip speeds. (If neither is the case, then a weighted harmonic mean or weighted arithmetic mean is needed.)

Similarly, if one connects two electrical resistors in parallel, one having resistance x (e.g. 60 Ω) and the other having resistance y (e.g. 40 Ω), then the effect is the same as if one had used two resistors with the same resistance, both equal to the harmonic mean of x and y (48 Ω): the equivalent resistance in either case is 24 Ω (one half of the harmonic mean). However, if one connects the resistors in series, then the average resistance is the arithmetic mean of x and y (with total resistance equal to the sum of x and y). And, as in the previous example, the same principle applies when more than two resistors are connected, provided that all are in parallel or all are in series.
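The parallel-resistor claim can be confirmed directly (values and helper names are illustrative):

```python
def parallel_resistance(resistances):
    # conductances (reciprocals) add in parallel
    return 1 / sum(1 / r for r in resistances)

def harmonic_mean(values):
    return len(values) / sum(1 / v for v in values)

r_parallel = parallel_resistance([60, 40])  # 24 ohms
half_h = harmonic_mean([60, 40]) / 2        # half of 48 = 24 ohms
print(r_parallel, half_h)
```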

The weighted harmonic mean is the correct approach to determine the density of a mixture when the composition by weight is known. Note however that this is only correct for ideal solutions.

### Other sciences

In computer science, specifically information retrieval and machine learning, the harmonic mean of the precision and the recall is often used as an aggregated performance score for the evaluation of algorithms and systems: the F-score (or F-measure).
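A minimal sketch of the F-score as a harmonic mean (hypothetical helper; libraries such as scikit-learn provide their own implementations):

```python
def f1_score(precision, recall):
    # harmonic mean of two numbers: 2xy / (x + y)
    return 2 * precision * recall / (precision + recall)

# perfect recall with 50% precision is pulled toward the smaller value
print(f1_score(0.5, 1.0))  # 0.666...
```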

In hydrology the harmonic mean is used to average hydraulic conductivity values for flow that is perpendicular to layers (e.g. geologic or soil) while flow parallel to layers uses the arithmetic mean. This apparent difference in averaging is explained by the fact that hydrology uses conductivity, which is the inverse of resistivity.

In population genetics the harmonic mean is used when calculating the effects of fluctuations in generation size on the effective breeding population. This is to take into account the fact that a very small generation is effectively like a bottleneck and means that a very small number of individuals are contributing disproportionately to the gene pool which can result in higher levels of inbreeding.

When considering fuel economy in automobiles, two measures are commonly used: miles per gallon (mpg) and litres per 100 km. Since the dimensions of these quantities are inverses of each other (one is distance per volume, the other volume per distance), taking the mean fuel economy of a range of cars in one measure produces the harmonic mean of the other; that is, converting the mean fuel economy expressed in litres per 100 km to miles per gallon yields the harmonic mean of the fuel economies expressed in miles per gallon.

### Finance

The harmonic mean is the preferable method for averaging multiples, such as the price/earning ratio, in which price is in the numerator. If these ratios are averaged using an arithmetic mean (a common error), high data points are given greater weights than low data points. The harmonic mean, on the other hand, gives equal weight to each data point.[8]

## Statistics

For a random sample, the harmonic mean is calculated as above. Both the mean and the variance may be infinite (if the sample includes at least one term of the form 1/0).

### Theoretical value

The variance of the harmonic mean is[9]

$\operatorname{Var}\left( \frac { 1 } { x } \right) = \frac { m \left[ \operatorname{E}( 1 / x - 1 ) \right] } { n m^2 }$

where m is the arithmetic mean of the reciprocals, x are the variates, n is the population size and E is the expectation operator. Asymptotically, the sample mean of the reciprocals is normally distributed.

The sample mean m of the reciprocals is also distributed normally, with variance $s^2$ given by

$s^2 = \frac { m [ \operatorname{E}( 1 / x - 1 ) ] }{ m^2 n }$

### Delta method

Assuming that the variance is not infinite and that the central limit theorem applies to the sample then using the delta method, the variance is

$\operatorname{Var}( H ) = \frac { 1 }{ n }\frac{ s^2 } { m^4 }$

where H is the harmonic mean, m is the arithmetic mean of the reciprocals

$m = \frac{ 1 } { n } \sum{ \frac{ 1 } { x } } .$

s2 is the variance of the reciprocals of the data

$s^2 = \operatorname{Var}\left( \frac { 1 } { x } \right)$

and n is the number of data points in the sample.
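The delta-method formula translates directly into code; a sketch (function name ours) using the population (divide-by-n) variance of the reciprocals, matching the definitions above:

```python
def delta_var_harmonic_mean(xs):
    n = len(xs)
    recip = [1 / x for x in xs]
    m = sum(recip) / n                         # mean of the reciprocals
    s2 = sum((r - m) ** 2 for r in recip) / n  # variance of the reciprocals
    return s2 / (n * m ** 4)                   # Var(H) = s^2 / (n m^4)

print(delta_var_harmonic_mean([1, 2, 4]))
```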

### Jackknife method

A jackknife method of estimating the variance is possible if the mean is known.[10] This method is the usual 'delete 1' rather than the 'delete m' version.

This method first requires the computation of the mean of the sample (m)

$m = \frac{ n }{ \sum { \frac{ 1 }{ x } } }$

where x are the sample values.

A series of values $w_i$ is then computed, where

$w_i = \frac{ n - 1 }{ \sum_{j \neq i} \frac{ 1 }{ x_j } }.$

The mean (h) of the wi is then taken:

$h = \frac{ 1 }{ n } \sum{ w_i }$

The variance of the mean is

$\frac{ n - 1 }{ n } \sum_i ( w_i - h )^2 .$

Significance testing and confidence intervals for the mean can then be estimated with the t test.
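A sketch of the delete-1 procedure just described (function name ours):

```python
def jackknife_harmonic(xs):
    n = len(xs)
    recip_sum = sum(1 / x for x in xs)
    m = n / recip_sum  # harmonic mean of the full sample
    # delete-1 harmonic means w_i: drop one reciprocal at a time
    w = [(n - 1) / (recip_sum - 1 / x) for x in xs]
    h = sum(w) / n
    var = (n - 1) / n * sum((wi - h) ** 2 for wi in w)
    return m, h, var

m, h, var = jackknife_harmonic([1, 2, 4])
print(m, h, var)
```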

### Size biased sampling

Assume a random variate has a distribution f( x ). Assume also that the likelihood of a variate being chosen is proportional to its value. This is known as length-based or size-biased sampling.

Let μ be the mean of the population. Then the probability density function f*( x ) of the size biased population is

$f^*(x) = \frac{ x f( x ) }{ \mu }$

The expectation of this length biased distribution E*( x ) is[9]

$\operatorname{E}^*( x ) = \mu \left[ 1 + \frac{ \sigma^2 }{ \mu^2 } \right]$

where σ2 is the variance.

The expectation of the harmonic mean is the same as the non length biased version E( x )

$\operatorname{E}^*\left( \frac{ 1 }{ x } \right) = \operatorname{E}\left( \frac{ 1 }{ x } \right)$

The problem of length-biased sampling arises in a number of areas, including textile manufacture,[11] pedigree analysis[12] and survival analysis.[13]

Akman et al. have developed a test for the detection of length-biased sampling.[14]

### Shifted variables

If X is a positive random variable and q > 0 then for all ε > 0[15]

$\operatorname{Var}\left[ \frac{ 1 }{( X + \epsilon )^q } \right] < \operatorname{Var}\left( \frac{ 1 }{ X^q } \right) .$

### Moments

Assuming that X and E(X) are > 0 then[15]

$\operatorname{E}\left[ \frac{ 1 }{ X } \right] \ge \frac{ 1 }{ \operatorname{E}( X ) }$

This follows from Jensen's inequality.

Gurland has shown that[16] for a distribution that takes only positive values, for any n > 0

$\operatorname{E}( X^{ -1 } ) \ge \frac{ \operatorname{E}( X^{ n - 1 } ) }{ \operatorname{E}( X^n ) } .$

Under some conditions[17]

$\operatorname{E}\left[ ( a + X )^{ -n } \right] \sim \left( a + \operatorname{E}( X ) \right)^{ -n }$

where ~ means approximately.

## Lognormal distribution

The harmonic mean ( H ) of a lognormal distribution is[18]

$H = \exp \left( \mu -\frac{ 1 }{ 2 } \sigma^2 \right)$

where μ and σ² are the mean and variance of the natural logarithm of the variable (i.e. the parameters of the lognormal distribution).

The harmonic mean and the arithmetic mean $\operatorname{E}( X )$ of the distribution are related by

$\frac{ \operatorname{E}( X ) }{ H } = 1 + C_v^2$

where Cv is the coefficient of variation.

The geometric ( G ), arithmetic and harmonic means of the distribution are related by[19]

$H \cdot \operatorname{E}( X ) = G^2$

### Sampling properties

Assuming that the variates (x) are drawn from a lognormal distribution there are several possible estimators for H:

$H_1 = \frac{ n }{ \sum( \frac{ 1 }{ x } ) }$

$H_2 = \frac{ [ \exp( \frac{ 1 }{ n } \sum \log_e( x ) ) ]^2 }{ \frac{ 1 }{ n } \sum( x ) }$

$H_3 = \exp \left( m - \frac{ 1 }{ 2 } s^2 \right)$

where

$m = \frac{ 1 }{ n } \sum \log_e( x )$
$s^2 = \frac{ 1 }{ n } \sum ( \log_e( x ) - m )^2$

Of these H3 is probably the best estimator for samples of 25 or more.[20]
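The three estimators can be compared on simulated lognormal data; a sketch (the seed, sample size and parameters are arbitrary choices):

```python
import math
import random

random.seed(0)
mu, sigma = 0.0, 0.5
xs = [random.lognormvariate(mu, sigma) for _ in range(2000)]
n = len(xs)

H_true = math.exp(mu - sigma ** 2 / 2)  # population harmonic mean

H1 = n / sum(1 / x for x in xs)         # reciprocal of mean reciprocal
logs = [math.log(x) for x in xs]
m = sum(logs) / n
s2 = sum((v - m) ** 2 for v in logs) / n
H2 = math.exp(m) ** 2 / (sum(xs) / n)   # G^2 / A
H3 = math.exp(m - s2 / 2)               # plug-in lognormal estimator

print(H_true, H1, H2, H3)  # all close to exp(-0.125)
```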

### Bias and variance estimators

A first order approximation to the bias and variance of H1 are[21]

$\operatorname{bias}[ H_1 ] = \frac{ H C_v^2 }{ n }$
$\operatorname{Var}[ H_1 ] = \frac{ H^2 C_v^2 }{ n }$

where Cv is the coefficient of variation.

Similarly, a first order approximation to the bias and variance of H3 are[21]

$\operatorname{bias}[ H_3 ] = \frac{ H \log_e( 1 + C_v^2 ) }{ 2n } \left[ 1 + \frac{ 1 + C_v^2 }{ 2 } \right]$
$\operatorname{Var}[ H_3 ] = \frac{ H^2 \log_e( 1 + C_v^2 ) }{ n } \left[ 1 + \frac{ 1 + C_v^2 }{ 4 } \right]$

Numerical experiments have found that H3 is generally a superior estimator of the harmonic mean to H1.[21] H2 produces estimates that are largely similar to H1.

## Pareto distribution

The harmonic mean of a type 1 Pareto distribution is[22]

$H = k \left( 1 + \frac{ 1 }{ \alpha } \right)$

where k is the scale parameter and α is the shape parameter.

## Beta distribution

*Plots of the harmonic mean of the beta distribution: H for 0 < α < 5 and 0 < β < 5; the difference between the arithmetic and harmonic means versus α and β from 0 to 2; and H(X) (purple) versus H(1 − X) (yellow) for smaller and larger values of α and β.*

The harmonic mean of a beta distribution with shape parameters α and β is:

$H = \frac{ \alpha - 1 }{ \alpha + \beta - 1 } \text{ conditional on } \alpha > 1 \, \, \& \, \, \beta > 0$

The harmonic mean with α < 1 is undefined because its defining expression is not bounded in [ 0, 1 ].

Letting α = β

$H = \frac{ \alpha - 1 }{ 2 \alpha -1 }$

showing that for α = β the harmonic mean ranges from 0 for α = β = 1, to 1/2 for α = β → ∞.

The following are the limits with one parameter finite (non zero) and the other parameter approaching these limits:

$\lim_{ \alpha \to 0 } H = \text{ undefined }$
$\lim_{ \alpha \to 1 } H = \lim_{ \beta \to \infty } H = 0$
$\lim_{ \beta \to 0 } H = \lim_{ \alpha \to \infty } H = 1$
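The closed form can be checked against a direct numerical evaluation of E( 1/X ) under the beta density; a sketch for α = 3, β = 2 (the Riemann-sum step count is an arbitrary choice):

```python
import math

a, b = 3.0, 2.0  # alpha and beta shape parameters
H_formula = (a - 1) / (a + b - 1)  # (3 - 1) / (3 + 2 - 1) = 0.5

# normalizing constant B(a, b) via the gamma function
B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# Riemann sum for E[1/X] = integral over (0, 1) of (1/x) * beta pdf
N = 100000
e_recip = sum(
    (N / i) * (i / N) ** (a - 1) * (1 - i / N) ** (b - 1) / B / N
    for i in range(1, N)
)
H_numeric = 1 / e_recip
print(H_formula, H_numeric)  # both about 0.5
```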

Together with the geometric mean, the harmonic mean may be useful in maximum likelihood estimation in the four parameter case.

A second harmonic mean, $H_{1-X}$, also exists for this distribution:

$H_{ 1 - X } = \frac{ \beta - 1 }{ \alpha + \beta - 1 } \text{ conditional on } \beta > 1 \, \, \& \, \, \alpha > 0$

This harmonic mean with β < 1 is undefined because its defining expression is not bounded in [ 0, 1 ].

Letting α = β in the above expression

$H_{ 1 - X } = \frac{ \beta - 1 }{ 2 \beta - 1 }$

showing that for α = β the harmonic mean ranges from 0, for α = β = 1, to 1/2, for α = β → ∞.

The following are the limits with one parameter finite (non zero) and the other approaching these limits:

$\lim_{ \beta \to 0 } H_{ 1 - X } = \text{ undefined }$
$\lim_{ \beta\to 1} H_{ 1 - X } = \lim_{ \alpha \to \infty } H_{ 1 - X } = 0$
$\lim_{ \alpha \to 0} H_{ 1 - X } = \lim_{ \beta \to \infty } H_{ 1 - X } = 1$

Although both harmonic means are asymmetric, when α = β the two means are equal.

## Notes

The Environmental Protection Agency recommends the use of the harmonic mean in setting maximum toxin levels in water.[23]

In geophysical reservoir engineering studies, the harmonic mean is widely used.[24]

The F1 score is the harmonic mean of precision and recall.

In sabermetrics, the power-speed number of a player is the harmonic mean of his home run and stolen base totals.

## References

1. ^ Mitchell DW (2004) More on spreads and non-arithmetic means. The Mathematical Gazette 88: 142-144
2. ^ Taneja IJ (2012) Inequalities having seven means and proportionality relations. arXiv:1203.2288v1 [math.HO] 8 Mar 2012
3. ^ Mercer A McD, (2000) Bounds for A-G, A-H, G-H, and a family of inequalities of Ky Fan's type, using a general method. J Math Anal Appl 243: 163–173
4. ^ Sharma R (2008) Some more inequalities for arithmetic mean, harmonic mean and variance. J Math Inequal 2 (1) 109–114
5. ^ Posamentier, Alfred S., and Salkind, Charles T., Challenging Problems in Geometry, second edition, Dover Publ. Co., 1996, p 172.
6. ^ Voles, Roger, "Integer solutions of $a^{-2}+b^{-2}=d^{-2}$," Mathematical Gazette 83, July 1999, 269–271.
7. ^ Richinick, Jennifer, "The upside-down Pythagorean Theorem," Mathematical Gazette 92, July 2008, 313–317.
8. ^ "Fairness Opinions: Common Errors and Omissions", The Handbook of Business Valuation and Intellectual Property Analysis, McGraw Hill, 2004. ISBN 0-07-142967-0
9. ^ a b Zelen M (1972) Length-biased sampling and biomedical problems. In Biometric Society Meeting, Dallas, Texas
10. ^ Lam FC (1985) Estimate of variance for harmonic mean half lives. J Pharm Sci 74(2) 229-231
11. ^ Cox DR (1969) Some sampling problems in technology. In: New developments in survey sampling. U L Johnson, H Smith eds. New York: Wiley Interscience
12. ^ Davidov O, Zelen M (2001) Referent sampling, family history and relative risk: the role of length‐biased sampling. Biostat 2(2): 173-181 doi: 10.1093/biostatistics/2.2.173
13. ^ Zelen M, Feinleib M (1969) On the theory of screening for chronic diseases. Biometrika 56: 601-614
14. ^ Akman O, Gamage J, Jannot J, Juliano S, Thurman A, Whitman D (2007) A simple test for detection of length-biased sampling. J Biostats 1 (2) 189-195
15. ^ a b Chuen-Teck See, Chen J (2008) Convex functions of random variables. J Inequal Pure Appl Math 9 (3) Art 80
16. ^ Gurland J (1967) An inequality satisfied by the expectation of the reciprocal of a random variable. The American Statistician. 21 (2) 24
17. ^ Sung SH (2010) On inverse moments for a class of nonnegative random variables. J Inequal Applic doi:10.1155/2010/823767
18. ^ Aitchison J, Brown JAC (1969). The lognormal distribution with special reference to its uses in economics. Cambridge University Press, New York
19. ^ Rossman LA (1990) Design stream flows based on harmonic means. J Hydr Eng ASCE 116(7) 946–950
20. ^ Stedinger JR (1980) Fitting lognormal distributions to hydrologic data. Water Resour Res 16(3) 481–490
21. ^ a b c Limbrunner JF, Vogel RM, Brown LC (2000) Estimation of harmonic mean of a lognormal variable. J Hydrol Eng 5(1) 59-66 [1]
22. ^ Johnson NL, Kotz S, Balakrishnan N (1994) Continuous univariate distributions Vol 1. Wiley Series in Probability and Statistics.
23. ^ EPA (1991) Technical support document for water quality-based toxics control. EPA/505/2-90-001. Office of Water
24. ^ Muskat M (1937) The flow of homogeneous fluids through porous media. McGraw-Hill, New York