# Freedman–Diaconis rule

In statistics, the Freedman–Diaconis rule can be used to select the width of the bins to be used in a histogram.[1] It is named after David A. Freedman and Persi Diaconis.

For a set of empirical measurements sampled from some probability distribution, the Freedman–Diaconis rule is designed to approximately minimize the integral of the squared difference between the histogram (i.e., the relative frequency density) and the density of the theoretical probability distribution.

In detail, the Integrated Mean Squared Error (IMSE) is

${\displaystyle {\text{IMSE}}=E\left[\int _{I}(H(x)-f(x))^{2}\,dx\right]}$

where ${\displaystyle H}$ is the histogram approximation of ${\displaystyle f}$ on the interval ${\displaystyle I}$ computed with ${\displaystyle n}$ data points sampled from the distribution ${\displaystyle f}$. ${\displaystyle E[\cdot ]}$ denotes the expectation across many independent draws of ${\displaystyle n}$ data points. Under mild conditions, namely that ${\displaystyle f}$ and its first two derivatives are ${\displaystyle L^{2}}$, Freedman and Diaconis show that the integral is minimised by choosing the bin width

${\displaystyle h^{*}=\left(6/\int _{-\infty }^{\infty }f'(x)^{2}dx\right)^{1/3}n^{-1/3}}$

This formula was derived earlier by Scott.[2] Swapping the order of the integration and expectation is justified by Fubini's theorem. The Freedman–Diaconis rule is derived by assuming that ${\displaystyle f}$ is a normal distribution, making it an example of a normal reference rule. In this case ${\displaystyle \int f'(x)^{2}\,dx=(4{\sqrt {\pi }}\sigma ^{3})^{-1}}$.[3]
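Substituting this value for the normal density into the expression for ${\displaystyle h^{*}}$ recovers the normal-reference bin width of Scott's rule:

${\displaystyle h^{*}=\left(6\cdot 4{\sqrt {\pi }}\,\sigma ^{3}\right)^{1/3}n^{-1/3}=(24{\sqrt {\pi }})^{1/3}\,\sigma \,n^{-1/3}\approx 3.49\,\sigma \,n^{-1/3}.}$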

Freedman and Diaconis use the interquartile range to estimate the standard deviation: ${\displaystyle \sigma \approx {\text{IQR}}/\left(\Phi ^{-1}(0.75)-\Phi ^{-1}(0.25)\right)}$,[4] where ${\displaystyle \Phi }$ is the cumulative distribution function of the standard normal density. This gives the rule

${\displaystyle {\text{Bin width}}=2\,{{\text{IQR}}(x) \over {\sqrt[{3}]{n}}}}$

where ${\displaystyle \operatorname {IQR} (x)}$ is the interquartile range of the data and ${\displaystyle n}$ is the number of observations in the sample ${\displaystyle x}$. In fact, if the normal density is used, the factor in front comes out to be approximately 2.59,[4] but 2 is the factor recommended by Freedman and Diaconis.
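As a sketch, the rule is straightforward to compute with NumPy; the helper name `fd_bin_width` is of my choosing, while NumPy's built-in `bins='fd'` option targets the same width when choosing bin edges:

```python
import numpy as np

def fd_bin_width(x):
    """Freedman-Diaconis bin width: 2 * IQR(x) / n^(1/3)."""
    x = np.asarray(x)
    q75, q25 = np.percentile(x, [75, 25])
    return 2.0 * (q75 - q25) / np.cbrt(x.size)

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)

h = fd_bin_width(x)

# NumPy's 'fd' estimator uses the same target width, then rounds
# the bin count up so the bins exactly cover the data range.
edges = np.histogram_bin_edges(x, bins="fd")
```

The edge spacing NumPy returns is slightly narrower than ${\displaystyle h}$ because the bin count is rounded up to an integer.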

## Other approaches

With the factor 2 replaced by approximately 2.59, the Freedman–Diaconis rule asymptotically matches Scott's rule for data sampled from a normal distribution.

Another approach is to use Sturges's rule: choose a bin width so that there are about ${\displaystyle 1+\log _{2}n}$ non-empty bins. However, this approach is not recommended when the number of data points is large.[4] For a discussion of the many alternative approaches to bin selection, see Birgé and Rozenholc.[5]
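The gap between the two rules can be illustrated by comparing their implied bin counts on normal samples (a sketch; variable names are of my choosing). Sturges's logarithmic count grows far more slowly with ${\displaystyle n}$ than the Freedman–Diaconis count:

```python
import numpy as np

rng = np.random.default_rng(1)

for n in (100, 100_000):
    x = rng.normal(size=n)
    # Sturges: about 1 + log2(n) bins, independent of the data's spread.
    sturges_bins = int(np.ceil(1 + np.log2(n)))
    # Freedman-Diaconis: width 2*IQR/n^(1/3), converted to a bin count.
    q75, q25 = np.percentile(x, [75, 25])
    fd_width = 2.0 * (q75 - q25) / np.cbrt(n)
    fd_bins = int(np.ceil((x.max() - x.min()) / fd_width))
    print(f"n={n}: Sturges {sturges_bins} bins, FD {fd_bins} bins")
```

For large samples Sturges's rule yields far fewer bins, which over-smooths the histogram; this is why it is not recommended when ${\displaystyle n}$ is large.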

## References

1. Freedman, David; Diaconis, Persi (December 1981). "On the histogram as a density estimator: L2 theory". Probability Theory and Related Fields. 57 (4): 453–476. CiteSeerX 10.1.1.650.2473. doi:10.1007/BF01025868. ISSN 0178-8051. S2CID 14437088.
2. Scott, D.W. (1979). "On optimal and data-based histograms". Biometrika. 66 (3): 605–610. doi:10.1093/biomet/66.3.605. JSTOR 2335182.
3. Scott, D.W. (2009). "Sturges' rule". WIREs Computational Statistics. 1 (3): 303–306. doi:10.1002/wics.35. S2CID 197483064.
4. Scott, D.W. (2010). "Scott's Rule". Wiley Interdisciplinary Reviews: Computational Statistics. 2 (4). Wiley: 497–502. doi:10.1002/wics.103.
5. Birgé, L.; Rozenholc, Y. (2006). "How many bins should be put in a regular histogram". ESAIM: Probability and Statistics. 10: 24–45. CiteSeerX 10.1.1.3.220. doi:10.1051/ps:2006001.