# Bennett's inequality

In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962.[1]

Let X1, …, Xn be independent random variables, and assume (for simplicity, but without loss of generality) that they all have zero expected value. Further assume |Xi| ≤ a almost surely for all i, and let

$\sigma^2 = \frac1n \sum_{i=1}^n \operatorname{Var}(X_i).$

Then for any t ≥ 0,

$\Pr\left( \sum_{i=1}^n X_i > t \right) \leq \exp\left( - \frac{n\sigma^2}{a^2} h\left(\frac{at}{n\sigma^2} \right)\right),$

where $h(u) = (1 + u)\log(1 + u) - u$.[2]
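To illustrate how the right-hand side is evaluated, the following Python sketch (not part of the original statement; the function name `bennett_bound` and the centered-Bernoulli example are illustrative assumptions) computes the bound for given n, average variance σ², bound a, and deviation t.

```python
import math

def bennett_bound(n, sigma2, a, t):
    """Bennett's upper bound on Pr(sum_i X_i > t) for independent,
    zero-mean X_i with |X_i| <= a a.s. and average variance sigma2."""
    h = lambda u: (1 + u) * math.log(1 + u) - u
    return math.exp(-(n * sigma2 / a**2) * h(a * t / (n * sigma2)))

# Illustrative example (assumed, not from the source): n = 100 centered
# Bernoulli(0.1) variables, so each X_i takes values -0.1 or 0.9, giving
# a = 0.9 and Var(X_i) = 0.09; we bound the probability of a deviation t = 10.
print(bennett_bound(n=100, sigma2=0.09, a=0.9, t=10))  # ~0.014
```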

For a martingale version of Bennett's inequality and an improvement of it, see Freedman (1975)[3] and Fan et al. (2012),[4] respectively.