Marcinkiewicz–Zygmund inequality

In mathematics, the Marcinkiewicz–Zygmund inequality, named after Józef Marcinkiewicz and Antoni Zygmund, gives relations between moments of a collection of independent random variables. It is a generalization of the rule for the sum of variances of independent random variables to moments of arbitrary order.

Statement of the inequality

Theorem [1][2] If $x_i$, $i=1,\ldots,n$, are independent random variables such that $E(x_i)=0$ and $E\left(|x_i|^p\right)<+\infty$, where $1\leq p<+\infty$, then

$$A_p\, E\left(\left(\sum_{i=1}^{n} |x_i|^2\right)^{p/2}\right) \leq E\left(\left|\sum_{i=1}^{n} x_i\right|^p\right) \leq B_p\, E\left(\left(\sum_{i=1}^{n} |x_i|^2\right)^{p/2}\right)$$

where $A_p$ and $B_p$ are positive constants that depend only on $p$ and not on the distributions of the $x_i$.
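The boundedness of the ratio of the two moments can be illustrated numerically. The following sketch (not part of the standard references, and with the Rademacher example chosen purely for illustration) takes $x_i = \pm 1$ with probability $1/2$ each, so that $\sum_i |x_i|^2 = n$ exactly, and computes the ratio $E\left(|\sum_i x_i|^p\right) / n^{p/2}$ for $p = 4$ by enumerating all $2^n$ sign patterns. The ratio stays bounded as $n$ grows, as the inequality promises:

```python
from itertools import product

def moment_ratio(n, p=4):
    """E(|sum x_i|^p) / E((sum x_i^2)^{p/2}) for n Rademacher variables.

    For x_i = +/-1, sum x_i^2 = n exactly, so the denominator is n^{p/2}.
    The numerator is computed exactly by enumerating all 2^n sign patterns.
    """
    total = 0.0
    for signs in product((-1, 1), repeat=n):
        total += abs(sum(signs)) ** p
    left = total / 2 ** n      # exact E(|S_n|^p), S_n = x_1 + ... + x_n
    right = n ** (p / 2)       # exact E((sum |x_i|^2)^{p/2}) = n^{p/2}
    return left / right

# The ratio is sandwiched between constants independent of n
# (for Rademacher variables and p = 4 it tends to 3 as n grows).
for n in (2, 5, 10):
    assert 1.0 <= moment_ratio(n) <= 3.0
```

For this family one can check the values against the exact formula $E(S_n^4) = 3n^2 - 2n$, e.g. the ratio is $2$ at $n=2$ and $2.8$ at $n=10$.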

The second-order case

In the case $p=2$, the inequality holds with $A_2=B_2=1$, and it reduces to the rule for the sum of variances of independent random variables with zero mean, known from elementary statistics: if $E(x_i)=0$ and $E\left(|x_i|^2\right)<+\infty$, then

$$\operatorname{Var}\left(\sum_{i=1}^{n} x_i\right) = E\left(\left|\sum_{i=1}^{n} x_i\right|^2\right) = \sum_{i=1}^{n}\sum_{j=1}^{n} E\left(x_i \overline{x}_j\right) = \sum_{i=1}^{n} E\left(|x_i|^2\right) = \sum_{i=1}^{n} \operatorname{Var}(x_i).$$
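This identity can be verified exactly on small discrete distributions. The sketch below (an illustration only; the particular three distributions are made up for the example) builds the full joint distribution of independent zero-mean variables as the product of their marginals and checks that the variance of the sum equals the sum of the variances:

```python
from itertools import product

# Each variable: a list of (value, probability) pairs, each with mean zero.
dists = [
    [(-1.0, 0.5), (1.0, 0.5)],                # Rademacher
    [(-2.0, 0.25), (0.0, 0.5), (2.0, 0.25)],
    [(-3.0, 0.1), (1.0 / 3.0, 0.9)],          # mean 0: -0.3 + 0.3
]

def var(dist):
    """Variance of a finite discrete distribution."""
    m = sum(v * p for v, p in dist)
    return sum((v - m) ** 2 * p for v, p in dist)

# Variance of the sum, from the exact joint distribution: by independence
# the joint probability of an outcome is the product of the marginals.
mean_sum = 0.0
second_moment = 0.0
for outcome in product(*dists):
    prob = 1.0
    s = 0.0
    for v, p in outcome:
        prob *= p
        s += v
    mean_sum += s * prob
    second_moment += s ** 2 * prob
var_sum = second_moment - mean_sum ** 2

# Var(sum) equals sum of Var, matching the p = 2 identity.
assert abs(var_sum - sum(var(d) for d in dists)) < 1e-9
```

Here the individual variances are $1$, $2$, and $1$, so both sides equal $4$.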