Big O in probability notation
The order in probability notation is used in probability theory and statistical theory in direct parallel to the big-O notation that is standard in mathematics. Where the big-O notation deals with the convergence of sequences of ordinary numbers, the order in probability notation deals with the convergence of sequences of random variables, where convergence is in the sense of convergence in probability.
For a set of random variables Xn and a corresponding set of constants an (both indexed by n, which need not be discrete), the notation

    Xn = op(an)

means that the set of values Xn/an converges to zero in probability as n approaches an appropriate limit. Equivalently, Xn = op(an) can be written as Xn/an = op(1), where Xn = op(1) is defined as

    lim(n→∞) Pr(|Xn| ≥ ε) = 0

for every positive ε.
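As an illustration (not part of the source), a small Monte Carlo sketch can make the op(1) definition concrete: for the sample mean X̄n of n i.i.d. Uniform(0,1) draws, X̄n − 1/2 = op(1), so the estimated probability Pr(|X̄n − 1/2| ≥ ε) should shrink toward zero as n grows. The function name and parameters below are illustrative choices, not from the source.

```python
import random

def prob_exceeds(n, eps, trials=2000, seed=0):
    """Estimate Pr(|mean of n Uniform(0,1) draws - 0.5| >= eps)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        m = sum(rng.random() for _ in range(n)) / n
        if abs(m - 0.5) >= eps:
            count += 1
    return count / trials

# The exceedance probability falls toward 0 as n grows,
# which is exactly the op(1) statement for the centered sample mean.
for n in (10, 100, 1000):
    print(n, prob_exceeds(n, eps=0.05))
```

Here op(1) is an asymptotic statement; the simulation only exhibits the trend for a few finite n.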
The notation

    Xn = Op(an)

means that the set of values Xn/an is stochastically bounded. That is, for any ε > 0, there exists a finite M > 0 and a finite N > 0 such that

    Pr(|Xn/an| > M) < ε   for every n > N.
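To complement the definition, a hedged simulation sketch: for Sn the sum of n i.i.d. Uniform(0,1) draws, the centered sum satisfies Sn − n/2 = Op(√n), and Chebyshev's inequality even supplies an explicit M for a given ε. The helper name and parameter choices below are illustrative, not from the source.

```python
import random

def exceed_fraction(n, M, trials=2000, seed=1):
    """Fraction of trials in which |S_n - n*mu| / sqrt(n) > M,
    where S_n is a sum of n Uniform(0,1) draws (mu = 0.5, var = 1/12)."""
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        s = sum(rng.random() for _ in range(n))
        if abs(s - 0.5 * n) / n ** 0.5 > M:
            count += 1
    return count / trials

# Chebyshev: Pr(|S_n - n*mu|/sqrt(n) > M) <= (1/12) / M**2,
# a bound that does not depend on n -- the Op(sqrt(n)) statement.
eps = 0.05
M = (1 / (12 * eps)) ** 0.5   # choose M so the Chebyshev bound equals eps
for n in (10, 100, 1000):
    print(n, exceed_fraction(n, M))
```

Note the contrast with op: for Op the exceedance probability only needs to stay below ε uniformly in n, not tend to zero.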
If (Xn) is a stochastic sequence such that each element has finite variance, then

    Xn = Op(E[Xn] + √Var(Xn))

(see Theorem 14.4-1 in Bishop et al.)
If, moreover, Var(Xn)/an² = Var(Xn/an) is a null sequence for a sequence (an) of real numbers, then (Xn − E[Xn])/an converges to zero in probability by Chebyshev's inequality, so

    Xn − E[Xn] = op(an).
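The Chebyshev step can be written out explicitly (a routine verification, spelled out here rather than taken from the source): for every ε > 0,

```latex
\Pr\!\left( \left| \frac{X_n - \mathrm{E}[X_n]}{a_n} \right| \ge \varepsilon \right)
  \;\le\; \frac{\operatorname{Var}(X_n)}{\varepsilon^2 a_n^2}
  \;\longrightarrow\; 0,
\qquad \text{hence } X_n - \mathrm{E}[X_n] = o_p(a_n).
```

The limit on the right is exactly the null-sequence hypothesis, so the exceedance probability vanishes for every fixed ε.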