This is an old revision of this page, as edited by Dmoews (talk | contribs) at 16:18, 20 December 2023 (collapse identical references). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
In probability theory, a sub-Gaussian distribution, the distribution of a sub-Gaussian random variable, is a probability distribution with strong tail decay. More specifically, the tails of a sub-Gaussian distribution are dominated by (i.e. decay at least as fast as) the tails of a Gaussian. This property gives sub-Gaussian distributions their name.
Formally, the probability distribution of a random variable $X$ is called sub-Gaussian if there is a positive constant $C$ such that for every $t \ge 0$,
$\operatorname{P}(|X| \ge t) \le 2\exp(-t^2/C^2)$.
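As a numerical illustration (not part of the original article), one can check that a standard normal variable satisfies the defining inequality with, for example, $C = \sqrt{2}$: its exact two-sided tail is $\operatorname{P}(|Z| \ge t) = \operatorname{erfc}(t/\sqrt{2})$, which can be compared against $2\exp(-t^2/2)$ on a grid of values.

```python
import math

def gaussian_two_sided_tail(t):
    """P(|Z| >= t) for Z ~ N(0, 1), via the complementary error function."""
    return math.erfc(t / math.sqrt(2))

def sub_gaussian_bound(t, C):
    """Right-hand side of the defining inequality: 2 exp(-t^2 / C^2)."""
    return 2 * math.exp(-t**2 / C**2)

# Check P(|Z| >= t) <= 2 exp(-t^2 / 2), i.e. the definition with C = sqrt(2),
# on a grid of t values.
C = math.sqrt(2)
ok = all(gaussian_two_sided_tail(t) <= sub_gaussian_bound(t, C)
         for t in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
print(ok)
```

The factor 2 in the bound is what allows the inequality to hold trivially near $t = 0$, where every two-sided tail probability is at most 1.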
Alternatively, a random variable $X$ is considered sub-Gaussian if its distribution function is bounded from above (up to a constant) by the distribution function of a Gaussian. Specifically, we say that $X$ is sub-Gaussian if for all $t \ge 0$ we have that
$\operatorname{P}(|X| \ge t) \le c\operatorname{P}(|Z| \ge t)$,
where $c \ge 1$ is a constant and $Z$ is a mean-zero Gaussian random variable.[1]: Theorem 2.6
Definitions
The sub-Gaussian norm of $X$, denoted as $\|X\|_{\psi_2}$, is defined by
$\|X\|_{\psi_2} = \inf\{ t > 0 : \operatorname{E}[\exp(X^2/t^2)] \le 2 \}$,
which is the Orlicz norm of $X$ generated by the Orlicz function $\Phi(u) = e^{u^2} - 1$. By the moment-generating-function condition below, sub-Gaussian random variables can be characterized as exactly those random variables with finite sub-Gaussian norm.
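For a Rademacher variable the norm can be computed in closed form, since $X^2 = 1$ almost surely gives $\operatorname{E}[\exp(X^2/t^2)] = e^{1/t^2}$, so the condition $e^{1/t^2} \le 2$ holds exactly when $t \ge 1/\sqrt{\ln 2}$. A small sketch (not from the original article) recovering this value from the definition by bisection:

```python
import math

def rademacher_psi2_moment(t):
    """E[exp(X^2 / t^2)] for Rademacher X: X^2 = 1 a.s., so this is exp(1/t^2)."""
    return math.exp(1.0 / t**2)

def psi2_norm_rademacher(tol=1e-12):
    """Bisect for inf{t > 0 : E[exp(X^2/t^2)] <= 2}."""
    lo, hi = 0.1, 10.0  # bracket chosen to avoid overflow in exp(1/t^2)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rademacher_psi2_moment(mid) <= 2:
            hi = mid   # condition holds; the infimum is at or below mid
        else:
            lo = mid   # condition fails; the infimum is above mid
    return hi

# Closed form: exp(1/t^2) <= 2  <=>  t >= 1/sqrt(ln 2) ~ 1.2011
print(psi2_norm_rademacher(), 1 / math.sqrt(math.log(2)))
```

The same bisection strategy works for any variable whose moment $\operatorname{E}[\exp(X^2/t^2)]$ can be evaluated, since that moment is decreasing in $t$.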
Sub-Gaussian properties
Let $X$ be a random variable. The following conditions are equivalent, with the parameters $K_1, K_2, K_3, c > 0$ differing from each other by at most an absolute constant factor:
Tail probability bound: $\operatorname{P}(|X| \ge t) \le 2\exp(-t^2/K_1^2)$ for all $t \ge 0$;
Moment condition: $\|X\|_p = (\operatorname{E}|X|^p)^{1/p} \le K_2\sqrt{p}$ for all $p \ge 1$;
Moment generating function of the square: $\operatorname{E}[\exp(X^2/K_3^2)] \le 2$;
Union bound condition: for some $c > 0$, $\operatorname{E}[\max\{|X_1|, \ldots, |X_n|\}] \le c\sqrt{\ln n}$ for all $n > c$, where $X_1, \ldots, X_n$ are i.i.d. copies of $X$.
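The moment condition can be checked exactly for the standard normal, using the classical formula $\operatorname{E}|Z|^p = 2^{p/2}\,\Gamma((p+1)/2)/\sqrt{\pi}$. The sketch below (an illustration, not from the original article) evaluates the ratio $\|Z\|_p/\sqrt{p}$ on a grid of $p$ values and observes that it stays bounded, consistent with the condition holding with $K_2 = 1$:

```python
import math

def abs_moment_normal(p):
    """E[|Z|^p] for Z ~ N(0, 1): 2^{p/2} * Gamma((p+1)/2) / sqrt(pi)."""
    return 2**(p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)

def lp_norm_over_sqrt_p(p):
    """(E|Z|^p)^{1/p} / sqrt(p); the moment condition says this is bounded."""
    return abs_moment_normal(p) ** (1.0 / p) / math.sqrt(p)

# The ratio is largest at p = 1 (where it equals sqrt(2/pi) ~ 0.798)
# and tends to 1/sqrt(e) ~ 0.607 as p grows.
ratios = [lp_norm_over_sqrt_p(p) for p in [1, 2, 4, 8, 16, 32, 64]]
print(max(ratios))
```

A bounded ratio is exactly what distinguishes sub-Gaussian moment growth ($\sqrt{p}$) from, say, sub-exponential growth (linear in $p$).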
Examples
A standard normal random variable is a sub-Gaussian random variable.
Let $X$ be a random variable with symmetric Bernoulli distribution (or Rademacher distribution). That is, $X$ takes values $-1$ and $+1$ with probability $1/2$ each. Since $\operatorname{E}[e^{sX}] = \cosh s \le e^{s^2/2}$ for all real $s$, it follows that $X$ satisfies the same moment generating function bound as a standard Gaussian, and hence $X$ is a sub-Gaussian random variable.
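The inequality $\cosh s \le e^{s^2/2}$ used here holds term by term in the power series, since $\cosh s = \sum_k s^{2k}/(2k)!$ and $e^{s^2/2} = \sum_k s^{2k}/(2^k k!)$ with $(2k)! \ge 2^k k!$. A quick numerical confirmation (an illustration, not from the original article):

```python
import math

# cosh(s) = sum s^{2k}/(2k)!  and  e^{s^2/2} = sum s^{2k}/(2^k k!);
# since (2k)! >= 2^k k!, the inequality holds term by term, so it
# should hold at every sample point below.
ok = all(math.cosh(s) <= math.exp(s**2 / 2)
         for s in [x / 10 for x in range(-50, 51)])
print(ok)
```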
Maximum of Sub-Gaussian Random Variables
Consider a finite collection of sub-Gaussian random variables, $X_1, \ldots, X_n$, with corresponding sub-Gaussian parameters $\sigma_1, \ldots, \sigma_n$. The random variable $M_n = \max(X_1, \ldots, X_n)$ represents the maximum of this collection. The expectation can be bounded above by $\operatorname{E}[M_n] \le \sqrt{2\sigma^2 \ln n}$, where $\sigma = \max_i \sigma_i$. Note that no independence assumption is needed to form this bound.[1]
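A Monte Carlo sketch (an illustration, not from the original article) of this bound for i.i.d. standard normals, where $\sigma = 1$ and the bound reads $\operatorname{E}[M_n] \le \sqrt{2\ln n}$; the empirical mean of the maximum should fall below the bound:

```python
import math
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def mean_max_gaussians(n, trials=2000):
    """Monte Carlo estimate of E[max of n i.i.d. standard normals]."""
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(0.0, 1.0) for _ in range(n))
    return total / trials

n = 500
bound = math.sqrt(2 * math.log(n))   # sqrt(2 sigma^2 ln n) with sigma = 1
estimate = mean_max_gaussians(n)
print(estimate, bound)
```

The estimate sits noticeably below the bound; the $\sqrt{2\ln n}$ rate is sharp for independent Gaussians only up to lower-order terms.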
^Vershynin, R. (2018). High-dimensional probability: An introduction with applications in data science. Cambridge: Cambridge University Press. pp. 33–34.
Rudelson, Mark; Vershynin, Roman (2010). "Non-asymptotic theory of random matrices: extreme singular values". Proceedings of the International Congress of Mathematicians 2010. pp. 1576–1602. arXiv:1003.2990. doi:10.1142/9789814324359_0111.
Zajkowski, K. (2020). "On norms in some class of exponential type Orlicz spaces of random variables". Positivity. 24 (5): 1231–1240. arXiv:1709.02970. doi:10.1007/s11117-019-00729-6.