# Seven states of randomness

*Figure: a stochastic process with random increments drawn from a symmetric stable distribution with α = 1.7; note the discontinuous jumps.*
*Figure: a stochastic process with random increments drawn from a standard normal distribution.*

The seven states of randomness in probability theory, fractals and risk analysis are extensions of the concept of randomness as modeled by the normal distribution. These seven states were first introduced by Benoît Mandelbrot in his 1997 book Fractals and Scaling in Finance, which applied fractal analysis to the study of risk and randomness.[1] This classification builds upon the three main states of randomness: mild, slow, and wild.

The importance of the seven-states classification for mathematical finance is that methods such as Markowitz mean–variance portfolio theory and the Black–Scholes model may be invalidated as the tails of the distribution of returns fatten: the former relies on a finite standard deviation (volatility) and stable correlations, while the latter is built on Brownian motion.

## History

These seven states build on earlier work of Mandelbrot in 1963: "The variations of certain speculative prices"[2] and "New methods in statistical economics",[3] in which he argued that most statistical models approached only a first stage of dealing with indeterminism in science, and that they ignored many aspects of real-world turbulence, in particular most cases of financial modelling.[4][5] Mandelbrot then presented this work at the International Congress for Logic (1964) in an address titled "The Epistemology of Chance in Certain Newer Sciences".[6]

Intuitively speaking, Mandelbrot argued[6] that the traditional normal distribution does not properly capture empirical and "real world" distributions and there are other forms of randomness that can be used to model extreme changes in risk and randomness. He observed that randomness can become quite "wild" if the requirements regarding finite mean and variance are abandoned. Wild randomness corresponds to situations in which a single observation, or a particular outcome can impact the total in a very disproportionate way.

*Figure: random draws from an exponential distribution with mean 1 (borderline mild randomness).*
*Figure: random draws from a lognormal distribution with mean 1 (slow randomness with finite and localized moments).*
*Figure: random draws from a Pareto distribution with mean 1 and α = 1.5 (wild randomness).*

The classification was formally introduced in his 1997 book Fractals and Scaling in Finance,[1] as a way to bring insight into the three main states of randomness: mild, slow, and wild. Given N addends, portioning concerns the relative contribution of the addends to their sum. By even portioning, Mandelbrot meant that the addends were of the same order of magnitude; otherwise he considered the portioning to be concentrated. Given the moment of order q of a random variable, Mandelbrot called the root of degree q of such a moment the scale factor (of order q).

The seven states are:

1. Proper mild randomness: short-run portioning is even for N = 2, e.g. the normal distribution
2. Borderline mild randomness: short-run portioning is concentrated for N = 2, but eventually becomes even as N grows, e.g. the exponential distribution with rate λ = 1 (and so with expected value 1/λ = 1)
3. Slow randomness with finite and delocalized moments: scale factor increases faster than q but no faster than ${\displaystyle {\sqrt[{w}]{q}}}$, w < 1
4. Slow randomness with finite and localized moments: scale factor increases faster than any power of q, but remains finite, e.g. the lognormal distribution
5. Pre-wild randomness: scale factor becomes infinite for some q > 2, e.g. the Pareto distribution with α = 2.5
6. Wild randomness: infinite second moment, but finite moment of some positive order, e.g. the Pareto distribution with α = 1.5
7. Extreme randomness: all moments are infinite, e.g. the Pareto distribution with ${\displaystyle \alpha \leq 1}$
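The practical difference between these states can be sketched numerically: under mild randomness the largest of N addends contributes a negligible share of their sum, while under wild randomness a single observation can dominate it. A minimal illustration (not from the source), using NumPy samplers for three of the example distributions above, each scaled to mean 1:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Draws from three of the example distributions above, each scaled to mean 1.
# The Pareto draw shifts NumPy's Lomax sampler to a standard Pareto with
# minimum 1 (mean alpha/(alpha-1) = 3 for alpha = 1.5), then divides by 3.
samples = {
    "normal (proper mild)": np.abs(rng.normal(loc=1.0, scale=1.0, size=N)),
    "exponential (borderline mild)": np.abs(rng.exponential(scale=1.0, size=N)),
    "Pareto alpha=1.5 (wild)": (rng.pareto(1.5, size=N) + 1) / 3,
}

for name, x in samples.items():
    # Share of the total contributed by the single largest observation:
    # negligible under mild randomness, disproportionate under wild randomness.
    print(f"{name}: max/sum = {x.max() / x.sum():.5f}")
```

The max/sum ratio for the Pareto sample is typically orders of magnitude larger than for the two mild cases, which is the "disproportionate impact of a single observation" described above.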

Wild randomness has applications outside financial markets, e.g. it has been used in the analysis of turbulent situations such as wild forest fires.[7]

Using elements of this distinction, in March 2006, a year before the Financial crisis of 2007–2010, and four years before the Flash crash of May 2010, during which the Dow Jones Industrial Average had a 1,000 point intraday swing within minutes,[8] Mandelbrot and Nassim Taleb published an article in the Financial Times arguing that the traditional "bell curves" that have been in use for over a century are inadequate for measuring risk in financial markets, given that such curves disregard the possibility of sharp jumps or discontinuities. Contrasting this approach with the traditional approaches based on random walks, they stated:[9]

> We live in a world primarily driven by random jumps, and tools designed for random walks address the wrong problem.

Mandelbrot and Taleb pointed out that although the odds of finding a person who is several miles tall are extremely low, similarly extreme observations cannot be excluded in other areas of application. They argued that while traditional bell curves may provide a satisfactory representation of height and weight in the population, they do not provide a suitable modeling mechanism for market risks or returns, where just ten trading days represent 63 per cent of the returns of the past 50 years.

## Definitions

### Doubling convolution

If U′ and U″ are independent random variables, each with probability density p, then the density ${\displaystyle p_{2}(u)}$ of their sum ${\displaystyle U=U'+U''}$ is obtained by the doubling convolution ${\displaystyle p_{2}(u)=\int p(u')p(u-u')\,du'}$.
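As a numerical sketch (not from the source), the doubling convolution can be checked on a grid for the unit exponential density, whose self-convolution is known in closed form (the sum of two independent unit exponentials is Gamma(2, 1)):

```python
import numpy as np

# Grid approximation of the doubling convolution p2 = p * p
# for the unit exponential density p(u) = exp(-u), u >= 0.
du = 0.001
u = np.arange(0, 20, du)
p = np.exp(-u)

# Discrete convolution; multiply by the grid step to approximate the integral.
p2 = np.convolve(p, p)[: len(u)] * du

# Closed form: the sum of two independent unit exponentials is Gamma(2, 1),
# with density u * exp(-u).
p2_exact = u * np.exp(-u)
print(np.max(np.abs(p2 - p2_exact)))  # discretization error on the order of du
```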

### Short run portioning ratio

When u is known, the conditional probability density of u' is given by the portioning ratio:

${\displaystyle {\frac {p(u')p(u-u')}{p_{2}(u)}}}$

### Concentration in mode

In many important cases, the maximum of p(u′)p(u−u′) occurs either near u′ = u/2, or near the endpoints u′ = 0 and u′ = u. Take the logarithm of p(u′)p(u−u′) and write:

${\displaystyle \Delta (u)=2\log p(u/2)-[\log p(0)+\log p(u)]}$

• If log p(u) is cap-convex, the portioning ratio is maximum for u'=u/2
• If log p(u) is straight, the portioning ratio is a constant
• If log p(u) is cup-convex, the portioning ratio is minimum for u'=u/2
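These three cases can be checked directly from the definition of Δ(u). A small sketch (the stretched-exponential density ${\displaystyle e^{-{\sqrt {u}}}}$ is chosen here as a convenient cup-convex example, not taken from the source; normalizing constants cancel in Δ, so they are omitted):

```python
import numpy as np

def delta(logp, u):
    """Delta(u) = 2 log p(u/2) - [log p(0) + log p(u)]."""
    return 2 * logp(u / 2) - (logp(0.0) + logp(u))

# Three half-line log-densities illustrating the three cases:
log_gauss = lambda u: -u**2 / 2        # cap-convex  -> Delta > 0
log_expo = lambda u: -u                # straight    -> Delta = 0
log_stretch = lambda u: -np.sqrt(u)    # cup-convex  -> Delta < 0

u = 4.0
print(delta(log_gauss, u))    # positive: portioning ratio peaks at u' = u/2
print(delta(log_expo, u))     # zero: portioning ratio is constant
print(delta(log_stretch, u))  # negative: mass concentrates near u'=0 or u'=u
```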

### Concentration in probability

Splitting the doubling convolution into three parts gives:

${\displaystyle p_{2}(u)=\int _{0}^{u}p(u')p(u-u')\,du'=\left\{\int _{0}^{\tilde {u}}+\int _{\tilde {u}}^{u-{\tilde {u}}}+\int _{u-{\tilde {u}}}^{u}\right\}p(u')p(u-u')\,du'=I_{L}+I_{0}+I_{R}}$

p(u) is short-run concentrated in probability if it is possible to select ${\displaystyle {\tilde {u}}(u)}$ so that the middle interval ${\displaystyle ({\tilde {u}},u-{\tilde {u}})}$ has the following two properties as u → ∞:

• I0/p2(u) → 0
• ${\displaystyle (u-2{\tilde {u}})/u}$ does not → 0
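As a sketch of this definition (not from the source), the split can be evaluated numerically for the wild Pareto density with α = 1.5 and minimum 1, choosing ũ(u) = 0.1u so that (u − 2ũ)/u = 0.8 stays away from 0; the middle share I₀/p₂(u) then tends to 0, i.e. one of the two addends takes nearly all of the sum:

```python
import numpy as np

ALPHA = 1.5

def p(u):
    # Pareto density with minimum 1 and tail index alpha (a wild example).
    u = np.asarray(u, dtype=float)
    return np.where(u >= 1.0, ALPHA * np.maximum(u, 1.0) ** (-ALPHA - 1.0), 0.0)

def split(x, n=400_000):
    # Numerically split p2(x) = \int p(u') p(x - u') du' at u_tilde = 0.1 x;
    # x plays the role of the sum u in the text.
    u_tilde = 0.1 * x
    u = np.linspace(0.0, x, n)
    du = u[1] - u[0]
    f = p(u) * p(x - u)
    i_left = f[u < u_tilde].sum() * du
    i_mid = f[(u >= u_tilde) & (u <= x - u_tilde)].sum() * du
    i_right = f[u > x - u_tilde].sum() * du
    return i_left, i_mid, i_right

for x in (20.0, 200.0, 2000.0):
    il, i0, ir = split(x)
    # The middle share shrinks as x grows: concentration in probability.
    print(f"x = {x:6.0f}: I0 / p2(x) = {i0 / (il + i0 + ir):.5f}")
```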

### Localized and delocalized moments

Consider the formula ${\displaystyle \operatorname {E} [U^{q}]=\int _{0}^{\infty }u^{q}p(u)du}$. If p(u) is a scaling distribution, the integrand is maximal at 0 and ∞; in other cases, the integrand may have a sharp global maximum at some value ${\displaystyle {\tilde {u}}_{q}}$ defined by the following equation:

${\displaystyle 0={\frac {d}{du}}(q\log u+\log p(u))={\frac {q}{u}}-|{\frac {d\log p(u)}{du}}|}$

One must also know ${\displaystyle u^{q}p(u)}$ in the neighborhood of ${\displaystyle {\tilde {u}}_{q}}$. The function ${\displaystyle u^{q}p(u)}$ often admits a "Gaussian" approximation given by:

${\displaystyle \log[u^{q}p(u)]=q\log u+\log p(u)={\text{constant}}-{\frac {(u-{\tilde {u}}_{q})^{2}}{2{\tilde {\sigma }}_{q}^{2}}}}$

When ${\displaystyle u^{q}p(u)}$ is well-approximated by a Gaussian density, the bulk of ${\displaystyle \operatorname {E} [U^{q}]}$ originates in the "q-interval" defined as ${\displaystyle [{\tilde {u}}_{q}-{\tilde {\sigma }}_{q},{\tilde {u}}_{q}+{\tilde {\sigma }}_{q}]}$. The Gaussian q-intervals greatly overlap for all values of q; the Gaussian moments are called delocalized. The lognormal's q-intervals are uniformly spaced on a logarithmic scale and their width is independent of q; therefore, if the lognormal is sufficiently skew, the q-interval and (q+1)-interval do not overlap, and the lognormal moments are called uniformly localized. In other cases, neighboring q-intervals cease to overlap only for sufficiently high q; such moments are called asymptotically localized.
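For the lognormal the maximizer ${\displaystyle {\tilde {u}}_{q}}$ has a closed form: with parameters μ and σ, setting the derivative above to zero gives ${\displaystyle \log {\tilde {u}}_{q}=\mu +(q-1)\sigma ^{2}}$, so successive maxima are evenly spaced in log u. A quick numerical sketch (μ = 0, σ = 1 assumed) confirms this:

```python
import numpy as np

MU, SIGMA = 0.0, 1.0
t = np.linspace(-10.0, 10.0, 2_000_001)  # t = log u

for q in (1, 2, 3, 4):
    # log[u^q p(u)] for the lognormal, up to constants, written in t = log u:
    # (q - 1) t - (t - mu)^2 / (2 sigma^2).
    f = (q - 1) * t - (t - MU) ** 2 / (2 * SIGMA**2)
    t_max = t[np.argmax(f)]
    # Closed form: log u_q = mu + (q - 1) sigma^2 -- the maxima are uniformly
    # spaced on a log scale, hence "uniformly localized" moments.
    print(f"q={q}: argmax of log u = {t_max:.3f} "
          f"(closed form {MU + (q - 1) * SIGMA**2:.3f})")
```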

## The seven states of randomness

• Proper mild randomness: Short-run portioning is even for N=2.
• Borderline mild randomness: Short-run portioning is concentrated for N=2, but becomes even when N exceeds some finite threshold.
• Slow randomness with finite and delocalized moments: loosely characterized either by ${\displaystyle P^{-1}}$ increasing faster than ${\displaystyle |\log x|}$ but no faster than ${\displaystyle |\log x|^{1/w}}$, with w < 1, or by ${\displaystyle [\operatorname {E} U^{q}]^{1/q}}$ increasing faster than q but no faster than ${\displaystyle q^{1/w}}$.
• Slow randomness with finite and localized moments: loosely characterized either by ${\displaystyle P^{-1}}$ increasing faster than any power of ${\displaystyle |\log x|}$ but less rapidly than any function of the form ${\displaystyle e^{|\log x|^{\gamma }}}$ with γ < 1, or by ${\displaystyle [\operatorname {E} U^{q}]^{1/q}}$ increasing faster than any power of q, but remaining finite.
• Pre-wild randomness: loosely characterized either by ${\displaystyle P^{-1}}$ increasing more rapidly than any function of the form ${\displaystyle e^{|\log x|^{\gamma }}}$ with γ < 1 but less rapidly than ${\displaystyle x^{-1/2}}$, or by ${\displaystyle [\operatorname {E} U^{q}]^{1/q}}$ being infinite when q ≥ α > 2.
• Wild randomness: characterized by ${\displaystyle \operatorname {E} [U^{2}]=\infty }$, but ${\displaystyle \operatorname {E} [U^{q}]<\infty }$ for some q > 0, however small.
• Extreme randomness: characterized by ${\displaystyle \operatorname {E} [U^{q}]=\infty }$ for all q > 0.
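The boundaries between the last three states can be made concrete with the Pareto family, whose moments have a simple closed form: for minimum 1 and tail index α, ${\displaystyle \operatorname {E} [U^{q}]=\alpha /(\alpha -q)}$ when q < α, and the moment is infinite otherwise. A short sketch (the helper name is ours, not from the source):

```python
import math

def pareto_moment(alpha, q):
    # E[U^q] for the Pareto distribution with minimum 1 and tail index alpha:
    # finite only when q < alpha, in which case it equals alpha / (alpha - q).
    return alpha / (alpha - q) if q < alpha else math.inf

# Pre-wild (alpha = 2.5): finite variance, but high moments blow up.
print([pareto_moment(2.5, q) for q in (1, 2, 3)])
# Wild (alpha = 1.5): finite mean, infinite second moment.
print([pareto_moment(1.5, q) for q in (1, 2)])
# Extreme-type behavior (alpha = 0.5): even the mean is infinite.
print([pareto_moment(0.5, q) for q in (1, 2)])
```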