# Rényi entropy

In information theory, the Rényi entropy generalizes the Hartley entropy, the Shannon entropy, the collision entropy and the min-entropy. Entropies quantify the diversity, uncertainty, or randomness of a system. The entropy is named after Alfréd Rényi.[1] In the context of fractal dimension estimation, the Rényi entropy forms the basis of the concept of generalized dimensions.

The Rényi entropy is important in ecology and statistics as an index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly by virtue of the fact that it is an automorphic function with respect to a particular subgroup of the modular group.[2][3] In theoretical computer science, the min-entropy is used in the context of randomness extractors.

## Definition

The Rényi entropy of order ${\displaystyle \alpha }$, where ${\displaystyle \alpha \geq 0}$ and ${\displaystyle \alpha \neq 1}$, is defined as

${\displaystyle \mathrm {H} _{\alpha }(X)={\frac {1}{1-\alpha }}\log {\Bigg (}\sum _{i=1}^{n}p_{i}^{\alpha }{\Bigg )}}$ .[1]

Here, ${\displaystyle X}$ is a discrete random variable with possible outcomes in the set ${\displaystyle {\mathcal {A}}=\{x_{1},x_{2},...,x_{n}\}}$ and corresponding probabilities ${\displaystyle p_{i}\doteq \Pr(X=x_{i})}$ for ${\displaystyle i=1,\dots ,n}$. The logarithm is conventionally taken to be base 2, especially in the context of information theory where bits are used. If the probabilities are ${\displaystyle p_{i}=1/n}$ for all ${\displaystyle i=1,\dots ,n}$, then all the Rényi entropies of the distribution are equal: ${\displaystyle \mathrm {H} _{\alpha }(X)=\log n}$. In general, for all discrete random variables ${\displaystyle X}$, ${\displaystyle \mathrm {H} _{\alpha }(X)}$ is a non-increasing function in ${\displaystyle \alpha }$.
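As an illustration, the following minimal Python sketch computes the Rényi entropy from this definition, treating the limiting orders α = 1 and α → ∞ (discussed below) as special cases; the function name and interface are our own:

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha for a discrete distribution p, in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # outcomes with zero probability contribute nothing
    if alpha == 1:            # limiting case: Shannon entropy
        return -np.sum(p * np.log2(p))
    if np.isinf(alpha):       # limiting case: min-entropy
        return -np.log2(p.max())
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

# For a uniform distribution, every order gives H_alpha = log2(n).
uniform = [0.25, 0.25, 0.25, 0.25]
print(renyi_entropy(uniform, 0.5), renyi_entropy(uniform, 2))  # 2.0 2.0
```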

Applications often exploit the following relation between the Rényi entropy and the ${\displaystyle \alpha }$-norm of the vector of probabilities:

${\displaystyle \mathrm {H} _{\alpha }(X)={\frac {\alpha }{1-\alpha }}\log \left(\|P\|_{\alpha }\right)}$ .

Here, the discrete probability distribution ${\displaystyle P=(p_{1},\dots ,p_{n})}$ is interpreted as a vector in ${\displaystyle \mathbb {R} ^{n}}$ with ${\displaystyle p_{i}\geq 0}$ and ${\displaystyle \sum _{i=1}^{n}p_{i}=1}$.
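A quick numerical check of this identity, reusing the renyi_entropy sketch above (note that ${\displaystyle \|P\|_{\alpha }}$ is the ordinary vector α-norm):

```python
p = np.array([0.5, 0.25, 0.125, 0.125])
alpha = 3.0
lhs = renyi_entropy(p, alpha)
rhs = alpha / (1 - alpha) * np.log2(np.linalg.norm(p, ord=alpha))
print(np.isclose(lhs, rhs))  # True
```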

The Rényi entropy for any ${\displaystyle \alpha \geq 0}$ is Schur concave.

## Special cases

*Figure: Rényi entropy of a random variable with two possible outcomes, plotted against p1, where P = (p1, 1 − p1). Shown are H0, H1, H2 and H∞, in units of shannons.*

As α approaches zero, the Rényi entropy weighs all events with nonzero probability more and more equally, regardless of their probabilities. In the limit α → 0, the Rényi entropy is just the logarithm of the size of the support of X. The limit α → 1 is the Shannon entropy. As α approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.

### Hartley or max-entropy

Provided the probabilities are nonzero,[4] ${\displaystyle \mathrm {H} _{0}}$ is the logarithm of the cardinality of the alphabet (${\displaystyle {\mathcal {A}}}$) of ${\displaystyle X}$, sometimes called the Hartley entropy of ${\displaystyle X}$,

${\displaystyle \mathrm {H} _{0}(X)=\log n=\log |{\mathcal {A}}|\,}$

### Shannon entropy

The limiting value of ${\displaystyle \mathrm {H} _{\alpha }}$ as α → 1 is the Shannon entropy:[5]

${\displaystyle \mathrm {H} _{1}(X)\equiv \lim _{\alpha \to 1}\mathrm {H} _{\alpha }(X)=-\sum _{i=1}^{n}p_{i}\log p_{i}}$

### Collision entropy

Collision entropy, sometimes just called "Rényi entropy", refers to the case α = 2,

${\displaystyle \mathrm {H} _{2}(X)=-\log \sum _{i=1}^{n}p_{i}^{2}=-\log P(X=Y),}$

where X and Y are independent and identically distributed. The collision entropy is related to the index of coincidence.
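The identity ${\displaystyle \mathrm {H} _{2}(X)=-\log P(X=Y)}$ can be checked by simulating two independent draws; the following is a minimal sketch with made-up probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])

exact = -np.log2(np.sum(p ** 2))        # H_2 from the definition
x = rng.choice(3, size=200_000, p=p)    # draws of X
y = rng.choice(3, size=200_000, p=p)    # independent draws of Y
estimate = -np.log2(np.mean(x == y))    # -log2 of the empirical collision rate
print(exact, estimate)                  # both close to 1.396 bits
```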

### Min-entropy

In the limit as ${\displaystyle \alpha \rightarrow \infty }$, the Rényi entropy ${\displaystyle \mathrm {H} _{\alpha }}$ converges to the min-entropy ${\displaystyle \mathrm {H} _{\infty }}$:

${\displaystyle \mathrm {H} _{\infty }(X)\doteq \min _{i}(-\log p_{i})=-(\max _{i}\log p_{i})=-\log \max _{i}p_{i}\,.}$

Equivalently, the min-entropy ${\displaystyle \mathrm {H} _{\infty }(X)}$ is the largest real number b such that all events occur with probability at most ${\displaystyle 2^{-b}}$.
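In code, this equivalence says that ${\displaystyle 2^{-\mathrm {H} _{\infty }(X)}}$ equals the largest single-outcome probability; a minimal sketch:

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])
b = -np.log2(p.max())          # min-entropy in bits
print(b)                       # 1.0
print(np.all(p <= 2 ** -b))    # True: every event has probability <= 2^-b
```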

The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.

The min-entropy has important applications for randomness extractors in theoretical computer science: Extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.

## Inequalities between different values of α

The Rényi entropy ${\displaystyle \mathrm {H} _{\alpha }}$ is non-increasing in ${\displaystyle \alpha }$ for any given distribution of probabilities ${\displaystyle p_{i}}$, which can be proven by differentiation,[6] as

${\displaystyle -{\frac {d\mathrm {H} _{\alpha }}{d\alpha }}={\frac {1}{(1-\alpha )^{2}}}\sum _{i=1}^{n}z_{i}\log(z_{i}/p_{i}),}$

which is proportional to the Kullback–Leibler divergence of ${\displaystyle z}$ from ${\displaystyle p}$ (which is always non-negative), where ${\displaystyle z_{i}=p_{i}^{\alpha }/\sum _{j=1}^{n}p_{j}^{\alpha }}$.

In particular cases, inequalities can also be proven by Jensen's inequality:[7][8]

${\displaystyle \log n=\mathrm {H} _{0}\geq \mathrm {H} _{1}\geq \mathrm {H} _{2}\geq \mathrm {H} _{\infty }.}$
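The chain of inequalities is easy to observe numerically; this sketch reuses the renyi_entropy function from the definition section on a skewed distribution:

```python
p = [0.6, 0.2, 0.1, 0.1]
for alpha in [0, 1, 2, float("inf")]:
    print(alpha, renyi_entropy(p, alpha))
# 0 2.0, 1 ~1.571, 2 ~1.252, inf ~0.737: a non-increasing sequence
```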

For values of ${\displaystyle \alpha >1}$, inequalities in the other direction also hold. In particular, we have[9]

${\displaystyle \mathrm {H} _{2}\leq 2\mathrm {H} _{\infty }.}$

On the other hand, the Shannon entropy ${\displaystyle \mathrm {H} _{1}}$ can be arbitrarily high for a random variable ${\displaystyle X}$ that has a given min-entropy.[citation needed]
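Both claims can be made concrete: fixing the largest probability at 1/2 pins ${\displaystyle \mathrm {H} _{\infty }}$ at 1 bit, yet spreading the remaining mass over n outcomes lets the Shannon entropy grow without bound, with ${\displaystyle \mathrm {H} _{2}\leq 2\mathrm {H} _{\infty }}$ holding throughout. A sketch reusing renyi_entropy:

```python
import numpy as np

for n in [10, 1000, 100_000]:
    p = np.concatenate([[0.5], np.full(n, 0.5 / n)])  # max probability fixed at 1/2
    h1, h2, hmin = (renyi_entropy(p, a) for a in (1, 2, float("inf")))
    print(n, round(h1, 2), round(h2, 2), hmin)  # H_1 grows like 0.5*log2(n); H_inf stays 1
    assert h2 <= 2 * hmin
```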

## Rényi divergence

As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.[10]

The Rényi divergence of order α or alpha-divergence of a distribution P from a distribution Q is defined to be

${\displaystyle D_{\alpha }(P\|Q)={\frac {1}{\alpha -1}}\log {\Bigg (}\sum _{i=1}^{n}{\frac {p_{i}^{\alpha }}{q_{i}^{\alpha -1}}}{\Bigg )}\,}$

when 0 < α < ∞ and α ≠ 1. We can define the Rényi divergence for the special values α = 0, 1, ∞ by taking a limit, and in particular the limit α → 1 gives the Kullback–Leibler divergence.

Some special cases:

- ${\displaystyle D_{0}(P\|Q)=-\log Q(\{i:p_{i}>0\})}$ : minus the log probability under Q of the set where ${\displaystyle p_{i}>0}$;
- ${\displaystyle D_{1/2}(P\|Q)=-2\log \sum _{i=1}^{n}{\sqrt {p_{i}q_{i}}}}$ : minus twice the logarithm of the Bhattacharyya coefficient (Nielsen & Boltz (2010));
- ${\displaystyle D_{1}(P\|Q)=\sum _{i=1}^{n}p_{i}\log {\frac {p_{i}}{q_{i}}}}$ : the Kullback–Leibler divergence;
- ${\displaystyle D_{2}(P\|Q)=\log {\Big \langle }{\frac {p_{i}}{q_{i}}}{\Big \rangle }}$ : the log of the expected ratio of the probabilities;
- ${\displaystyle D_{\infty }(P\|Q)=\log \sup _{i}{\frac {p_{i}}{q_{i}}}}$ : the log of the maximum ratio of the probabilities.
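A minimal sketch of the divergence of order α, together with a numerical check that it approaches the Kullback–Leibler divergence as α → 1 (the function name and test distributions are our own):

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(P || Q) in bits, for alpha > 0, alpha != 1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.log2(np.sum(p ** alpha / q ** (alpha - 1))) / (alpha - 1)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
kl = np.sum(p * np.log2(p / q))            # D_1, the Kullback-Leibler divergence
print(renyi_divergence(p, q, 1.0001), kl)  # nearly equal
```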

The Rényi divergence is indeed a divergence, meaning simply that ${\displaystyle D_{\alpha }(P\|Q)}$ is greater than or equal to zero, and zero only when P = Q. For any fixed distributions P and Q, the Rényi divergence is nondecreasing as a function of its order α, and it is continuous on the set of α for which it is finite.[10]

## Financial interpretation

A pair of probability distributions can be viewed as a game of chance in which one of the distributions defines official odds and the other contains the actual probabilities. Knowledge of the actual probabilities allows a player to profit from the game. The expected profit rate is connected to the Rényi divergence as follows[11]

${\displaystyle {\rm {ExpectedRate}}={\frac {1}{R}}\,D_{1}(b\|m)+{\frac {R-1}{R}}\,D_{1/R}(b\|m)\,,}$

where ${\displaystyle m}$ is the distribution defining the official odds (i.e. the "market") for the game, ${\displaystyle b}$ is the investor-believed distribution and ${\displaystyle R}$ is the investor's risk aversion (the Arrow–Pratt relative risk aversion).

If the true distribution is ${\displaystyle p}$ (not necessarily coinciding with the investor's belief ${\displaystyle b}$), the long-term realized rate converges to the true expectation, which has a similar mathematical structure:[12]

${\displaystyle {\rm {RealizedRate}}={\frac {1}{R}}\,{\Big (}D_{1}(p\|m)-D_{1}(p\|b){\Big )}+{\frac {R-1}{R}}\,D_{1/R}(b\|m)\,.}$
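As a worked illustration, the following sketch evaluates the expected-rate formula for a hypothetical market and belief (all numbers are made up; renyi_divergence is the sketch from the previous section, so the rate is expressed in bits rather than nats):

```python
import numpy as np

m = np.array([0.5, 0.5])   # market-implied distribution (official odds)
b = np.array([0.6, 0.4])   # investor's believed distribution
R = 2.0                    # Arrow-Pratt relative risk aversion

d1 = np.sum(b * np.log2(b / m))          # D_1(b || m), the KL divergence
d_inv_r = renyi_divergence(b, m, 1 / R)  # D_{1/R}(b || m)
expected_rate = d1 / R + (R - 1) / R * d_inv_r
print(expected_rate)
```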

## Why α = 1 is special

The value α = 1, which gives the Shannon entropy and the Kullback–Leibler divergence, is special because it is only at α = 1 that the chain rule of conditional probability holds exactly:

${\displaystyle \mathrm {H} (A,X)=\mathrm {H} (A)+\mathbb {E} _{a\sim A}{\big [}\mathrm {H} (X|A=a){\big ]}}$

for the absolute entropies, and

${\displaystyle D_{\mathrm {KL} }(p(x|a)p(a)\|m(x,a))=D_{\mathrm {KL} }(p(a)\|m(a))+\mathbb {E} _{p(a)}\{D_{\mathrm {KL} }(p(x|a)\|m(x|a))\},}$

for the relative entropies.

The latter in particular means that if we seek a distribution p(x, a) which minimizes the divergence from some underlying prior measure m(x, a), and we acquire new information which only affects the distribution of a, then the conditional distribution p(x|a) remains m(x|a), unchanged.

The other Rényi entropies and divergences satisfy the criteria of being positive and continuous, being invariant under one-to-one coordinate transformations, and combining additively when A and X are independent, so that if p(A, X) = p(A)p(X), then

${\displaystyle \mathrm {H} _{\alpha }(A,X)=\mathrm {H} _{\alpha }(A)+\mathrm {H} _{\alpha }(X)\;}$

and

${\displaystyle D_{\alpha }(P(A)P(X)\|Q(A)Q(X))=D_{\alpha }(P(A)\|Q(A))+D_{\alpha }(P(X)\|Q(X)).}$

The stronger properties of the α = 1 quantities, which allow the definition of conditional information and mutual information from communication theory, may be very important in other applications, or entirely unimportant, depending on those applications' requirements.
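The special status of α = 1 is easy to see numerically: the sketch below builds a small joint distribution (our own numbers) and checks the chain rule, which holds for the Shannon entropy but fails for, say, α = 2 (renyi_entropy is the function from the definition section):

```python
import numpy as np

joint = np.array([[0.3, 0.1],   # p(a, x): rows index a, columns index x
                  [0.2, 0.4]])
p_a = joint.sum(axis=1)         # marginal distribution of A
cond = joint / p_a[:, None]     # conditional distributions p(x | a)

for alpha in (1.0, 2.0):
    lhs = renyi_entropy(joint.ravel(), alpha)
    rhs = renyi_entropy(p_a, alpha) + sum(
        p_a[i] * renyi_entropy(cond[i], alpha) for i in range(2))
    print(alpha, np.isclose(lhs, rhs))  # True for alpha = 1, False for alpha = 2
```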

## Exponential families

The Rényi entropies and divergences for an exponential family ${\displaystyle p_{F}(x;\theta )=\exp {\big (}\langle \theta ,t(x)\rangle -F(\theta )+k(x){\big )}}$, with log-normalizer ${\displaystyle F}$ and carrier measure term ${\displaystyle k}$, admit simple expressions:[13]

${\displaystyle \mathrm {H} _{\alpha }(p_{F}(x;\theta ))={\frac {1}{1-\alpha }}\left(F(\alpha \theta )-\alpha F(\theta )+\log E_{p}[e^{(\alpha -1)k(x)}]\right)}$

and

${\displaystyle D_{\alpha }(p:q)={\frac {J_{F,\alpha }(\theta :\theta ')}{1-\alpha }}}$

where

${\displaystyle J_{F,\alpha }(\theta :\theta ')=\alpha F(\theta )+(1-\alpha )F(\theta ')-F(\alpha \theta +(1-\alpha )\theta ')}$

is a Jensen difference divergence.
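As a concrete check, take two univariate Gaussians with the same known variance σ², an exponential family with natural parameter θ = μ/σ² and log-normalizer F(θ) = σ²θ²/2. The formula above then reproduces the known closed form ${\displaystyle D_{\alpha }=\alpha (\mu -\mu ')^{2}/(2\sigma ^{2})}$; the following sketch verifies this numerically (this standard example is our own choice, not taken from the cited paper):

```python
import numpy as np

def F(theta, sigma2):
    """Log-normalizer of a Gaussian with known variance sigma2."""
    return 0.5 * sigma2 * theta ** 2

mu1, mu2, sigma2, alpha = 1.0, 3.0, 2.0, 0.7
t1, t2 = mu1 / sigma2, mu2 / sigma2  # natural parameters

J = (alpha * F(t1, sigma2) + (1 - alpha) * F(t2, sigma2)
     - F(alpha * t1 + (1 - alpha) * t2, sigma2))
d_alpha = J / (1 - alpha)            # Renyi divergence, in nats

closed_form = alpha * (mu1 - mu2) ** 2 / (2 * sigma2)
print(np.isclose(d_alpha, closed_form))  # True
```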

## Physical meaning

The Rényi entropy in quantum physics is not considered to be an observable, due to its nonlinear dependence on the density matrix. (This nonlinear dependence applies even in the special case of the Shannon entropy.) It can, however, be given an operational meaning through the two-time measurements (also known as full counting statistics) of energy transfers.

The limit of the Rényi entropy as ${\displaystyle \alpha \to 1}$ is the von Neumann entropy.

## Notes

1. ^ a b Rényi (1961)
2. ^ Franchini, Its & Korepin (2008)
3. ^ Its & Korepin (2010)
4. ^ RFC 4086, page 6
5. ^ Bromiley, Thacker & Bouhova-Thacker (2004)
6. ^ Beck & Schlögl (1993)
7. ^ ${\displaystyle \mathrm {H} _{1}\geq \mathrm {H} _{2}}$ holds because ${\displaystyle \sum \limits _{i=1}^{M}{p_{i}\log p_{i}}\leq \log \sum \limits _{i=1}^{M}{p_{i}^{2}}}$.
8. ^ ${\displaystyle \mathrm {H} _{\infty }\leq \mathrm {H} _{2}}$ holds because ${\displaystyle \log \sum \limits _{i=1}^{n}{p_{i}^{2}}\leq \log \sup _{i}p_{i}\left({\sum \limits _{i=1}^{n}{p_{i}}}\right)=\log \sup _{i}p_{i}}$.
9. ^ ${\displaystyle \mathrm {H} _{2}\leq 2\mathrm {H} _{\infty }}$ holds because ${\displaystyle \log \sum \limits _{i=1}^{n}{p_{i}^{2}}\geq \log \sup _{i}p_{i}^{2}=2\log \sup _{i}p_{i}}$.
10. ^ a b Van Erven, Tim; Harremoës, Peter (2014). "Rényi Divergence and Kullback–Leibler Divergence". IEEE Transactions on Information Theory. 60 (7): 3797–3820. arXiv:1206.2459. doi:10.1109/TIT.2014.2320500. S2CID 17522805.
11. ^ Soklakov (2018)
12. ^ Soklakov (2018)
13. ^ Nielsen & Nock (2011)