# Information dimension

In information theory, information dimension is an information measure for random vectors in Euclidean space, based on the normalized entropy of finely quantized versions of the random vectors. This concept was first introduced by Alfréd Rényi in 1959.[1]

Roughly speaking, it is a measure of the fractal dimension of a probability distribution. It characterizes the growth rate of the Shannon entropy as the space is discretized on successively finer grids.

In 2010, Wu and Verdú gave an operational characterization of Rényi information dimension as the fundamental limit of almost lossless data compression for analog sources under various regularity constraints of the encoder/decoder.

## Definition and Properties

The entropy of a discrete random variable ${\displaystyle Z}$ is

${\displaystyle \mathbb {H} _{0}(Z)=\sum _{z\in supp(P_{Z})}P_{Z}(z)\log _{2}{\frac {1}{P_{Z}(z)}}}$

where ${\displaystyle P_{Z}(z)}$ is the probability that ${\displaystyle Z=z}$, and ${\displaystyle supp(P_{Z})}$ denotes the support of ${\displaystyle P_{Z}}$, i.e. the set ${\displaystyle \{z\mid z\in {\mathcal {Z}},P_{Z}(z)>0\}}$.
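As a quick illustration, here is a minimal Python sketch of this definition with an assumed toy distribution (not taken from the article):

```python
# Toy distribution (assumed example): P_Z(a) = 1/2, P_Z(b) = P_Z(c) = 1/4
# gives H_0(Z) = 1.5 bits.
import math

P_Z = {"a": 0.5, "b": 0.25, "c": 0.25}
H0 = sum(p * math.log2(1 / p) for p in P_Z.values() if p > 0)
print(H0)  # 1.5
```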

Let ${\displaystyle X}$ be an arbitrary real-valued random variable. Given a positive integer ${\displaystyle m}$, we create a new discrete random variable

${\displaystyle \langle X\rangle _{m}={\frac {\lfloor mX\rfloor }{m}}}$

where ${\displaystyle \lfloor \cdot \rfloor }$ is the floor operator, which maps a real number to the greatest integer not exceeding it. Then

${\displaystyle {\underline {d}}(X)=\liminf _{m\rightarrow \infty }{\frac {\mathbb {H} _{0}(\langle X\rangle _{m})}{\log _{2}m}}}$

and

${\displaystyle {\bar {d}}(X)=\limsup _{m\rightarrow \infty }{\frac {\mathbb {H} _{0}(\langle X\rangle _{m})}{\log _{2}m}}}$

are called the lower and upper information dimensions of ${\displaystyle X}$, respectively. When ${\displaystyle {\underline {d}}(X)={\bar {d}}(X)}$, this common value is called the information dimension of ${\displaystyle X}$,

${\displaystyle d(X)=\lim _{m\rightarrow \infty }{\frac {\mathbb {H} _{0}(\langle X\rangle _{m})}{\log _{2}m}}}$

Some important properties of information dimension ${\displaystyle d(X)}$:

• If the mild condition ${\displaystyle \mathbb {H} (\lfloor X\rfloor )<\infty }$ is fulfilled, we have ${\displaystyle 0\leq {\underline {d}}(X)\leq {\bar {d}}(X)\leq 1}$.
• For an ${\displaystyle n}$-dimensional random vector ${\displaystyle {\vec {X}}}$, the first property can be generalized to ${\displaystyle 0\leq {\underline {d}}({\vec {X}})\leq {\bar {d}}({\vec {X}})\leq n}$.
• It is sufficient to calculate the upper and lower information dimensions when restricting to the exponential subsequence ${\displaystyle m=2^{l}}$, as in the numerical sketch below.
• ${\displaystyle {\underline {d}}(X)}$ and ${\displaystyle {\bar {d}}(X)}$ remain unchanged if the rounding or ceiling function is used instead of the floor in the quantization.
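As a numerical illustration of the definition (using the property that the subsequence ${\displaystyle m=2^{l}}$ suffices), the following sketch computes ${\displaystyle \mathbb {H} _{0}(\langle X\rangle _{m})/\log _{2}m}$ exactly from bin probabilities for an assumed example distribution: a mixture of a point mass at 0 (weight 1/2) and a Uniform(0,1) component (weight 1/2), for which the mixture result given later predicts ${\displaystyle d(X)=0.5}$.

```python
# Quantize X (1/2 point mass at 0 + 1/2 Uniform(0,1), an assumed example)
# with m = 2^l bins and compute H_0(<X>_m) / log2(m) exactly from bin masses.
import numpy as np

def H0(p):
    """Shannon entropy in bits of a probability vector (zero entries ignored)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

for l in (4, 8, 12, 16, 20):
    m = 2 ** l
    p = np.full(m, 0.5 / m)       # the uniform part spreads mass 1/2 over m bins
    p[0] += 0.5                   # the atom at 0 lands in the first bin
    print(l, H0(p) / np.log2(m))  # ratio approaches d(X) = 0.5 as l grows
```

The ratio approaches 0.5 only slowly, because for this assumed mixture ${\displaystyle \mathbb {H} _{0}(\langle X\rangle _{m})\approx 0.5\log _{2}m+1}$; the constant offset is the ${\displaystyle d}$-dimensional entropy discussed in the next section.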

## ${\displaystyle d}$-Dimensional Entropy

If the information dimension ${\displaystyle d}$ exists, one can define the ${\displaystyle d}$-dimensional entropy of this distribution by

${\displaystyle \mathbb {H} _{d(X)}(X)=\lim _{m\rightarrow +\infty }\left(\mathbb {H} _{0}(\langle X\rangle _{m})-d(X)\log _{2}m\right)}$

provided the limit exists. If ${\displaystyle d=0}$, the zero-dimensional entropy equals the standard Shannon entropy ${\displaystyle \mathbb {H} _{0}(X)}$. For integer dimension ${\displaystyle d=n\geq 1}$, the ${\displaystyle n}$-dimensional entropy is the ${\displaystyle n}$-fold integral defining the respective differential entropy.
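For example, if ${\displaystyle X}$ is uniformly distributed on ${\displaystyle [0,1]}$, then ${\displaystyle \langle X\rangle _{m}}$ is uniform on its ${\displaystyle m}$ support points, so ${\displaystyle \mathbb {H} _{0}(\langle X\rangle _{m})=\log _{2}m}$ exactly; hence ${\displaystyle d(X)=1}$ and ${\displaystyle \mathbb {H} _{1}(X)=0}$, which equals the differential entropy of the uniform distribution on ${\displaystyle [0,1]}$.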

## Discrete-Continuous Mixture Distributions

According to the Lebesgue decomposition theorem,[2] a probability distribution can be uniquely represented as the mixture

${\displaystyle v=pP_{Xd}+qP_{Xc}+rP_{Xs}}$

where ${\displaystyle p+q+r=1}$ and ${\displaystyle p,q,r\geq 0}$; ${\displaystyle P_{Xd}}$ is a purely atomic probability measure (discrete part), ${\displaystyle P_{Xc}}$ is an absolutely continuous probability measure (continuous part), and ${\displaystyle P_{Xs}}$ is a probability measure singular with respect to Lebesgue measure but with no atoms (singular part). Let ${\displaystyle X}$ be a random variable such that ${\displaystyle \mathbb {H} (\lfloor X\rfloor )<\infty }$, and assume its distribution can be represented as

${\displaystyle v=(1-\rho )P_{Xd}+\rho P_{Xc}}$

where ${\displaystyle P_{Xd}}$ is a discrete probability measure, ${\displaystyle P_{Xc}}$ is an absolutely continuous probability measure, and ${\displaystyle 0\leq \rho \leq 1}$. Then

${\displaystyle d(X)=\rho }$

Moreover, given the discrete entropy ${\displaystyle \mathbb {H} _{0}(P_{Xd})}$ and the differential entropy ${\displaystyle h(P_{Xc})}$, the ${\displaystyle d}$-dimensional entropy is simply given by

${\displaystyle \mathbb {H} _{\rho }(X)=(1-\rho )\mathbb {H} _{0}(P_{Xd})+\rho h(P_{Xc})+\mathbb {H} _{0}(\rho )}$

where ${\displaystyle \mathbb {H} _{0}(\rho )}$ is the Shannon entropy of a binary random variable ${\displaystyle Z}$ with ${\displaystyle P_{Z}(1)=\rho }$ and ${\displaystyle P_{Z}(0)=1-\rho }$, given by

${\displaystyle \mathbb {H} _{0}(\rho )=\rho \log _{2}{\frac {1}{\rho }}+(1-\rho )\log _{2}{\frac {1}{1-\rho }}}$

### Example

Consider a signal which has a zero-mean Gaussian probability distribution with variance ${\displaystyle \sigma ^{2}}$.

We pass the signal through a half-wave rectifier, which converts all negative values to 0 and keeps all other values unchanged. The half-wave rectifier can be characterized by the function

${\displaystyle f(x)={\begin{cases}x,&{\text{if }}x\geq 0\\0,&x<0\end{cases}}}$

Then, at the output of the rectifier, the signal has a rectified Gaussian distribution: an atom of mass 0.5 at ${\displaystyle x=0}$ and a Gaussian PDF for all ${\displaystyle x>0}$.

With this mixture distribution, we can apply the formula above to obtain the information dimension ${\displaystyle d}$ of the distribution and calculate the ${\displaystyle d}$-dimensional entropy.

${\displaystyle d(X)=\rho =0.5}$

The normalized right part of the zero-mean Gaussian distribution has entropy ${\displaystyle h(P_{Xc})={\frac {1}{2}}\log _{2}(2\pi e\sigma ^{2})-1}$, hence

${\displaystyle {\begin{aligned}\mathbb {H} _{0.5}(X)&=(1-0.5)(1\log _{2}1)+0.5h(P_{Xc})+\mathbb {H} _{0}(0.5)\\&=0+{\frac {1}{2}}\left({\frac {1}{2}}\log _{2}(2\pi e\sigma ^{2})-1\right)+1\\&={\frac {1}{4}}\log _{2}(2\pi e\sigma ^{2})+{\frac {1}{2}}\,{\text{ bit(s)}}\end{aligned}}}$
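A numerical check of this example is sketched below, assuming ${\displaystyle \sigma =1}$ and using the normal CDF to obtain exact bin probabilities; it compares ${\displaystyle \mathbb {H} _{0}(\langle X\rangle _{m})-0.5\log _{2}m}$ with the closed form ${\displaystyle {\tfrac {1}{4}}\log _{2}(2\pi e)+{\tfrac {1}{2}}\approx 1.52}$ bits.

```python
# Rectified standard Gaussian (sigma = 1 assumed): atom of mass 1/2 at 0 plus
# the positive half of the N(0,1) density.  H_0(<X>_m) is computed from exact
# bin probabilities, then compared with the closed-form rho-dimensional entropy.
import numpy as np
from scipy.stats import norm

def H0(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

target = 0.25 * np.log2(2 * np.pi * np.e) + 0.5    # ~1.524 bits
for l in (4, 8, 12):
    m = 2 ** l
    edges = np.arange(0, 10 * m + 1) / m           # bins [k/m, (k+1)/m) up to x = 10
    p = np.diff(norm.cdf(edges))                   # continuous (Gaussian) mass per bin
    p[0] += 0.5                                    # atom at 0 from the clipped negatives
    print(l, H0(p) - 0.5 * np.log2(m), target)
```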

## Connection to Differential Entropy

It can be shown[3] that information dimension and differential entropy are tightly connected.

Let ${\displaystyle X}$ be a positive random variable with density ${\displaystyle f(x)}$.

Suppose we divide the range of ${\displaystyle X}$ into bins of length ${\displaystyle \Delta }$. By the mean value theorem, there exists a value ${\displaystyle x_{i}}$ within each bin such that

${\displaystyle f(x_{i})\Delta =\int _{i\Delta }^{(i+1)\Delta }f(x)\;\mathrm {d} x}$

Consider the discretized random variable ${\displaystyle X^{\Delta }=x_{i}}$ if ${\displaystyle i\Delta \leq X<(i+1)\Delta }$.

The probability of each support point ${\displaystyle X^{\Delta }=x_{i}}$ is

${\displaystyle P_{X^{\Delta }}(x_{i})=\int _{i\Delta }^{(i+1)\Delta }f(x)\;\mathrm {d} x=f(x_{i})\Delta }$

The entropy of this variable is

${\displaystyle {\begin{aligned}\mathbb {H} _{0}(X^{\Delta })&=-\sum _{x_{i}\in supp(P_{X^{\Delta }})}P_{X^{\Delta }}(x_{i})\log _{2}P_{X^{\Delta }}(x_{i})\\&=-\sum _{x_{i}\in supp(P_{X^{\Delta }})}f(x_{i})\Delta \log _{2}(f(x_{i})\Delta )\\&=-\sum _{x_{i}\in supp(P_{X^{\Delta }})}\Delta f(x_{i})\log _{2}f(x_{i})-\sum _{x_{i}\in supp(P_{X^{\Delta }})}f(x_{i})\Delta \log _{2}\Delta \\&=-\sum _{x_{i}\in supp(P_{X^{\Delta }})}\Delta f(x_{i})\log _{2}f(x_{i})-\log _{2}\Delta \\\end{aligned}}}$

The last equality uses ${\displaystyle \sum _{x_{i}}f(x_{i})\Delta =1}$. If we set ${\displaystyle \Delta =1/m}$ and relabel the support points as ${\displaystyle x_{i}=i/m}$, then this is exactly the quantization used in the definition of information dimension. Since relabeling the values of a discrete random variable does not change its entropy, we have

${\displaystyle \mathbb {H} _{0}(X^{1/m})=\mathbb {H} _{0}(\langle X\rangle _{m}).}$

This yields

${\displaystyle \mathbb {H} _{0}(\langle X\rangle _{m})=-\sum {\frac {1}{m}}f(x_{i})\log _{2}f(x_{i})+\log _{2}m}$

and when ${\displaystyle m}$ is sufficiently large,

${\displaystyle -\sum \Delta f(x_{i})\log _{2}f(x_{i})\approx \int f(x)\log _{2}{\frac {1}{f(x)}}\mathrm {d} x}$

which is the differential entropy ${\displaystyle h(X)}$ of the continuous random variable ${\displaystyle X}$. In particular, if ${\displaystyle f(x)}$ is Riemann integrable, then

${\displaystyle h(X)=\lim _{m\rightarrow \infty }\left(\mathbb {H} _{0}(\langle X\rangle _{m})-\log _{2}m\right).}$

Comparing this with the ${\displaystyle d}$-dimensional entropy shows that the differential entropy is exactly the one-dimensional entropy

${\displaystyle h(X)=\mathbb {H} _{1}(X).}$

In fact, this can be generalized to higher dimensions. Rényi showed that, if ${\displaystyle {\vec {X}}}$ is a random vector in an ${\displaystyle n}$-dimensional Euclidean space ${\displaystyle \Re ^{n}}$ with an absolutely continuous distribution having probability density function ${\displaystyle f_{\vec {X}}({\vec {x}})}$ and finite entropy of the integer part, ${\displaystyle \mathbb {H} _{0}(\lfloor {\vec {X}}\rfloor )<\infty }$, then ${\displaystyle d({\vec {X}})=n}$

and

${\displaystyle \mathbb {H} _{n}({\vec {X}})=\int \cdots \int f_{\vec {X}}({\vec {x}})\log _{2}{\frac {1}{f_{\vec {X}}({\vec {x}})}}\mathrm {d} {\vec {x}},}$

provided the integral exists.
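The following sketch (an assumed check, not from the source) illustrates the one-dimensional relation ${\displaystyle h(X)=\mathbb {H} _{1}(X)}$ for ${\displaystyle X\sim {\mathcal {N}}(0,1)}$, whose differential entropy is ${\displaystyle {\tfrac {1}{2}}\log _{2}(2\pi e)\approx 2.05}$ bits; the quantized entropies are computed from the Gaussian CDF rather than by sampling.

```python
# Check h(X) = lim_m (H_0(<X>_m) - log2 m) for X ~ N(0,1); the true value is
# 0.5 * log2(2*pi*e) ~= 2.047 bits.  Bin probabilities come from the normal CDF.
import numpy as np
from scipy.stats import norm

def H0(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

h_true = 0.5 * np.log2(2 * np.pi * np.e)
for m in (16, 256, 4096):
    edges = np.arange(-10 * m, 10 * m + 1) / m   # bins [k/m, (k+1)/m) covering [-10, 10]
    p = np.diff(norm.cdf(edges))                 # exact mass of N(0,1) in each bin
    print(m, H0(p) - np.log2(m), h_true)
```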

## Lossless data compression

The information dimension of a distribution gives the fundamental limit on the rate at which a variable drawn from that distribution can be compressed. In this context of lossless data compression, a block of real numbers, each of infinite precision, is to be represented by a smaller number of real numbers, also of infinite precision.

The main objective of lossless data compression is to find efficient representations ${\displaystyle y^{k}\in {\mathcal {Y}}^{k}}$ for source realizations ${\displaystyle x^{n}\in {\mathcal {X}}^{n}}$. An ${\displaystyle (n,k)}$-code for ${\displaystyle \{X_{i}:i\in {\mathcal {N}}\}}$ is a pair of mappings:

• encoder: ${\displaystyle f_{n}:{\mathcal {X}}^{n}\rightarrow {\mathcal {Y}}^{k}}$ which converts information from a source into symbols for communication or storage;
• decoder: ${\displaystyle g_{n}:{\mathcal {Y}}^{k}\rightarrow {\mathcal {X}}^{n}}$ is the reverse process, converting code symbols back into a form that the recipient understands.

The block error probability is ${\displaystyle {\mathcal {P}}\{g_{n}(f_{n}(X^{n}))\neq X^{n}\}}$.

Define ${\displaystyle r(\epsilon )}$ to be the infimum of ${\displaystyle r\geq 0}$ such that there exists a sequence of ${\displaystyle (n,\lfloor rn\rfloor )-}$codes such that ${\displaystyle {\mathcal {P}}\{g_{n}(f_{n}(X^{n}))\neq X^{n}\}\leq \epsilon }$ for all sufficiently large ${\displaystyle n}$.

Thus ${\displaystyle r(\epsilon )}$ gives the smallest asymptotic ratio between the code length and the source length that can be achieved with block error probability at most ${\displaystyle \epsilon }$. The fundamental limits of lossless source coding are as follows.[4]

Consider an encoder function ${\displaystyle f(x):\Re ^{n}\rightarrow \Re ^{\lfloor Rn\rfloor }}$ and a decoder function ${\displaystyle g(x):\Re ^{\lfloor Rn\rfloor }\rightarrow \Re ^{n}}$. If we impose no regularity on ${\displaystyle f(x)}$ and ${\displaystyle g(x)}$ then, due to the rich structure of ${\displaystyle \Re }$, the minimum ${\displaystyle \epsilon }$-achievable rate is ${\displaystyle R_{0}(\epsilon )=0}$ for all ${\displaystyle 0<\epsilon \leq 1}$. This means that one can build an encoder-decoder pair with arbitrarily large compression ratio.

To obtain nontrivial and meaningful conclusions, let ${\displaystyle R^{*}(\epsilon )}$ be the minimum ${\displaystyle \epsilon }$-achievable rate with a linear encoder and a Borel-measurable decoder. If the random variable ${\displaystyle X}$ has a distribution which is a mixture of a discrete and a continuous part, then ${\displaystyle R^{*}(\epsilon )=d(X)}$ for all ${\displaystyle 0<\epsilon \leq 1}$. If instead the decoder is restricted to be a Lipschitz continuous function and ${\displaystyle {\bar {d}}(X)<\infty }$ holds, then the minimum ${\displaystyle \epsilon }$-achievable rate satisfies ${\displaystyle R(\epsilon )\geq {\bar {d}}(X)}$ for all ${\displaystyle 0<\epsilon \leq 1}$.
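As an illustrative sketch only (not the construction used in the cited results), the following assumed example compresses a "spike-and-slab" source with ${\displaystyle d(X)=\rho =0.1}$ using a random linear encoder at rate ${\displaystyle R=0.4}$ and recovers it with an ${\displaystyle \ell _{1}}$-minimization (basis pursuit) decoder; the block length, rate, and solver are arbitrary choices made for the demonstration.

```python
# Toy illustration: a sparse spike-and-slab source with d(X) = rho, a random
# *linear* encoder of rate R > rho, and an L1-minimization (basis pursuit)
# decoder solved as a linear program.  With high probability the recovery is
# exact when R is comfortably above rho; all parameters below are assumptions.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, rho, R = 80, 0.1, 0.4            # block length, information dimension, rate
k = int(R * n)                      # number of real-valued code symbols

# Source: X_i = 0 with probability 1 - rho, standard Gaussian with probability rho.
x = rng.standard_normal(n) * (rng.random(n) < rho)

# Linear encoder: y = A x with a k x n Gaussian matrix.
A = rng.standard_normal((k, n))
y = A @ x

# Decoder: min ||z||_1 subject to A z = y, written as an LP with z = u - v, u, v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print("exact recovery:", np.allclose(x_hat, x, atol=1e-6))
```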