Compositional data

In statistics, compositional data are quantitative descriptions of the parts of some whole, conveying relative information. Mathematically, compositional data are represented by points on a simplex. Measurements involving probabilities, proportions, percentages, or parts per million (ppm) can all be thought of as compositional data.

Ternary plot

Compositional data in three variables can be plotted via ternary plots. A barycentric plot graphically depicts the ratios of the three variables as positions in an equilateral triangle.

Simplicial sample space

John Aitchison (1982) defined compositional data as proportions of some whole. In particular, a compositional data point (or composition for short) can be represented by a positive real vector. The sample space of compositional data is a simplex:

${\mathcal {S}}^{D}=\left\{\mathbf {x} =[x_{1},x_{2},\dots ,x_{D}]\in \mathbb {R} ^{D}\,\left|\,x_{i}>0,\ i=1,2,\dots ,D;\ \sum _{i=1}^{D}x_{i}=\kappa \right.\right\}.$

Figure: an illustration of the Aitchison simplex with 3 parts, where $x_{1},x_{2},x_{3}$ represent values of different proportions. A, B, C, D and E are 5 different compositions within the simplex; A, B and C are all equivalent, and D and E are equivalent.

Because the only information is carried by the ratios between components, the information in a composition is preserved under multiplication by any positive constant. Therefore, the sample space of compositional data can always be assumed to be a standard simplex, i.e. $\kappa =1$. In this context, normalization to the standard simplex is called closure and is denoted by ${\mathcal {C}}[\,\cdot \,]$:

${\mathcal {C}}[x_{1},x_{2},\dots ,x_{D}]=\left[{\frac {x_{1}}{\sum _{i=1}^{D}x_{i}}},{\frac {x_{2}}{\sum _{i=1}^{D}x_{i}}},\dots ,{\frac {x_{D}}{\sum _{i=1}^{D}x_{i}}}\right],$ where $D$ is the number of parts (components) and $[\cdot ]$ denotes a row vector.
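As a minimal illustrative sketch (the function name `closure` and its signature are our own choices, not from the original text), the closure operation can be implemented in a few lines of NumPy:

```python
import numpy as np

def closure(x, kappa=1.0):
    """Normalize a vector of strictly positive parts so it sums to kappa
    (kappa=1 gives the standard simplex)."""
    x = np.asarray(x, dtype=float)
    if np.any(x <= 0):
        raise ValueError("all parts must be strictly positive")
    return kappa * x / x.sum()

# Ratios are preserved: closing any positive multiple of a vector
# gives the same composition.
c = closure([1.0, 3.0, 6.0])  # -> [0.1, 0.3, 0.6]
```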

Aitchison geometry

The simplex can be given the structure of a real vector space in several different ways. The following vector space structure is called Aitchison geometry or the Aitchison simplex and has the following operations:

Perturbation
$x\oplus y=\left[{\frac {x_{1}y_{1}}{\sum _{i=1}^{D}x_{i}y_{i}}},{\frac {x_{2}y_{2}}{\sum _{i=1}^{D}x_{i}y_{i}}},\dots ,{\frac {x_{D}y_{D}}{\sum _{i=1}^{D}x_{i}y_{i}}}\right]=C[x_{1}y_{1},\ldots ,x_{D}y_{D}]\qquad \forall x,y\in S^{D}$

Powering
$\alpha \odot x=\left[{\frac {x_{1}^{\alpha }}{\sum _{i=1}^{D}x_{i}^{\alpha }}},{\frac {x_{2}^{\alpha }}{\sum _{i=1}^{D}x_{i}^{\alpha }}},\ldots ,{\frac {x_{D}^{\alpha }}{\sum _{i=1}^{D}x_{i}^{\alpha }}}\right]=C[x_{1}^{\alpha },\ldots ,x_{D}^{\alpha }]\qquad \forall x\in S^{D},\quad \alpha \in \mathbb {R}$

Inner product
$\langle x,y\rangle ={\frac {1}{2D}}\sum _{i=1}^{D}\sum _{j=1}^{D}\log {\frac {x_{i}}{x_{j}}}\log {\frac {y_{i}}{y_{j}}}\qquad \forall x,y\in S^{D}$

Under these operations, the Aitchison simplex forms a $(D-1)$-dimensional Euclidean vector space.
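The three operations above can be sketched directly in NumPy. This is an illustrative implementation under our own naming (`perturb`, `power`, `aitchison_inner`); perturbation plays the role of vector addition and powering that of scalar multiplication:

```python
import numpy as np

def closure(x):
    """Normalize a positive vector to the standard simplex."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation: componentwise product, reclosed (vector addition)."""
    return closure(np.asarray(x) * np.asarray(y))

def power(alpha, x):
    """Powering: componentwise power, reclosed (scalar multiplication)."""
    return closure(np.asarray(x, dtype=float) ** alpha)

def aitchison_inner(x, y):
    """Aitchison inner product via the double log-ratio sum."""
    lx, ly = np.log(x), np.log(y)
    D = len(lx)
    return sum((lx[i] - lx[j]) * (ly[i] - ly[j])
               for i in range(D) for j in range(D)) / (2 * D)
```

A quick check of the vector-space structure: the uniform composition acts as the neutral element of perturbation, and powering by 2 agrees with perturbing a composition with itself.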

Orthonormal bases

Since the Aitchison simplex forms a finite-dimensional Hilbert space, it is possible to construct orthonormal bases in the simplex. Every composition $x$ can be decomposed as follows:

$x=\bigoplus _{i=1}^{D-1}x_{i}^{*}\odot e_{i},$ where $e_{1},\ldots ,e_{D-1}$ forms an orthonormal basis in the simplex and $x_{i}^{*}=\langle x,e_{i}\rangle$ are the coordinates of $x$ with respect to that basis.

Linear transformations

There are three well-characterized isomorphisms that transform from the Aitchison simplex to real space. All of these transforms satisfy linearity, mapping perturbation to addition and powering to scalar multiplication. They are given below.

Additive logratio transform

The additive log ratio (alr) transform is an isomorphism where $\operatorname {alr} :S^{D}\rightarrow \mathbb {R} ^{D-1}$. This is given by

$\operatorname {alr} (x)=\left[\log {\frac {x_{1}}{x_{D}}}\cdots \log {\frac {x_{D-1}}{x_{D}}}\right]$ The choice of denominator component is arbitrary, and could be any specified component. This transform is commonly used in chemistry with measurements such as pH. In addition, this is the transform most commonly used for multinomial logistic regression. The alr transform is not an isometry, meaning that distances on transformed values will not be equivalent to distances on the original compositions in the simplex.
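As an illustrative sketch (function names `alr` and `alr_inv` are our own), the transform and its inverse can be written as:

```python
import numpy as np

def alr(x, denom=-1):
    """Additive log-ratio transform: log of each part over a chosen
    denominator part (the last part by default)."""
    x = np.asarray(x, dtype=float)
    keep = np.arange(len(x)) != (denom % len(x))
    return np.log(x[keep] / x[denom])

def alr_inv(z):
    """Inverse alr: append the reference part (log-ratio 0) and reclose."""
    w = np.exp(np.append(z, 0.0))
    return w / w.sum()
```

Note that, consistent with the text above, `alr` is invertible but not distance-preserving: Euclidean distances between alr vectors do not match Aitchison distances on the simplex.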

Center logratio transform

The center log ratio (clr) transform is both an isomorphism and an isometry, where $\operatorname {clr} :S^{D}\rightarrow \mathbb {U} ,\quad \mathbb {U} \subset \mathbb {R} ^{D}$:

$\operatorname {clr} (x)=\left[\log {\frac {x_{1}}{g(x)}}\cdots \log {\frac {x_{D}}{g(x)}}\right],$ where $g(x)$ denotes the geometric mean of $x$. The inverse of this function is also known as the softmax function, commonly used in neural networks.
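A minimal sketch of the clr transform and its softmax inverse (names `clr` and `clr_inv` are our own); subtracting the mean of the logs is the same as dividing each part by the geometric mean:

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform: log of each part over the geometric
    mean g(x). The result always sums to zero."""
    lx = np.log(np.asarray(x, dtype=float))
    return lx - lx.mean()

def clr_inv(z):
    """Inverse clr (the softmax function): exponentiate and reclose."""
    w = np.exp(np.asarray(z, dtype=float))
    return w / w.sum()
```

The image $\mathbb {U}$ is the hyperplane of zero-sum vectors in $\mathbb {R} ^{D}$, which the test below checks numerically.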

Isometric logratio transform

The isometric log ratio (ilr) transform is both an isomorphism and an isometry, where $\operatorname {ilr} :S^{D}\rightarrow \mathbb {R} ^{D-1}$:

$\operatorname {ilr} (x)={\big [}\langle x,e_{1}\rangle ,\ldots ,\langle x,e_{D-1}\rangle {\big ]}$

There are multiple ways to construct orthonormal bases, including using Gram–Schmidt orthogonalization or a singular-value decomposition of clr-transformed data. Another alternative is to construct log contrasts from a bifurcating tree: if we are given a bifurcating tree, we can construct a basis from the internal nodes in the tree.

Figure: a representation of a tree in terms of its orthogonal components, where $\ell$ represents an internal node, an element of the orthonormal basis; this is a precursor to using the tree as a scaffold for the ilr transform.

Each vector in the basis would be determined as follows

$e_{\ell }=C[\exp(\,\underbrace {0,\ldots ,0} _{k},\underbrace {a,\ldots ,a} _{r},\underbrace {b,\ldots ,b} _{s},\underbrace {0,\ldots ,0} _{t}\,)]$ The elements within each vector are given as follows

$a={\frac {\sqrt {s}}{\sqrt {r(r+s)}}}\quad {\text{and}}\quad b={\frac {-{\sqrt {r}}}{\sqrt {s(r+s)}}},$ where $k,r,s,t$ are the respective numbers of tips in the corresponding subtrees shown in the figure. It can be shown that the resulting basis is orthonormal.
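The orthonormality claim can be checked numerically. The sketch below (our own construction, assuming a balanced bifurcating tree on four tips) builds each basis element from its $(k, r, s, t)$ tip counts and computes the Gram matrix under the Aitchison inner product, which should be the identity:

```python
import numpy as np

def closure(x):
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def tree_basis_vector(k, r, s, t):
    """One basis element e_l built from the tip counts (k, r, s, t)
    of the subtrees around an internal node."""
    a = np.sqrt(s) / np.sqrt(r * (r + s))
    b = -np.sqrt(r) / np.sqrt(s * (r + s))
    log_e = np.concatenate([np.zeros(k), np.full(r, a),
                            np.full(s, b), np.zeros(t)])
    return closure(np.exp(log_e))

def aitchison_inner(x, y):
    """Aitchison inner product, computed as the dot product of the
    centred log-ratios (equivalent to the double-sum definition)."""
    lx, ly = np.log(x), np.log(y)
    return float(np.dot(lx - lx.mean(), ly - ly.mean()))

# Balanced tree on 4 tips: root splits {1,2}|{3,4}, then {1}|{2}
# and {3}|{4}; (k, r, s, t) are read off each internal node.
basis = [tree_basis_vector(0, 2, 2, 0),
         tree_basis_vector(0, 1, 1, 2),
         tree_basis_vector(2, 1, 1, 0)]
gram = np.array([[aitchison_inner(u, v) for v in basis] for u in basis])
```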

Once the basis $\Psi$ is built, the ilr transform can be calculated as follows

$\operatorname {ilr} (x)=\operatorname {clr} (x)\Psi ^{T}$ where each element in the ilr transformed data is of the following form

$b_{i}={\sqrt {\frac {rs}{r+s}}}\log {\frac {g(x_{R})}{g(x_{S})}},$ where $x_{R}$ and $x_{S}$ are the sets of values corresponding to the tips in the subtrees $R$ and $S$.

Examples

• In chemistry, compositions can be expressed as molar concentrations of each component. As the sum of all concentrations is not determined, the whole composition of D parts is needed and is thus expressed as a vector of D molar concentrations. These compositions can be translated into weight percent by multiplying each component by the appropriate constant.
• In demography, a town may be a compositional data point in a sample of towns; a town in which 35% of the people are Christians, 55% are Muslims, 6% are Jews, and the remaining 4% are others would correspond to the quadruple [0.35, 0.55, 0.06, 0.04]. A data set would correspond to a list of towns.
• In geology, a rock composed of different minerals may be a compositional data point in a sample of rocks; a rock of which 10% is the first mineral, 30% is the second, and the remaining 60% is the third would correspond to the triple [0.1, 0.3, 0.6]. A data set would contain one such triple for each rock in a sample of rocks.
• In high throughput sequencing, data obtained are count compositions since the capacity of the machine determines the number of reads observed. These reduce to probabilities of observing a feature given the sequencing depth.
• In probability and statistics, a partition of the sampling space into disjoint events is described by the probabilities assigned to such events. The vector of D probabilities can be considered as a composition of D parts. As they add to one, one probability can be suppressed and the composition is completely determined.
• In a survey, the proportions of people positively answering some different items can be expressed as percentages. As the total amount is identified as 100, the compositional vector of D components can be defined using only D − 1 components, assuming that the remaining component is the percentage needed for the whole vector to add to 100.