# t-distributed stochastic neighbor embedding

t-distributed stochastic neighbor embedding (t-SNE) is a machine learning algorithm for dimensionality reduction developed by Geoffrey Hinton and Laurens van der Maaten.[1] It is a nonlinear dimensionality reduction technique that is particularly well-suited for embedding high-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points.

The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects have a high probability of being picked, whilst dissimilar points have an extremely small probability of being picked. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback–Leibler divergence between the two distributions with respect to the locations of the points in the map. Note that whilst the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate for the data.

t-SNE has been used in a wide range of applications, including computer security research,[2] music analysis,[3] cancer research,[4] bioinformatics,[5] and biomedical signal processing.[6]

## Details

Given a set of ${\displaystyle N}$ high-dimensional objects ${\displaystyle \mathbf {x} _{1},\dots ,\mathbf {x} _{N}}$, t-SNE first computes probabilities ${\displaystyle p_{ij}}$ that are proportional to the similarity of objects ${\displaystyle \mathbf {x} _{i}}$ and ${\displaystyle \mathbf {x} _{j}}$, as follows:

${\displaystyle p_{j\mid i}={\frac {\exp(-\lVert \mathbf {x} _{i}-\mathbf {x} _{j}\rVert ^{2}/2\sigma _{i}^{2})}{\sum _{k\neq i}\exp(-\lVert \mathbf {x} _{i}-\mathbf {x} _{k}\rVert ^{2}/2\sigma _{i}^{2})}}}$

with ${\displaystyle p_{i\mid i}=0}$, since only pairwise similarities between distinct objects are of interest.

As Van der Maaten and Hinton explained: "The similarity of datapoint ${\displaystyle x_{j}}$ to datapoint ${\displaystyle x_{i}}$ is the conditional probability, ${\displaystyle p_{j|i}}$, that ${\displaystyle x_{i}}$ would pick ${\displaystyle x_{j}}$ as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at ${\displaystyle x_{i}}$."[1]
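As a concrete sketch, the conditional probabilities above can be computed with NumPy; the helper name and the fixed bandwidths passed in are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def conditional_probabilities(X, sigmas):
    """p_{j|i}: Gaussian similarities centered at each x_i, normalized per row.

    X: (N, D) array of high-dimensional points; sigmas: (N,) per-point bandwidths.
    """
    # Squared Euclidean distances ||x_i - x_j||^2 for every pair.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    logits = -sq_dists / (2.0 * sigmas[:, None] ** 2)
    np.fill_diagonal(logits, -np.inf)   # a point never picks itself: p_{i|i} = 0
    # Numerically stable softmax over each row.
    stable = np.exp(logits - logits.max(axis=1, keepdims=True))
    return stable / stable.sum(axis=1, keepdims=True)
```

Each row ${\displaystyle i}$ of the result is the distribution over neighbors that ${\displaystyle x_{i}}$ would pick, matching the quoted description.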

The conditional probabilities are then symmetrized into joint probabilities:

${\displaystyle p_{ij}={\frac {p_{j\mid i}+p_{i\mid j}}{2N}}}$
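In code, this symmetrization is a one-liner; sketched with NumPy (the helper name is illustrative):

```python
import numpy as np

def joint_probabilities(P_cond):
    """Joint p_ij = (p_{j|i} + p_{i|j}) / 2N from conditional probabilities.

    Dividing by 2N makes the joint probabilities sum to 1 over all pairs,
    since each of the N rows of P_cond sums to 1.
    """
    N = P_cond.shape[0]
    return (P_cond + P_cond.T) / (2.0 * N)
```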

The bandwidth of the Gaussian kernels ${\displaystyle \sigma _{i}}$ is set in such a way that the perplexity of the conditional distribution equals a predefined perplexity, using the bisection method. As a result, the bandwidth is adapted to the density of the data: smaller values of ${\displaystyle \sigma _{i}}$ are used in denser parts of the data space.
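A sketch of that bisection search in NumPy (the function name, bracket bounds, and tolerance are illustrative choices, not prescribed by the paper):

```python
import numpy as np

def bandwidth_for_perplexity(sq_dists_i, target_perplexity,
                             tol=1e-5, max_iter=100):
    """Find sigma_i so that the perplexity 2^H(P_i) of the conditional
    distribution over neighbors matches the target, via bisection.

    sq_dists_i: squared distances from x_i to every *other* point (length N-1).
    """
    lo, hi = 1e-10, 1e10                      # bracket for sigma
    sigma = 1.0
    for _ in range(max_iter):
        sigma = np.sqrt(lo * hi)              # geometric midpoint
        p = np.exp(-sq_dists_i / (2.0 * sigma ** 2))
        p /= p.sum()
        entropy_bits = -np.sum(p * np.log2(p + 1e-12))
        perplexity = 2.0 ** entropy_bits
        if abs(perplexity - target_perplexity) < tol:
            break
        if perplexity > target_perplexity:
            hi = sigma                        # distribution too flat: shrink sigma
        else:
            lo = sigma                        # too peaked: widen sigma
    return sigma
```

Running this once per point ${\displaystyle i}$ yields the density-adapted bandwidths described above: a larger ${\displaystyle \sigma _{i}}$ flattens the distribution and raises its perplexity, which is what makes bisection applicable.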

t-SNE aims to learn a ${\displaystyle d}$-dimensional map ${\displaystyle \mathbf {y} _{1},\dots ,\mathbf {y} _{N}}$ (with ${\displaystyle \mathbf {y} _{i}\in \mathbb {R} ^{d}}$) that reflects the similarities ${\displaystyle p_{ij}}$ as well as possible. To this end, it measures similarities ${\displaystyle q_{ij}}$ between two points in the map ${\displaystyle \mathbf {y} _{i}}$ and ${\displaystyle \mathbf {y} _{j}}$, using a very similar approach. Specifically, ${\displaystyle q_{ij}}$ is defined as:

${\displaystyle q_{ij}={\frac {(1+\lVert \mathbf {y} _{i}-\mathbf {y} _{j}\rVert ^{2})^{-1}}{\sum _{k\neq m}(1+\lVert \mathbf {y} _{k}-\mathbf {y} _{m}\rVert ^{2})^{-1}}}}$

Here a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points, in order to allow dissimilar objects to be modeled far apart in the map.
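The low-dimensional affinities can be sketched in NumPy as follows (the helper name is illustrative):

```python
import numpy as np

def low_dim_affinities(Y):
    """q_ij from a Student-t kernel with one degree of freedom.

    The heavy tails of the (1 + ||y_i - y_j||^2)^{-1} kernel let dissimilar
    objects sit far apart in the map without incurring a large penalty.
    """
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)
    np.fill_diagonal(inv, 0.0)           # the normalization runs over k != m only
    return inv / inv.sum()
```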

The locations of the points ${\displaystyle \mathbf {y} _{i}}$ in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution ${\displaystyle Q}$ from the distribution ${\displaystyle P}$, that is:

${\displaystyle \mathrm {KL} (P\parallel Q)=\sum _{i\neq j}p_{ij}\log {\frac {p_{ij}}{q_{ij}}}}$

The minimization of the Kullback–Leibler divergence with respect to the points ${\displaystyle \mathbf {y} _{i}}$ is performed using gradient descent. The result of this optimization is a map that reflects the similarities between the high-dimensional inputs well.
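This gradient has a closed form in the original paper, which makes the descent step simple to sketch in NumPy; the learning rate and the plain update below are illustrative simplifications (the paper additionally uses momentum and early exaggeration):

```python
import numpy as np

def kl_gradient(P, Y):
    """dC/dy_i = 4 * sum_j (p_ij - q_ij)(y_i - y_j)(1 + ||y_i - y_j||^2)^{-1}."""
    diffs = Y[:, None, :] - Y[None, :, :]
    inv = 1.0 / (1.0 + np.sum(diffs ** 2, axis=-1))
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()                       # low-dimensional affinities q_ij
    weights = (P - Q) * inv                   # per-pair scalar weights
    return 4.0 * np.einsum('ij,ijk->ik', weights, diffs)

def gradient_descent(P, Y0, learning_rate=1.0, n_steps=200):
    """Plain gradient descent on the map coordinates."""
    Y = Y0.copy()
    for _ in range(n_steps):
        Y -= learning_rate * kl_gradient(P, Y)
    return Y
```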

## References

1. ^ a b van der Maaten, L.J.P.; Hinton, G.E. (Nov 2008). "Visualizing High-Dimensional Data Using t-SNE" (PDF). Journal of Machine Learning Research. 9: 2579–2605.
2. ^ Gashi, I.; Stankovic, V.; Leita, C.; Thonnard, O. (2009). "An Experimental Study of Diversity with Off-the-shelf AntiVirus Engines". Proceedings of the IEEE International Symposium on Network Computing and Applications: 4–11.
3. ^ Hamel, P.; Eck, D. (2010). "Learning Features from Music Audio with Deep Belief Networks". Proceedings of the International Society for Music Information Retrieval Conference: 339–344.
4. ^ Jamieson, A.R.; Giger, M.L.; Drukker, K.; Lui, H.; Yuan, Y.; Bhooshan, N. (2010). "Exploring Nonlinear Feature Space Dimension Reduction and Data Representation in Breast CADx with Laplacian Eigenmaps and t-SNE". Medical Physics. 37 (1): 339–351. doi:10.1118/1.3267037.
5. ^ Wallach, I.; Liliean, R. (2009). "The Protein-Small-Molecule Database, A Non-Redundant Structural Resource for the Analysis of Protein-Ligand Binding". Bioinformatics. 25 (5): 615–620. doi:10.1093/bioinformatics/btp035. PMID 19153135.
6. ^ Birjandtalab, J.; Pouyan, M. B.; Nourani, M. (2016-02-01). "Nonlinear dimension reduction for EEG-based epileptic seizure detection". 2016 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI): 595–598. doi:10.1109/BHI.2016.7455968.