# Elastic map

Elastic maps provide a tool for nonlinear dimensionality reduction. By construction, they are a system of elastic springs embedded in the data space.[1] This system approximates a low-dimensional manifold. The elastic coefficients of this system allow a switch from completely unstructured k-means clustering (zero elasticity) to estimators located close to linear PCA manifolds (for high bending and low stretching moduli). With some intermediate values of the elasticity coefficients, the system effectively approximates nonlinear principal manifolds. This approach is based on a mechanical analogy between principal manifolds, which pass through "the middle" of the data distribution, and elastic membranes and plates. The method was developed by A.N. Gorban, A.Y. Zinovyev and A.A. Pitenko in 1996–1998.

## Energy of elastic map

Let ${\displaystyle {\mathcal {S}}}$ be a data set in a finite-dimensional Euclidean space. An elastic map is represented by a set of nodes ${\displaystyle {\bf {w}}_{j}}$ in the same space. Each data point ${\displaystyle s\in {\mathcal {S}}}$ has a host node, namely the closest node ${\displaystyle {\bf {w}}_{j}}$ (if there are several closest nodes, one takes the node with the smallest number). The data set ${\displaystyle {\mathcal {S}}}$ is divided into classes ${\displaystyle K_{j}=\{s\ |\ {\bf {w}}_{j}{\mbox{ is a host of }}s\}}$.

The approximation energy D is the distortion

${\displaystyle D={\frac {1}{2}}\sum _{j=1}^{k}\sum _{s\in K_{j}}\|s-{\bf {w}}_{j}\|^{2}}$,

which is the energy of the springs with unit elasticity which connect each data point with its host node. It is possible to apply weighting factors to the terms of this sum, for example to reflect the standard deviation of the probability density function of any subset of data points ${\displaystyle \{s_{i}\}}$.
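The distortion can be computed directly from these definitions. The following NumPy sketch (the function name is illustrative) assigns each point to its host node and sums the spring energies:

```python
import numpy as np

def distortion(data, nodes):
    """Approximation energy D: half the sum of squared distances
    from each data point to its host (closest) node."""
    # squared distances between every point and every node: (n_points, n_nodes)
    d2 = ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    # argmin picks the first (smallest-numbered) node in case of ties
    hosts = d2.argmin(axis=1)
    return 0.5 * d2[np.arange(len(data)), hosts].sum(), hosts
```

For instance, a single data point at distance 1 from its only candidate host contributes 0.5 to D.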

On the set of nodes an additional structure is defined. Some pairs of nodes, ${\displaystyle ({\bf {w}}_{i},{\bf {w}}_{j})}$, are connected by elastic edges. Call this set of pairs ${\displaystyle E}$. Some triplets of nodes, ${\displaystyle ({\bf {w}}_{i},{\bf {w}}_{j},{\bf {w}}_{k})}$, form bending ribs. Call this set of triplets ${\displaystyle G}$.

The stretching energy is ${\displaystyle U_{E}={\frac {1}{2}}\lambda \sum _{({\bf {w}}_{i},{\bf {w}}_{j})\in E}\|{\bf {w}}_{i}-{\bf {w}}_{j}\|^{2}}$ and the bending energy is ${\displaystyle U_{G}={\frac {1}{2}}\mu \sum _{({\bf {w}}_{i},{\bf {w}}_{j},{\bf {w}}_{k})\in G}\|{\bf {w}}_{i}-2{\bf {w}}_{j}+{\bf {w}}_{k}\|^{2}}$,

where ${\displaystyle \lambda }$ and ${\displaystyle \mu }$ are the stretching and bending moduli respectively. The stretching energy is sometimes referred to as the membrane term, and the bending energy as the thin plate term.[5]

For example, on the 2D rectangular grid the elastic edges are just vertical and horizontal edges (pairs of closest vertices) and the bending ribs are the vertical or horizontal triplets of consecutive (closest) vertices.
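For such a rectangular grid, the edge set E and rib set G can be enumerated explicitly. A minimal sketch, assuming row-major numbering of the grid nodes:

```python
def grid_edges_and_ribs(rows, cols):
    """Edges (pairs of adjacent nodes) and bending ribs (triplets of
    consecutive nodes) for a rows x cols rectangular grid,
    with nodes indexed row-major."""
    idx = lambda r, c: r * cols + c
    edges, ribs = [], []
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                edges.append((idx(r, c), idx(r, c + 1)))            # horizontal edge
            if r + 1 < rows:
                edges.append((idx(r, c), idx(r + 1, c)))            # vertical edge
            if c + 2 < cols:
                ribs.append((idx(r, c), idx(r, c + 1), idx(r, c + 2)))  # horizontal rib
            if r + 2 < rows:
                ribs.append((idx(r, c), idx(r + 1, c), idx(r + 2, c)))  # vertical rib
    return edges, ribs
```

On a 2×3 grid this yields 7 edges (4 horizontal, 3 vertical) and 2 horizontal ribs; no column has three nodes, so there are no vertical ribs.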

The total energy of the elastic map is thus ${\displaystyle U=D+U_{E}+U_{G}.}$

The position of the nodes ${\displaystyle \{{\bf {w}}_{j}\}}$ is determined by the mechanical equilibrium of the elastic map, i.e. its location is such that it minimizes the total energy ${\displaystyle U}$.
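Putting the three terms together, the total energy can be evaluated as follows (a sketch continuing the notation above; the function name is illustrative):

```python
import numpy as np

def total_energy(data, nodes, edges, ribs, lam, mu):
    """U = D + U_E + U_G, with edges as index pairs and ribs as
    index triplets into the `nodes` array."""
    # D: each data point attached to its closest node by a unit spring
    d2 = ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
    D = 0.5 * d2.min(axis=1).sum()
    # U_E: stretching energy of the elastic edges
    UE = 0.5 * lam * sum(((nodes[i] - nodes[j]) ** 2).sum() for i, j in edges)
    # U_G: bending energy of the ribs (zero for collinear, evenly spaced nodes)
    UG = 0.5 * mu * sum(((nodes[i] - 2 * nodes[j] + nodes[k]) ** 2).sum()
                        for i, j, k in ribs)
    return D + UE + UG
```

Note that a rib of three collinear, evenly spaced nodes contributes no bending energy, so U_G penalizes only deviations from local straightness.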

## Expectation-maximization algorithm

For a given splitting of the dataset ${\displaystyle {\mathcal {S}}}$ into classes ${\displaystyle K_{j}}$, minimization of the quadratic functional ${\displaystyle U}$ is a linear problem with a sparse coefficient matrix. Therefore, similarly to principal component analysis or k-means, a splitting method is used:

• For given ${\displaystyle \{{\bf {w}}_{j}\}}$ find ${\displaystyle \{K_{j}\}}$;
• For given ${\displaystyle \{K_{j}\}}$ minimize ${\displaystyle U}$ and find ${\displaystyle \{{\bf {w}}_{j}\}}$;
• If no change, terminate.

This expectation-maximization algorithm guarantees only a local minimum of ${\displaystyle U}$. Various additional methods have been proposed to improve the approximation. For example, the softening strategy starts with a rigid grid (large stretching and bending moduli ${\displaystyle \lambda }$ and ${\displaystyle \mu }$) and finishes with a soft grid (small ${\displaystyle \lambda }$ and ${\displaystyle \mu }$). The training proceeds in several epochs, each with its own grid rigidity. Another adaptive strategy is the growing net: one starts from a small number of nodes and gradually adds new ones, each epoch proceeding with its own number of nodes.
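The alternating scheme above can be sketched with a dense linear solve for the minimization step; this is illustrative only, since practical implementations exploit the sparsity of the coefficient matrix:

```python
import numpy as np

def fit_elastic_map(data, nodes, edges, ribs, lam, mu, n_iter=50):
    """Alternate host assignment (E-step) with exact minimization of the
    quadratic energy U for a fixed partition (M-step)."""
    nodes = nodes.copy()
    k, dim = nodes.shape
    prev_hosts = None
    for _ in range(n_iter):
        # E-step: assign each data point to its closest node (its host)
        d2 = ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=2)
        hosts = d2.argmin(axis=1)
        if prev_hosts is not None and np.array_equal(hosts, prev_hosts):
            break                          # partition unchanged: terminate
        prev_hosts = hosts
        # M-step: minimizing the quadratic U is the linear system M w = b
        M = np.zeros((k, k))
        b = np.zeros((k, dim))
        for j in range(k):                 # data-attachment terms
            mask = hosts == j
            M[j, j] += mask.sum()
            b[j] += data[mask].sum(axis=0)
        for i, j in edges:                 # stretching terms (graph Laplacian)
            M[i, i] += lam; M[j, j] += lam
            M[i, j] -= lam; M[j, i] -= lam
        for i, j, l in ribs:               # bending terms: coefficients (1, -2, 1)
            a = np.zeros(k); a[i] += 1.0; a[j] -= 2.0; a[l] += 1.0
            M += mu * np.outer(a, a)
        nodes = np.linalg.solve(M, b)      # solved per coordinate dimension
    return nodes
```

With small moduli, the fitted nodes are pulled close to the data; with large moduli, the edge and rib terms dominate and the map stays nearly straight.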

## Applications

The most important applications of the method and free software[3] are in bioinformatics[7][8] for exploratory data analysis and visualisation of multidimensional data, for data visualisation in economics, social and political sciences,[9] as an auxiliary tool for data mapping in geographic information systems, and for visualisation of data of various nature.

The method is applied in quantitative biology for reconstructing the curved surface of a tree leaf from a stack of light microscopy images.[10] This reconstruction is used for quantifying the geodesic distances between trichomes and their patterning, which is a marker of a plant's capability to resist pathogens.

The method has also been adapted as a support tool in the decision process underlying the selection, optimization, and management of financial portfolios.[11]

The method of elastic maps has been systematically tested and compared with several machine learning methods on the applied problem of identifying the flow regime of a gas-liquid flow in a pipe.[12] There are various regimes: single-phase water or air flow, bubbly flow, bubbly-slug flow, slug flow, slug-churn flow, churn flow, churn-annular flow, and annular flow. The simplest and most common method of identifying the flow regime is visual observation. This approach is, however, subjective and unsuitable for relatively high gas and liquid flow rates, so machine learning methods have been proposed by many authors. The methods are applied to differential pressure data collected during a calibration process. The method of elastic maps provided a 2D map on which the area of each regime is represented. The comparison with some other machine learning methods is presented in Table 1 for various pipe diameters and pressures.

Table 1. Flow regime identification accuracy (%)

| Method | Calibration | Testing | Larger diameter | Higher pressure |
|--------|-------------|---------|-----------------|-----------------|
|        | 100         | 98.2    | 100             | 100             |
|        | 99.1        | 89.2    | 76.2            | 70.5            |
|        | 100         | 88.5    | 61.7            | 70.5            |
|        | 94.9        | 94.2    | 83.6            | 88.6            |
|        | 100         | 94.6    | 82.1            | 84.1            |

Here, ANN stands for backpropagation artificial neural networks, SVM for support vector machines, and SOM for self-organizing maps. A hybrid technology was developed for engineering applications.[13] In this technology, elastic maps are used in combination with principal component analysis (PCA), independent component analysis (ICA) and backpropagation ANNs.

The textbook[14] provides a systematic comparison of elastic maps and self-organizing maps (SOMs) in applications to economic and financial decision-making.

## References

1. ^ a b A. N. Gorban, A. Y. Zinovyev, Principal Graphs and Manifolds, In: Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods and Techniques, Olivas E.S. et al. Eds. Information Science Reference, IGI Global: Hershey, PA, USA, 2009. 28–59.
2. ^ Wang, Y., Klijn, J.G., Zhang, Y., Sieuwerts, A.M., Look, M.P., Yang, F., Talantov, D., Timmermans, M., Meijer-van Gelder, M.E., Yu, J. et al.: Gene expression profiles to predict distant metastasis of lymph-node-negative primary breast cancer. Lancet 365, 671–679 (2005); Data online
3. ^ a b A. Zinovyev, ViDaExpert - Multidimensional Data Visualization Tool (free for non-commercial use). Institut Curie, Paris.
4. ^ A. Zinovyev, ViDaExpert overview, IHES (Institut des Hautes Études Scientifiques), Bures-Sur-Yvette, Île-de-France.
5. ^ Michael Kass, Andrew Witkin, Demetri Terzopoulos, Snakes: Active contour models, International Journal of Computer Vision, Vol. 1, No. 4 (1988), 321–331.
6. ^ A. N. Gorban, A. Zinovyev, Principal manifolds and graphs in practice: from molecular biology to dynamical systems, International Journal of Neural Systems, Vol. 20, No. 3 (2010) 219–232.
7. ^ A.N. Gorban, B. Kegl, D. Wunsch, A. Zinovyev (Eds.), Principal Manifolds for Data Visualisation and Dimension Reduction, LNCSE 58, Springer: Berlin – Heidelberg – New York, 2007. ISBN 978-3-540-73749-0
8. ^ M. Chacón, M. Lévano, H. Allende, H. Nowak, Detection of Gene Expressions in Microarrays by Applying Iteratively Elastic Neural Net, In: B. Beliczynski et al. (Eds.), Lecture Notes in Computer Sciences, Vol. 4432, Springer: Berlin – Heidelberg 2007, 355–363.
9. ^ A. Zinovyev, Data visualization in political and social sciences, In: SAGE "International Encyclopedia of Political Science", Badie, B., Berg-Schlosser, D., Morlino, L. A. (Eds.), 2011.
10. ^ H. Failmezger, B. Jaegle, A. Schrader, M. Hülskamp, A. Tresch., Semi-automated 3D leaf reconstruction and analysis of trichome patterning from light microscopic images, PLoS Computational Biology, 2013, 9(4):e1003029.
11. ^ M. Resta, Portfolio optimization through elastic maps: Some evidence from the Italian stock exchange, Knowledge-Based Intelligent Information and Engineering Systems, B. Apolloni, R.J. Howlett and L. Jain (eds.), Lecture Notes in Computer Science, Vol. 4693, Springer: Berlin – Heidelberg, 2010, 635-641.
12. ^ H. Shaban, S. Tavoularis, Identification of flow regime in vertical upward air–water pipe flow using differential pressure signals and elastic maps, International Journal of Multiphase Flow 61 (2014) 62-72.
13. ^ H. Shaban, S. Tavoularis, Measurement of gas and liquid flow rates in two-phase pipe flows by the application of machine learning techniques to differential pressure signals, International Journal of Multiphase Flow 67(2014), 106-117
14. ^ M. Resta, Computational Intelligence Paradigms in Economic and Financial Decision Making, Series Intelligent Systems Reference Library, Volume 99, Springer International Publishing, Switzerland 2016.