Neural gas

From Wikipedia, the free encyclopedia
Not to be confused with Nerve gas.


Neural gas is an artificial neural network, inspired by the self-organizing map and introduced in 1991 by Thomas Martinetz and Klaus Schulten.[1] The neural gas is a simple algorithm for finding optimal data representations based on feature vectors. The algorithm was named "neural gas" because of the dynamics of the feature vectors during the adaptation process, which distribute themselves like a gas within the data space. It is applied wherever data compression or vector quantization is an issue, for example in speech recognition,[2] image processing[3] or pattern recognition. As a robustly converging alternative to k-means clustering, it is also used for cluster analysis.[4]

Algorithm

Given are a probability distribution P(x) of data vectors x and a finite number of feature vectors w_i, i = 1, ..., N.

At each time step t, a data vector randomly drawn from P is presented. Next, the feature vectors are ordered by their distance to the given data vector x: i_0 denotes the index of the closest feature vector, i_1 the index of the second closest, and so on up to i_{N-1}, the index of the feature vector most distant to x. Each feature vector w_{i_k} (k = 0, ..., N-1) is then adapted according to

 w_{i_k}^{t+1} = w_{i_k}^{t} + \epsilon\cdot  e^{-k/\lambda}\cdot (x-w_{i_k}^{t})

with ε as the adaptation step size and λ as the so-called neighborhood range. ε and λ are reduced with increasing t. After sufficiently many adaptation steps the feature vectors cover the data space with minimum representation error.[5]
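The adaptation rule above can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the original paper; the function and variable names are chosen for readability:

```python
import numpy as np

def adapt_step(w, x, eps, lam):
    """One neural-gas adaptation step.

    w   : (N, d) array of feature vectors w_i
    x   : (d,) data vector drawn from P(x)
    eps : adaptation step size epsilon
    lam : neighborhood range lambda
    """
    # Rank each feature vector by its distance to x: rank k=0 is the closest.
    order = np.argsort(np.linalg.norm(w - x, axis=1))
    k = np.empty(len(w))
    k[order] = np.arange(len(w))
    # w_{i_k} <- w_{i_k} + eps * exp(-k / lam) * (x - w_{i_k})
    return w + eps * np.exp(-k / lam)[:, None] * (x - w)
```

Note that every feature vector moves toward x, but the size of the move decays exponentially with the vector's distance rank k, not with the distance itself.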

The adaptation step of the neural gas can be interpreted as gradient descent on a cost function. Because not only the closest feature vector but all of them are adapted, with a step size that decreases with increasing distance rank, the algorithm converges much more robustly than (online) k-means clustering. The neural gas model neither deletes existing nodes nor creates new ones.
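A complete training loop combines the adaptation rule with decreasing ε and λ. The exponential annealing schedule below follows the common form used for neural gas, but the specific hyperparameter values are illustrative assumptions:

```python
import numpy as np

def neural_gas(data, n_units=10, n_steps=5000,
               eps_i=0.5, eps_f=0.005, lam_i=10.0, lam_f=0.01, seed=0):
    """Fit neural-gas feature vectors to a data set (illustrative sketch).

    eps_i/eps_f and lam_i/lam_f are initial/final values of the step size
    and neighborhood range; both are annealed exponentially over n_steps.
    """
    rng = np.random.default_rng(seed)
    # Initialize feature vectors from randomly chosen data points.
    w = data[rng.choice(len(data), size=n_units, replace=False)].astype(float)
    for t in range(n_steps):
        frac = t / n_steps
        eps = eps_i * (eps_f / eps_i) ** frac   # step size, reduced with t
        lam = lam_i * (lam_f / lam_i) ** frac   # neighborhood range, reduced with t
        x = data[rng.integers(len(data))]       # random sample from the data
        # Rank feature vectors by distance to x and apply the adaptation rule.
        order = np.argsort(np.linalg.norm(w - x, axis=1))
        k = np.empty(n_units)
        k[order] = np.arange(n_units)
        w += eps * np.exp(-k / lam)[:, None] * (x - w)
    return w
```

Early in training the large λ moves all feature vectors together; as λ shrinks, essentially only the winner is adapted, so the final steps behave like online k-means while the earlier ranked updates prevent units from getting stuck.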

Further reading

  • T. Martinetz, S. Berkovich, and K. Schulten. "Neural-gas" Network for Vector Quantization and its Application to Time-Series Prediction. IEEE-Transactions on Neural Networks, 4(4):558-569, 1993.
  • T. Martinetz and K. Schulten. Topology representing networks. Neural Networks, 7(3):507-522, 1994.

References

  1. ^ Thomas Martinetz and Klaus Schulten (1991). Artificial Neural Networks. Elsevier. pp. 397–402.
  2. ^ F. Curatelli and O. Mayora-Iberra (2000). In Osvaldo Cairó, L. Enrique Sucar, Francisco J. Cantú-Ortiz (eds.). MICAI 2000: Advances in Artificial Intelligence: Mexican International Conference on Artificial Intelligence, Acapulco, Mexico, April 2000: Proceedings. Springer. p. 109. ISBN 978-3-540-67354-5.
  3. ^ Anastassia Angelopoulou, Alexandra Psarrou, Jose Garcia Rodriguez and Kenneth Revett (2005). In Yanxi Liu, Tianzi Jiang, Changshui Zhang (eds.). Computer Vision for Biomedical Image Applications: First International Workshop, CVBIA 2005, Beijing, China, October 21, 2005: Proceedings. Springer. p. 210. doi:10.1007/11569541_22. ISBN 978-3-540-29411-5.
  4. ^ Fernando Canales and Max Chacon (2007). In Luis Rueda, Domingo Mery, Josef Kittler (eds.). Progress in Pattern Recognition, Image Analysis and Applications: 12th Iberoamerican Congress on Pattern Recognition, CIARP 2007, Viña del Mar-Valparaiso, Chile, November 13–16, 2007: Proceedings. Springer. pp. 684–693. doi:10.1007/978-3-540-76725-1_71. ISBN 978-3-540-76724-4.
  5. ^ http://wwwold.ini.rub.de/VDM/research/gsn/JavaPaper/img187.gif
