Manifold hypothesis

From Wikipedia, the free encyclopedia

In theoretical computer science and the study of machine learning, the manifold hypothesis is the hypothesis that many high-dimensional data sets that occur in the real world actually lie along low-dimensional latent manifolds inside the high-dimensional space.[1][2][3] As a consequence of the manifold hypothesis, many data sets that initially appear to require many variables to describe can actually be described by a comparatively small number of variables, analogous to the local coordinate system of the underlying manifold. It is suggested that this principle underpins the effectiveness of machine learning algorithms in describing high-dimensional data sets by considering a few common features.

The manifold hypothesis is related to the effectiveness of nonlinear dimensionality reduction techniques in machine learning. Many dimensionality reduction techniques, such as manifold sculpting, manifold alignment, and manifold regularization, assume that the data lie along a low-dimensional submanifold.
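The idea can be illustrated with a minimal numerical sketch (an assumption for illustration, not drawn from the references above): points sampled from a one-dimensional manifold, here a helix, embedded in three-dimensional ambient space. Globally the data need all three ambient coordinates, but in a small neighborhood the manifold is well approximated by its one-dimensional tangent line, which local principal component analysis recovers.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 4 * np.pi, 2000))

# Points on a 1-D latent manifold (a helix) embedded in 3-D ambient space.
X = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])

# Globally, no single direction explains most of the variance:
# the singular values of the centered data are all substantial.
global_sv = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
global_1d_ratio = global_sv[0] ** 2 / np.sum(global_sv ** 2)

# Locally, one tangent direction captures nearly all the variance,
# reflecting the intrinsic dimension 1 of the manifold.
center = X[1000]
idx = np.argsort(np.linalg.norm(X - center, axis=1))[:50]
nbrs = X[idx]
local_sv = np.linalg.svd(nbrs - nbrs.mean(axis=0), compute_uv=False)
local_1d_ratio = local_sv[0] ** 2 / np.sum(local_sv ** 2)

print(f"global top-direction variance share: {global_1d_ratio:.3f}")
print(f"local  top-direction variance share: {local_1d_ratio:.3f}")
```

Local linearization of this kind, fitting low-dimensional tangent spaces to small neighborhoods, is the common starting point of the manifold learning techniques mentioned above.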

References

  1. ^ Cayton, Lawrence (2005). "Algorithms for manifold learning". Univ. of California at San Diego Tech. Rep. 12 (1–17): 1.
  2. ^ Fefferman, Charles; Mitter, Sanjoy; Narayanan, Hariharan (2016-02-09). "Testing the manifold hypothesis". Journal of the American Mathematical Society. 29 (4): 983–1049. doi:10.1090/jams/852. ISSN 0894-0347.
  3. ^ Olah, Christopher (2014). "Neural Networks, Manifolds, and Topology". Available: https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/

Further reading

  • Brown, Bradley C. A.; et al. (2022). "The Union of Manifolds Hypothesis and its Implications for Deep Generative Modelling". arXiv:2207.02862.