Universal approximation theorem

In the mathematical theory of artificial neural networks, the universal approximation theorem states[1] that a feed-forward network with a single hidden layer containing a finite number of neurons can approximate continuous functions on compact subsets of $\mathbb{R}^n$, under mild assumptions on the activation function. The theorem thus states that simple neural networks can represent a wide variety of interesting functions when given appropriate parameters; however, it does not touch upon the algorithmic learnability of those parameters.

One of the first versions of the theorem was proved by George Cybenko in 1989 for sigmoid activation functions.[2] It was later shown[3] that the class of multilayer feedforward networks is a universal approximator if and only if the activation function is not polynomial.

Kurt Hornik showed in 1991[4] that it is not the specific choice of the activation function, but rather the multilayer feedforward architecture itself which gives neural networks the potential of being universal approximators. The output units are always assumed to be linear.

Although feed-forward networks with a single hidden layer are universal approximators, the width of such networks may have to grow exponentially with the input dimension. In 2017, Lu et al.[5] proved a universal approximation theorem for width-bounded deep neural networks. In particular, they showed that networks of width n + 4 with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to L1 distance, provided that network depth is allowed to grow. They also showed that expressive power is limited when the width is at most n: except for a set of measure zero, Lebesgue-integrable functions cannot be approximated by width-n ReLU networks.

A later result[6] showed that ReLU networks of width n + 1 are sufficient to approximate any continuous function of n-dimensional input variables.

Formal statement

The universal approximation theorem can be expressed mathematically:[2][4][7][8]

Unbounded Width Case

Let $\varphi\colon\mathbb{R}\to\mathbb{R}$ be a nonconstant, bounded, and continuous function (called the activation function). Let $I_m$ denote the $m$-dimensional unit hypercube $[0,1]^m$. The space of real-valued continuous functions on $I_m$ is denoted by $C(I_m)$. Then, given any $\varepsilon > 0$ and any function $f \in C(I_m)$, there exist an integer $N$, real constants $v_i, b_i \in \mathbb{R}$ and real vectors $w_i \in \mathbb{R}^m$ for $i = 1, \ldots, N$, such that we may define

$$F(x) = \sum_{i=1}^{N} v_i \,\varphi\!\left(w_i^{\mathsf{T}} x + b_i\right)$$

as an approximate realization of the function $f$; that is,

$$|F(x) - f(x)| < \varepsilon \quad \text{for all } x \in I_m.$$

In other words, functions of the form $F(x)$ are dense in $C(I_m)$.

This still holds when replacing $I_m$ with any compact subset of $\mathbb{R}^m$.
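The sum above is straightforward to evaluate numerically. The following Python/NumPy sketch is an illustration only, not part of any cited proof: for a one-dimensional input, it builds $F(x) = \sum_i v_i\,\varphi(w_i x + b_i)$ with randomly drawn inner weights $w_i$ and biases $b_i$, and fits only the outer coefficients $v_i$ by least squares, which typically yields a small uniform error on a grid for a smooth target once $N$ is moderately large.

```python
# Minimal numerical sketch: functions of the form
# F(x) = sum_i v_i * sigma(w_i * x + b_i) can closely fit a continuous
# target on [0, 1]. Inner weights are random; only the outer
# coefficients v_i are solved by least squares.
import numpy as np

rng = np.random.default_rng(0)

def sigma(z):
    # Sigmoidal activation, as in Cybenko's 1989 statement.
    return 1.0 / (1.0 + np.exp(-z))

def target(x):
    # An arbitrary continuous function on [0, 1] to approximate.
    return np.sin(2 * np.pi * x) + 0.5 * x

N = 200                                  # number of hidden units
x = np.linspace(0.0, 1.0, 1000)          # dense grid on the unit interval
w = rng.normal(scale=20.0, size=N)       # random inner weights w_i
b = rng.uniform(-20.0, 20.0, size=N)     # random biases b_i

Phi = sigma(np.outer(x, w) + b)          # Phi[j, i] = sigma(w_i * x_j + b_i)
v, *_ = np.linalg.lstsq(Phi, target(x), rcond=None)  # outer coefficients v_i

F = Phi @ v                              # approximate realization F(x_j)
print("max |F(x) - f(x)| on the grid:", np.max(np.abs(F - target(x))))
```

Increasing $N$ (or also fitting the inner parameters) generally reduces the achievable error further, in line with the density statement above.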

Bounded Width Case

The universal approximation theorem for width-bounded networks can be expressed mathematically as follows:[5]

For any Lebesgue-integrable function $f\colon\mathbb{R}^n\to\mathbb{R}$ and any $\varepsilon > 0$, there exists a fully-connected ReLU network $\mathcal{A}$ with width $d_m \leq n + 4$, such that the function $F_{\mathcal{A}}$ represented by this network satisfies

$$\int_{\mathbb{R}^n} \left|f(x) - F_{\mathcal{A}}(x)\right|\,\mathrm{d}x < \varepsilon.$$

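As an illustration of the objects in this statement, the sketch below (Python/NumPy, with hypothetical placeholder weights) evaluates a fully-connected ReLU network of width n + 4 and adjustable depth. The theorem only asserts that some choice of weights makes the represented function L1-close to a given target; it does not say how to find such weights, so the random weights here merely fix the architecture.

```python
# Architectural sketch only: a fully-connected ReLU network of width n + 4
# whose depth can grow, matching the width bound in the statement above.
# The random weights below are placeholders, not an approximation scheme.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def narrow_relu_network(x, n, depth, rng):
    """Forward pass of a width-(n + 4) fully-connected ReLU network on x in R^n."""
    width = n + 4
    h = x
    in_dim = n
    for _ in range(depth):
        W = rng.normal(size=(width, in_dim)) / np.sqrt(in_dim)
        b = rng.normal(size=width)
        h = relu(W @ h + b)
        in_dim = width
    w_out = rng.normal(size=width)        # linear output unit
    return w_out @ h

rng = np.random.default_rng(1)
n = 3
x = rng.uniform(size=n)                   # a point in n-dimensional input space
print(narrow_relu_network(x, n, depth=10, rng=rng))
```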

References

  1. Balázs Csanád Csáji (2001). Approximation with Artificial Neural Networks. Faculty of Sciences, Eötvös Loránd University, Hungary.
  2. Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems. 2 (4): 303–314. doi:10.1007/BF02551274.
  3. Leshno, Moshe; Lin, Vladimir Ya.; Pinkus, Allan; Schocken, Shimon (1993). "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function". Neural Networks. 6 (6): 861–867. doi:10.1016/S0893-6080(05)80131-5.
  4. Hornik, Kurt (1991). "Approximation Capabilities of Multilayer Feedforward Networks". Neural Networks. 4 (2): 251–257. doi:10.1016/0893-6080(91)90009-T.
  5. Lu, Zhou; Pu, Hongming; Wang, Feicheng; Hu, Zhiqiang; Wang, Liwei (2017). "The Expressive Power of Neural Networks: A View from the Width". Neural Information Processing Systems: 6231–6239.
  6. Hanin, Boris (2018). "Approximating Continuous Functions by ReLU Nets of Minimal Width". arXiv:1710.11278.
  7. Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation (2nd ed.). Prentice Hall. ISBN 0-13-273350-1.
  8. Hassoun, M. (1995). Fundamentals of Artificial Neural Networks. MIT Press. p. 48.
