Universal approximation theorem
In the mathematical theory of neural networks, the universal approximation theorem states that a feed-forward network with a single hidden layer containing a finite number of neurons (the simplest form of the multilayer perceptron) can approximate continuous functions on compact subsets of Rn arbitrarily well, under mild assumptions on the activation function.
Kurt Hornik showed in 1991 that it is not the specific choice of the activation function but rather the multilayer feed-forward architecture itself that gives neural networks the potential to be universal approximators. The output units are always assumed to be linear. For notational convenience, only the single-output case is shown here; the general case follows easily from it.
Let φ(·) be a nonconstant, bounded, and monotonically increasing continuous function. Let Im denote the m-dimensional unit hypercube [0,1]m, and let C(Im) denote the space of continuous functions on Im. Then, given any function f ∈ C(Im) and ε > 0, there exist an integer N, real constants αi, bi ∈ R, and real vectors wi ∈ Rm, where i = 1, ..., N, such that we may define

F(x) = Σ_{i=1}^{N} αi φ(wiᵀx + bi)

as an approximate realization of the function f, where f is independent of φ; that is,

|F(x) − f(x)| < ε

for all x ∈ Im. In other words, functions of the form F(x) are dense in C(Im).
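The theorem is existential: it guarantees that suitable N, αi, wi, and bi exist, but gives no procedure for constructing them. As a purely illustrative sketch (not part of the theorem or its sources), the following Python snippet builds a function of the stated form F(x) = Σ_{i=1}^{N} αi φ(wiᵀx + bi) with a sigmoid φ, drawing wi and bi at random and fitting only the αi by least squares; the target f(x) = sin(2πx) and all parameter choices are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(z):
    # A nonconstant, bounded, monotonically increasing continuous function.
    return 1.0 / (1.0 + np.exp(-z))

def f(x):
    # An arbitrary continuous target on the unit interval (m = 1);
    # chosen here purely for illustration.
    return np.sin(2 * np.pi * x)

N = 50                                # number of hidden neurons
w = rng.normal(scale=10.0, size=N)    # hidden weights w_i (random, not fitted)
b = rng.uniform(-10.0, 10.0, size=N)  # hidden biases b_i (random, not fitted)

x = np.linspace(0.0, 1.0, 200)        # grid over the unit hypercube [0,1]
H = phi(np.outer(x, w) + b)           # H[j, i] = phi(w_i * x_j + b_i)

# Fit the output coefficients alpha_i by linear least squares,
# matching the assumption of linear output units.
alpha, *_ = np.linalg.lstsq(H, f(x), rcond=None)

F = H @ alpha                         # F(x_j) = sum_i alpha_i * phi(w_i * x_j + b_i)
print("max |F(x) - f(x)| on the grid:", np.max(np.abs(F - f(x))))
```

Increasing N should drive the observed error down, in line with the density statement above, though the theorem itself says nothing about the rate or about how to choose the parameters.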
- Balázs Csanád Csáji, Approximation with Artificial Neural Networks, Faculty of Sciences, Eötvös Loránd University, Hungary.
- Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function", Mathematics of Control, Signals, and Systems, 2(4), 303–314.
- Hornik, Kurt (1991). "Approximation Capabilities of Multilayer Feedforward Networks", Neural Networks, 4(2), 251–257.
- Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation, 2nd ed., Prentice Hall. ISBN 0-13-273350-1.
- Hassoun, M. (1995). Fundamentals of Artificial Neural Networks, MIT Press, p. 48.