Universal approximation theorem

In the mathematical theory of artificial neural networks, universal approximation theorems are results that establish the density of an algorithmically generated class of functions within a given function space of interest. Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, and the approximation is with respect to the compact convergence topology. The capability of the random neural network as a universal approximator for continuous and bounded functions was established by Erol Gelenbe, Z. H. Mao and Y. D. Li in 1999; the random neural network model itself had been introduced by Gelenbe in 1989.

However, there are also a variety of results between non-Euclidean spaces and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture, radial basis-functions, or neural networks with specific properties. Most universal approximation theorems can be parsed into two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons ("arbitrary width" case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons ("arbitrary depth" case).

Universal approximation theorems imply that neural networks can represent a wide variety of interesting functions when given appropriate weights. On the other hand, they typically do not provide a construction for the weights, but merely state that such a construction is possible.

History

One of the first versions of the arbitrary width case was proven by George Cybenko in 1989 for sigmoid activation functions. Kurt Hornik showed in 1991 that it is not the specific choice of the activation function, but rather the multilayer feed-forward architecture itself, which gives neural networks the potential of being universal approximators. Moshe Leshno et al. in 1993 and later Allan Pinkus in 1999 showed that the universal approximation property is equivalent to having a nonpolynomial activation function.

The arbitrary depth case was also studied by a number of authors, such as Zhou Lu et al. in 2017, Boris Hanin and Mark Sellke in 2018, and Patrick Kidger and Terry Lyons in 2020. The minimal width per layer was refined in 2020 for residual networks.

Several extensions of the theorem exist, such as to discontinuous activation functions, noncompact domains, certifiable networks, and alternative network architectures and topologies.

Limitations

According to the causal hierarchy theorem, no collection of neural networks can learn a structural causal model: "an arbitrarily complex and expressive neural net is unable to predict the effects of interventions given observational data alone", even though neural networks are universal approximators. However, a special type of network, the neural causal model, which is amenable to gradient descent, has been introduced, together with an algorithm that is "both sufficient and necessary to determine whether a causal effect can be learned from data".

Arbitrary-width case

The classical form of the universal approximation theorem for arbitrary width and bounded depth is as follows. It extends the classical results of George Cybenko and Kurt Hornik.

Universal approximation theorem: Fix a continuous function $\sigma :\mathbb {R} \rightarrow \mathbb {R}$ (activation function) and positive integers $d,D$ . The function $\sigma$ is not a polynomial if and only if, for every continuous function $f:\mathbb {R} ^{d}\to \mathbb {R} ^{D}$ (target function), every compact subset $K$ of $\mathbb {R} ^{d}$ , and every $\epsilon >0$ there exists a continuous function $f_{\epsilon }:\mathbb {R} ^{d}\to \mathbb {R} ^{D}$ (the layer output) with representation

$f_{\epsilon }=W_{2}\circ \sigma \circ W_{1},$ where $W_{1},W_{2}$ are composable affine maps, $\circ$ denotes composition, and $\sigma$ is applied component-wise, such that the approximation bound

$\sup _{x\in K}\,\|f(x)-f_{\epsilon }(x)\|<\epsilon$ holds.

The theorem states that a network with a single hidden layer, $f_{\epsilon }$, can approximate any well-behaved function $f$. Such a function can also be approximated by a network of greater depth by using the same construction for the first layer and approximating the identity function with later layers.
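The representation $f_{\epsilon }=W_{2}\circ \sigma \circ W_{1}$ can be illustrated numerically: fix a random affine map $W_{1}$ into a wide hidden layer, apply a nonpolynomial activation component-wise, and fit the output affine map $W_{2}$ by least squares. This is a minimal sketch, not the constructive content of the theorem; the layer width, the choice of tanh as $\sigma$, the target function, and the weight distributions are all illustrative assumptions.

```python
import numpy as np

# Sketch of a one-hidden-layer approximator f_eps = W2 ∘ tanh ∘ W1.
# All sizes and distributions below are illustrative choices.
rng = np.random.default_rng(0)
n_hidden = 100

x = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)  # compact set K = [0, 2π]
y = np.sin(x).ravel()                                  # target function f

# W1: random affine map into the hidden layer (held fixed)
w = rng.normal(0.0, 1.0, size=(1, n_hidden))
b = rng.uniform(-2.0 * np.pi, 2.0 * np.pi, size=n_hidden)
hidden = np.tanh(x @ w + b)                            # sigma applied component-wise

# W2: output affine map, fitted by least squares on the sampled grid
coef, *_ = np.linalg.lstsq(hidden, y, rcond=None)
sup_err = np.max(np.abs(hidden @ coef - y))            # sup-norm error on the grid
```

With 100 hidden units the sup-norm error on the grid is already small; widening the hidden layer drives it down further, in line with the arbitrary-width statement.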

Arbitrary-depth case

The 'dual' versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017. They showed that networks of width n+4 with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to the $L^{1}$ distance if network depth is allowed to grow. It was also shown that this general expressive power is lost if the width is less than or equal to n. In the same paper it was shown that ReLU networks of width n+1 suffice to approximate any continuous function of n-dimensional input variables. The following refinement specifies the optimal minimal width for which such an approximation is possible.

Universal approximation theorem ($L^{p}$ distance, ReLU activation, arbitrary depth, minimal width). For any Bochner–Lebesgue p-integrable function $f:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{m}$ and any $\epsilon >0$, there exists a fully-connected ReLU network $F$ of width exactly $d_{m}=\max\{{n+1},m\}$, satisfying

$\int _{\mathbb {R} ^{n}}\left\|f(x)-F(x)\right\|^{p}\,\mathrm {d} x<\epsilon$ .

Moreover, there exists a function $f\in L^{p}(\mathbb {R} ^{n},\mathbb {R} ^{m})$ and some $\epsilon >0$ , for which there is no fully-connected ReLU network of width less than $d_{m}=\max\{{n+1},m\}$ satisfying the above approximation bound.
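Why depth can substitute for width is often illustrated with the tent-map construction: a single narrow ReLU block computes the piecewise-linear tent function, and composing $k$ copies of it yields a sawtooth with $2^{k}$ linear pieces, so the number of pieces grows exponentially in depth at constant width. The sketch below is a standard illustration of this phenomenon, not the construction used in the proofs cited above.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def tent(x):
    # One narrow ReLU block computing the tent map 2*min(x, 1-x) on [0, 1]:
    # three ReLU units feeding a fixed linear output layer.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

# Stacking `depth` copies of this block gives a width-3 ReLU network whose
# output on [0, 1] is a sawtooth with 2**depth linear pieces.
depth = 4
x = np.linspace(0.0, 1.0, 17)   # grid at multiples of 1/16
y = x.copy()
for _ in range(depth):
    y = tent(y)
# At multiples of 1/16 the 4-fold composition equals 0 at even multiples
# and 1 at odd multiples, i.e. it oscillates across the whole domain.
```

Representing the same sawtooth with a single hidden layer would require a number of units proportional to the number of linear pieces, i.e. exponential in the depth used here.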

Together with the preceding results, the central theorem of Kidger and Lyons (2020) yields the following universal approximation theorem for networks with bounded width.

Universal approximation theorem (non-affine activation, arbitrary depth, constrained width). Let ${\mathcal {X}}$ be a compact subset of $\mathbb {R} ^{d}$ . Let $\sigma :\mathbb {R} \to \mathbb {R}$ be any non-affine continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let ${\mathcal {N}}_{d,D:d+D+2}^{\sigma }$ denote the space of feed-forward neural networks with $d$ input neurons, $D$ output neurons, and an arbitrary number of hidden layers each with $d+D+2$ neurons, such that every hidden neuron has activation function $\sigma$ and every output neuron has the identity as its activation function, with input layer $\phi$ , and output layer $\rho$ . Then given any $\varepsilon >0$ and any $f\in C({\mathcal {X}},\mathbb {R} ^{D})$ , there exists ${\hat {f}}\in {\mathcal {N}}_{d,D:d+D+2}^{\sigma }$ such that

$\sup _{x\in {\mathcal {X}}}\,\left\|{\hat {f}}(x)-f(x)\right\|<\varepsilon .$ In other words, ${\mathcal {N}}_{d,D:d+D+2}^{\sigma }$ is dense in $C({\mathcal {X}},\mathbb {R} ^{D})$ with respect to the topology of uniform convergence.

Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions.

Graph input

Achieving useful universal function approximation on graphs (or rather on graph isomorphism classes) has been a longstanding problem. The popular graph convolutional neural networks (GCNs or GNNs) can be made as discriminative as the Weisfeiler–Leman graph isomorphism test. In 2020, a universal approximation theorem result was established by Brüel-Gabrielsson, showing that graph representation with certain injective properties is sufficient for universal function approximation on bounded graphs and restricted universal function approximation on unbounded graphs, with an accompanying $O(\#\text{edges}\times \#\text{nodes})$-runtime method that achieved state-of-the-art performance on a collection of benchmarks.
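The Weisfeiler–Leman test mentioned above can be sketched in a few lines: each node's colour is iteratively refined to the pair (own colour, multiset of neighbour colours), and two graphs with different final colour multisets are certainly non-isomorphic, while equal multisets are inconclusive. The adjacency-list encoding and the example graphs below are illustrative assumptions; this is the 1-dimensional test that bounds the discriminative power of standard GNNs, not Brüel-Gabrielsson's method.

```python
def wl_refine(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman colour refinement (sketch).

    adj: adjacency list of an undirected graph. Colours are kept as nested
    tuples (canonical signatures), so the returned sorted multiset can be
    compared across graphs directly.
    """
    colors = [()] * len(adj)
    for _ in range(rounds):
        # New colour = (old colour, sorted multiset of neighbour colours).
        colors = [(colors[i], tuple(sorted(colors[j] for j in adj[i])))
                  for i in range(len(adj))]
    return sorted(colors)

# Path on 4 nodes vs. star on 4 nodes: distinguished (degrees already differ).
path4 = [[1], [0, 2], [1, 3], [2]]
star4 = [[1, 2, 3], [0], [0], [0]]

# 6-cycle vs. two disjoint triangles: NOT distinguished even though the
# graphs are non-isomorphic -- every node is degree-2 and locally identical,
# illustrating the test's (and hence plain GNNs') limitation.
cycle6 = [[1, 5], [0, 2], [1, 3], [2, 4], [3, 5], [4, 0]]
two_triangles = [[1, 2], [0, 2], [0, 1], [4, 5], [3, 5], [3, 4]]
```

Sum-style injective aggregations in GNNs are exactly what lets a network match this refinement step; architectures whose aggregation loses multiset information fall strictly below it.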