# Uniform convergence

In the mathematical field of analysis, uniform convergence is a type of convergence stronger than pointwise convergence. A sequence of functions ${\displaystyle (f_{n})}$ converges uniformly to a limiting function ${\displaystyle f}$ on a set ${\displaystyle E}$ (a metric space, i.e., a set on which a distance function is defined) if, given any arbitrarily small positive number ${\displaystyle \epsilon }$, an ${\displaystyle N}$ can be found such that each of the functions ${\displaystyle f_{N},f_{N+1},f_{N+2},\ldots }$ differs from ${\displaystyle f}$ by no more than ${\displaystyle \epsilon }$ at every point ${\displaystyle x}$ in ${\displaystyle E}$. Loosely speaking, this means that ${\displaystyle f_{n}}$ converges to ${\displaystyle f}$ at a "uniform" speed on its entire domain, independent of ${\displaystyle x}$.

Formulated more precisely, we say that ${\displaystyle f_{n}}$ converges to ${\displaystyle f}$ uniformly on ${\displaystyle E}$ if, given ${\displaystyle \epsilon >0}$, there can be found an ${\displaystyle N(\epsilon )}$ independent of ${\displaystyle x}$, such that ${\displaystyle d(f_{n}(x),f(x))<\epsilon }$ for all ${\displaystyle x\in E}$ whenever ${\displaystyle n\geq N(\epsilon )}$. In contrast, we say that ${\displaystyle f_{n}}$ converges to ${\displaystyle f}$ pointwise, if there exists an ${\displaystyle N(\epsilon ,x)}$, dependent on both ${\displaystyle \epsilon >0}$ and ${\displaystyle x\in E}$, such that ${\displaystyle d(f_{n}(x),f(x))<\epsilon }$ when ${\displaystyle n\geq N(\epsilon ,x)}$ (where ${\displaystyle d}$ is the distance function given by the metric space). It is clear from these definitions that uniform convergence of ${\displaystyle f_{n}}$ to ${\displaystyle f}$ on ${\displaystyle E}$ implies pointwise convergence for every ${\displaystyle x\in E}$.

The notation for uniform convergence of ${\displaystyle f_{n}}$ to ${\displaystyle f}$ is not quite standardized and different authors have used a variety of symbols including (in roughly increasing order of popularity) ${\displaystyle f_{n}\rightrightarrows f}$, ${\displaystyle {\underset {n\to \infty }{\mathrm {unif\ lim} }}f_{n}=f}$, ${\displaystyle f_{n}{\overset {\mathrm {unif.} }{\longrightarrow }}f}$. Frequently, no special symbol is used, and authors simply write ${\displaystyle f_{n}\to f\ \mathrm {uniformly} }$.

The difference between the two types of convergence was not fully appreciated early in the history of calculus, leading to instances of faulty reasoning. The concept, which was first formalized by Weierstrass, is important because several properties of the functions ${\displaystyle f_{n}}$, such as continuity, Riemann integrability, and, with additional hypotheses, differentiability, are transferred to the limit ${\displaystyle f}$ if the convergence is uniform, but not necessarily if the convergence is not uniform.

Uniform convergence to a function on a given interval can be defined in terms of the uniform norm.

## History

In 1821 Augustin-Louis Cauchy published a proof that a convergent sum of continuous functions is always continuous, to which Niels Henrik Abel in 1826 found purported counterexamples in the context of Fourier series, arguing that Cauchy's proof had to be incorrect. Completely standard notions of convergence did not exist at the time, and Cauchy handled convergence using infinitesimal methods. When put into the modern language, what Cauchy proved is that a uniformly convergent sequence of continuous functions has a continuous limit. The failure of a merely pointwise-convergent limit of continuous functions to converge to a continuous function illustrates the importance of distinguishing between different types of convergence when handling sequences of functions.[1]

The term uniform convergence was probably first used by Christoph Gudermann, in an 1838 paper on elliptic functions, where he employed the phrase "convergence in a uniform way" when the "mode of convergence" of a series ${\displaystyle \textstyle {\sum _{n=1}^{\infty }f_{n}(x,\phi ,\psi )}}$ is independent of the variables ${\displaystyle \phi }$ and ${\displaystyle \psi .}$ While he thought it a "remarkable fact" when a series converged in this way, he did not give a formal definition, nor use the property in any of his proofs.[2]

Later Gudermann's pupil Karl Weierstrass, who attended his course on elliptic functions in 1839–1840, coined the term gleichmäßig konvergent (German: uniformly convergent) which he used in his 1841 paper Zur Theorie der Potenzreihen, published in 1894. Independently, similar concepts were articulated by Philipp Ludwig von Seidel[3] and George Gabriel Stokes. G. H. Hardy compares the three definitions in his paper "Sir George Stokes and the concept of uniform convergence" and remarks: "Weierstrass's discovery was the earliest, and he alone fully realized its far-reaching importance as one of the fundamental ideas of analysis."

Under the influence of Weierstrass and Bernhard Riemann this concept and related questions were intensely studied at the end of the 19th century by Hermann Hankel, Paul du Bois-Reymond, Ulisse Dini, Cesare Arzelà and others.

## Definition

Suppose ${\displaystyle E}$ is a set and ${\displaystyle f_{n}:E\to \mathbb {R} }$ (${\displaystyle n=1,2,3,\ldots }$) are real-valued functions. We say that the sequence ${\displaystyle (f_{n})_{n\in \mathbb {N} }}$ is uniformly convergent with limit ${\displaystyle f:E\to \mathbb {R} }$ on ${\displaystyle E}$ if for every ${\displaystyle \epsilon >0}$, there exists a natural number ${\displaystyle N}$ such that for all ${\displaystyle x\in E}$ and all ${\displaystyle n\geq N}$ we have ${\displaystyle |f_{n}(x)-f(x)|<\epsilon }$. Equivalently, ${\displaystyle (f_{n})_{n\in \mathbb {N} }}$ converges uniformly on ${\displaystyle E}$ in the previous sense if and only if for every ${\displaystyle \epsilon >0}$, there exists a natural number ${\displaystyle N}$ such that ${\displaystyle m\geq N,n\geq N,x\in E\implies |f_{m}(x)-f_{n}(x)|<\epsilon }$. This is the Cauchy criterion for uniform convergence.

In another equivalent formulation, if we define ${\displaystyle a_{n}=\sup _{x\in E}|f_{n}(x)-f(x)|}$, then ${\displaystyle f_{n}}$ converges to ${\displaystyle f}$ uniformly if and only if ${\displaystyle a_{n}\to 0}$ as ${\displaystyle n\to \infty }$. This last statement can be restated as uniform convergence of ${\displaystyle (f_{n})_{n\in \mathbb {N} }}$ on ${\displaystyle E}$ being equivalent to convergence of the sequence in the function space ${\displaystyle \mathbb {R} ^{E}}$ with respect to the uniform metric (also called the supremum metric), defined by ${\displaystyle d(f,g)=\sup _{x\in E}|f(x)-g(x)|}$.
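The supremum criterion can be sketched numerically. The following Python snippet (an illustration of our own, not part of any standard library) estimates ${\displaystyle a_{n}=\sup _{x\in E}|f_{n}(x)-f(x)|}$ on a finite grid; a grid maximum only approximates the true supremum, and the example sequence is a hypothetical choice.

```python
import math

def sup_distance(f_n, f, xs):
    """Grid estimate of a_n = sup_{x in E} |f_n(x) - f(x)|."""
    return max(abs(f_n(x) - f(x)) for x in xs)

# Example: f_n(x) = sqrt(x^2 + 1/n^2) converges uniformly to f(x) = |x|,
# since |f_n(x) - f(x)| <= 1/n for every x, with equality at x = 0.
xs = [k / 500.0 for k in range(-5000, 5001)]  # grid on [-10, 10], includes 0
for n in (1, 10, 100, 1000):
    a_n = sup_distance(lambda x, n=n: math.sqrt(x * x + 1.0 / (n * n)), abs, xs)
    print(n, a_n)  # a_n equals 1/n up to rounding, so a_n -> 0
```

Since ${\displaystyle a_{n}\to 0}$, the convergence here is uniform; for a merely pointwise-convergent sequence the same estimate would fail to shrink.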

The sequence ${\displaystyle (f_{n})_{n\in \mathbb {N} }}$ is said to be locally uniformly convergent with limit ${\displaystyle f}$ if ${\displaystyle E}$ is a metric space and for every ${\displaystyle x}$ in ${\displaystyle E}$, there exists an ${\displaystyle r>0}$ such that ${\displaystyle (f_{n})}$ converges uniformly on ${\displaystyle B(x,r)\cap E}$. It is easy to see that local uniform convergence implies pointwise convergence. It is also clear that uniform convergence implies local uniform convergence.

### Notes

Intuitively, a sequence of functions ${\displaystyle f_{n}}$ converges uniformly to ${\displaystyle f}$ if, given an arbitrarily small ${\displaystyle \epsilon >0}$, we can find an ${\displaystyle N\in \mathbb {N} }$ so that the functions ${\displaystyle f_{n}\ (n\geq N)}$ all fall within a "tube" of width ${\displaystyle 2\epsilon }$ centered around ${\displaystyle f}$ (i.e., between ${\displaystyle f(x)-\epsilon }$ and ${\displaystyle f(x)+\epsilon }$) for the entire domain of the function.

Note that interchanging the order of "there exists ${\displaystyle N}$" and "for all ${\displaystyle x}$" in the definition above results in a statement equivalent to the pointwise convergence of the sequence. That notion can be defined as follows: the sequence ${\displaystyle (f_{n})}$ converges pointwise with limit ${\displaystyle f}$ if and only if

for every ${\displaystyle x\in E}$ and every ${\displaystyle \epsilon >0}$, there exists a natural number ${\displaystyle N}$ such that for all ${\displaystyle n\geq N}$ one has ${\displaystyle |f_{n}(x)-f(x)|<\epsilon }$.

In explicit terms, in the case of uniform convergence, ${\displaystyle N}$ can only depend on ${\displaystyle \epsilon }$, while in the case of pointwise convergence, ${\displaystyle N}$ may depend on both ${\displaystyle \epsilon }$ and ${\displaystyle x}$. It is therefore plain that uniform convergence implies pointwise convergence. The converse is not true, as the example in the section below illustrates.

### Generalizations

One may straightforwardly extend the concept to functions ${\displaystyle f_{n}:S\to M}$, where ${\displaystyle (M,d)}$ is a metric space, by replacing ${\displaystyle |f_{n}(x)-f(x)|}$ with ${\displaystyle d(f_{n}(x),f(x))}$.

The most general setting is the uniform convergence of nets of functions ${\displaystyle f_{\alpha }:S\to X}$, where ${\displaystyle X}$ is a uniform space. We say that the net ${\displaystyle (f_{\alpha })}$ converges uniformly with limit ${\displaystyle f:S\to X}$ if and only if

for every entourage ${\displaystyle V}$ in ${\displaystyle X}$, there exists an ${\displaystyle \alpha _{0}}$ such that for every ${\displaystyle x}$ in ${\displaystyle S}$ and every ${\displaystyle \alpha \geq \alpha _{0}}$: ${\displaystyle (f_{\alpha }(x),f(x))}$ is in ${\displaystyle V}$.

The above-mentioned theorem, stating that the uniform limit of continuous functions is continuous, remains correct in these settings.

### Definition in a hyperreal setting

Uniform convergence admits a simplified definition in a hyperreal setting. Thus, a sequence ${\displaystyle f_{n}}$ converges to f uniformly if for all x in the domain of f* and all infinite n, ${\displaystyle f_{n}^{*}(x)}$ is infinitely close to ${\displaystyle f^{*}(x)}$ (see microcontinuity for a similar definition of uniform continuity).

## Examples

Given a topological space X, we can equip the space of bounded real or complex-valued functions over X with the uniform norm topology, with the uniform metric defined by ${\displaystyle d(f,g)=||f-g||_{\infty }=\sup _{x\in X}|f(x)-g(x)|}$. Then uniform convergence simply means convergence in the uniform norm topology: ${\displaystyle \lim _{n\to \infty }||f_{n}-f||_{\infty }=0}$.

The sequence of functions ${\displaystyle (f_{n})}$ with ${\displaystyle f_{n}:[0,1]\rightarrow [0,1]}$ defined by ${\displaystyle f_{n}(x)=x^{n}}$ is a classic example of a sequence of functions that converges to a function ${\displaystyle f}$ pointwise but not uniformly. To show this, we first observe that the pointwise limit of ${\displaystyle (f_{n})}$ as ${\displaystyle n\to \infty }$ is the function ${\displaystyle f}$, given by

${\displaystyle f(x)=\lim _{n\rightarrow \infty }f_{n}(x)={\begin{cases}0,&x\in [0,1);\\1,&x=1.\end{cases}}}$

Pointwise convergence: Convergence is trivial for ${\displaystyle x=0}$ and ${\displaystyle x=1}$, since ${\displaystyle f_{n}(0)=f(0)=0}$ and ${\displaystyle f_{n}(1)=f(1)=1}$ for all ${\displaystyle n}$. For ${\displaystyle x\in (0,1)}$ and given ${\displaystyle \epsilon >0}$, we can ensure that ${\displaystyle |f_{n}(x)-f(x)|<\epsilon }$ whenever ${\displaystyle n\geq N}$ by choosing ${\displaystyle N=\lceil \log \epsilon /\log x\rceil }$ (where ${\displaystyle \lceil \cdot \rceil }$ denotes the ceiling function, i.e., rounding up). Hence, ${\displaystyle f_{n}\to f}$ pointwise for all ${\displaystyle x\in [0,1]}$. Note that the choice of ${\displaystyle N}$ depends on the value of ${\displaystyle \epsilon }$ and ${\displaystyle x}$. Moreover, for a fixed choice of ${\displaystyle \epsilon }$, ${\displaystyle N}$ (which cannot be defined to be smaller) grows without bound as ${\displaystyle x}$ approaches 1. These observations preclude the possibility of uniform convergence.

Non-uniformity of convergence: The convergence is not uniform, because given ${\displaystyle \epsilon >0}$, no single choice of ${\displaystyle N}$ can ensure that ${\displaystyle |f_{n}(x)-f(x)|<\epsilon }$ for all ${\displaystyle x\in [0,1]}$, whenever ${\displaystyle n\geq N}$. To see this, we note that regardless of how large ${\displaystyle n}$ becomes, there is always an ${\displaystyle x_{0}\in [0,1)}$ such that ${\displaystyle f_{n}(x_{0})=1/2}$ (or any other positive value less than 1). Thus, if we choose ${\displaystyle \epsilon =1/4}$, we can never find an ${\displaystyle N}$ such that ${\displaystyle |f_{n}(x)-f(x)|<\epsilon }$ for all ${\displaystyle x\in [0,1]}$ and ${\displaystyle n\geq N}$. Explicitly, given any candidate for ${\displaystyle N}$, consider the value of ${\displaystyle f_{N}}$ at ${\displaystyle x_{0}=(1/2)^{1/N}}$. Since ${\displaystyle |f_{N}(x_{0})-f(x_{0})|={\Big |}[(1/2)^{1/N}]^{N}-0{\Big |}=1/2>\epsilon }$, we have found an example of an ${\displaystyle x\in [0,1]}$ that "escaped" our attempt to "confine" each ${\displaystyle f_{n}\ (n\geq N)}$ to within ${\displaystyle \epsilon }$ of ${\displaystyle f}$ for all ${\displaystyle x\in [0,1]}$. In fact, it is easy to see that ${\displaystyle \lim _{n\to \infty }||f_{n}-f||_{\infty }=1}$, contrary to the requirement that ${\displaystyle ||f_{n}-f||_{\infty }\to 0}$ if ${\displaystyle f_{n}\rightrightarrows f}$.
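The escape argument can be checked numerically. This short Python sketch (our own illustration) evaluates ${\displaystyle f_{n}}$ at the escaping point ${\displaystyle x_{0}=(1/2)^{1/n}}$:

```python
# For f_n(x) = x^n on [0, 1], the pointwise limit f vanishes on [0, 1).
# At x_0 = (1/2)^(1/n) the gap |f_n(x_0) - f(x_0)| is exactly 1/2 for every n,
# so sup_x |f_n(x) - f(x)| >= 1/2 no matter how large n is: not uniform.
for n in (1, 10, 100, 10000):
    x0 = 0.5 ** (1.0 / n)      # lies in [1/2, 1), approaches 1 as n grows
    gap = x0 ** n              # f(x0) = 0, so the gap is f_n(x0) itself
    print(n, x0, gap)          # gap stays at 1/2, up to rounding
```

However large ${\displaystyle n}$ is taken, the gap never drops below ${\displaystyle 1/2}$, exactly as the argument above shows.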

In this example one can easily see that pointwise convergence does not preserve differentiability or continuity. While each function of the sequence is smooth, that is to say that for all n, ${\displaystyle f_{n}\in C^{\infty }([0,1])}$, the limit ${\displaystyle \lim _{n\rightarrow \infty }f_{n}}$ is not even continuous.

### Exponential function

The series expansion of the exponential function can be shown to be uniformly convergent on any bounded subset S of ${\displaystyle \mathbb {C} }$ using the Weierstrass M-test.

Here is the series:

${\displaystyle \sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.}$

Any bounded subset is a subset of some disc ${\displaystyle D_{R}}$ of radius R, centered on the origin in the complex plane. The Weierstrass M-test requires us to find an upper bound ${\displaystyle M_{n}}$ on the terms of the series, with ${\displaystyle M_{n}}$ independent of the position in the disc:

${\displaystyle \left|{\frac {z^{n}}{n!}}\right|\leq M_{n},\forall z\in D_{R}.}$

To do this, we notice

${\displaystyle \left|{\frac {z^{n}}{n!}}\right|\leq {\frac {\left|z\right|^{n}}{n!}}\leq {\frac {R^{n}}{n!}}}$

and take ${\displaystyle M_{n}={\frac {R^{n}}{n!}}}$.

If ${\displaystyle \sum _{n=0}^{\infty }M_{n}}$ is convergent, then the M-test asserts that the original series is uniformly convergent.

The ratio test can be used here:

${\displaystyle \lim _{n\to \infty }{\frac {M_{n+1}}{M_{n}}}=\lim _{n\to \infty }{\frac {R^{n+1}}{R^{n}}}{\frac {n!}{(n+1)!}}=\lim _{n\to \infty }{\frac {R}{n+1}}=0}$

which means the series over ${\displaystyle M_{n}}$ is convergent. Thus the original series converges uniformly for all ${\displaystyle z\in D_{R}}$, and since ${\displaystyle S\subset D_{R}}$, the series is also uniformly convergent on S.
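A numerical sanity check of the M-test bound can be run as follows (an illustrative sketch; the radius and sample points are arbitrary choices of our own):

```python
import cmath
import math

R = 3.0  # radius of the disc D_R; any positive value works the same way
# Sample points of D_R: radii R*t/10 in eight directions around the circle.
samples = [R * (t / 10.0) * cmath.exp(2j * math.pi * k / 8)
           for t in range(11) for k in range(8)]

# The bound |z^n / n!| <= M_n = R^n / n! holds at every sample point ...
for n in range(20):
    M_n = R ** n / math.factorial(n)
    assert all(abs(z ** n / math.factorial(n)) <= M_n + 1e-12 for z in samples)

# ... and the majorant series sum M_n converges (its value is e^R),
# so the M-test gives uniform convergence of sum z^n / n! on D_R.
partial = sum(R ** n / math.factorial(n) for n in range(60))
print(partial, math.exp(R))  # the two values agree closely
```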

## Properties

• Every uniformly convergent sequence is locally uniformly convergent.
• Every locally uniformly convergent sequence is compactly convergent.
• For locally compact spaces local uniform convergence and compact convergence coincide.
• A sequence of continuous functions on metric spaces, with the image metric space being complete, is uniformly convergent if and only if it is uniformly Cauchy.
• If ${\displaystyle S}$ is a compact interval (or in general a compact topological space), and ${\displaystyle (f_{n})}$ is a monotone increasing sequence (meaning ${\displaystyle f_{n}(x)\leq f_{n+1}(x)}$ for all n and x) of continuous functions with a pointwise limit ${\displaystyle f}$ which is also continuous, then the convergence is necessarily uniform (Dini's theorem). Uniform convergence is also guaranteed if ${\displaystyle S}$ is a compact interval and ${\displaystyle (f_{n})}$ is an equicontinuous sequence that converges pointwise.
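Dini's theorem can be illustrated with an example of our own choosing: the continuous functions ${\displaystyle f_{n}(x)=(1+x/n)^{n}}$ increase monotonically in ${\displaystyle n}$ on the compact interval ${\displaystyle [0,1]}$ to the continuous limit ${\displaystyle e^{x}}$, so the convergence must be uniform. A grid estimate of the sup-gap in Python confirms this:

```python
import math

# Dini's theorem in action: f_n(x) = (1 + x/n)^n is continuous, increases
# in n, and converges pointwise on [0, 1] to the continuous function e^x.
# Dini then guarantees uniform convergence; the sup-gap shrinks to 0.
xs = [k / 1000.0 for k in range(1001)]  # grid on [0, 1]
gaps = []
for n in (1, 10, 100, 1000):
    gap = max(math.exp(x) - (1 + x / n) ** n for x in xs)
    gaps.append(gap)
    print(n, gap)  # decreasing toward 0
```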

## Applications

### To continuity

A counterexample to a strengthening of the uniform convergence theorem, in which pointwise convergence rather than uniform convergence is assumed: the continuous functions ${\displaystyle \sin ^{n}(x)}$ converge pointwise to a non-continuous limit. This can happen only if the convergence is not uniform.

If ${\displaystyle I}$ is a real interval (or indeed any topological space), we can talk about the continuity of the functions ${\displaystyle f_{n}}$ and ${\displaystyle f}$. The following is the most important result about uniform convergence:

Uniform convergence theorem. If ${\displaystyle (f_{n})}$ is a sequence of continuous functions, all defined on the interval ${\displaystyle I}$, that converges uniformly to a function ${\displaystyle f}$ on ${\displaystyle I}$, then ${\displaystyle f}$ is continuous on ${\displaystyle I}$ as well.

This theorem is proved by the "ε/3 trick", and is the archetypal example of this trick: to prove a given inequality (ε), one uses the definitions of continuity and uniform convergence to produce three inequalities (each ε/3), and then combines them via the triangle inequality to produce the desired inequality.
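In outline, the three ${\displaystyle \epsilon /3}$ inequalities combine as follows: given ${\displaystyle \epsilon >0}$, choose ${\displaystyle n}$ by uniform convergence so that ${\displaystyle |f_{n}(x)-f(x)|<\epsilon /3}$ for all ${\displaystyle x\in I}$, then choose ${\displaystyle \delta >0}$ by continuity of ${\displaystyle f_{n}}$ at ${\displaystyle x_{0}}$; for all ${\displaystyle x}$ with ${\displaystyle |x-x_{0}|<\delta }$,

${\displaystyle |f(x)-f(x_{0})|\leq |f(x)-f_{n}(x)|+|f_{n}(x)-f_{n}(x_{0})|+|f_{n}(x_{0})-f(x_{0})|<{\frac {\epsilon }{3}}+{\frac {\epsilon }{3}}+{\frac {\epsilon }{3}}=\epsilon .}$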

This theorem is important, since pointwise convergence of continuous functions is not enough to guarantee continuity of the limit function, as the counterexample above illustrates.

More precisely, this theorem states that the uniform limit of uniformly continuous functions is uniformly continuous; for a locally compact space, continuity is equivalent to local uniform continuity, and thus the uniform limit of continuous functions is continuous.

### To differentiability

If ${\displaystyle S}$ is an interval and all the functions ${\displaystyle f_{n}}$ are differentiable and converge to a limit ${\displaystyle f}$, it is often desirable to determine the derivative function ${\displaystyle f'}$ by taking the limit of the sequence ${\displaystyle f'_{n}}$. This is however in general not possible: even if the convergence is uniform, the limit function need not be differentiable (not even if the sequence consists of everywhere-analytic functions, see Weierstrass function), and even if it is differentiable, the derivative of the limit function need not be equal to the limit of the derivatives. Consider for instance ${\displaystyle f_{n}(x)=n^{-1/2}{\sin(nx)}}$ with uniform limit ${\displaystyle f_{n}\rightrightarrows f\equiv 0}$. Clearly, ${\displaystyle f'}$ is also identically zero. However, the derivatives of the sequence of functions are given by ${\displaystyle f'_{n}(x)=n^{1/2}\cos nx}$, and the sequence ${\displaystyle f'_{n}}$ does not converge to ${\displaystyle f'}$, or indeed to any function at all. To ensure a connection between the limit of a sequence of differentiable functions and the limit of the sequence of derivatives, uniform convergence of the sequence of derivatives plus convergence of the sequence of functions at least at one point is required. The precise statement covering this situation is as follows:[4]

If ${\displaystyle (f_{n})}$ is a sequence of differentiable functions on ${\displaystyle [a,b]}$ such that ${\displaystyle \lim _{n\to \infty }f_{n}(x_{0})}$ exists (and is finite) for some ${\displaystyle x_{0}\in [a,b]}$ and the sequence ${\displaystyle (f'_{n})}$ converges uniformly on ${\displaystyle [a,b]}$, then ${\displaystyle f_{n}}$ converges uniformly to a function ${\displaystyle f}$ on ${\displaystyle [a,b]}$, and ${\displaystyle f'(x)=\lim _{n\to \infty }f'_{n}(x)}$ for ${\displaystyle x\in [a,b]}$.
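The counterexample ${\displaystyle f_{n}(x)=n^{-1/2}\sin(nx)}$ discussed above can be checked numerically (a sketch of our own):

```python
import math

# Check the counterexample f_n(x) = sin(nx) / sqrt(n): it converges uniformly
# to f = 0 because sup |f_n| = 1/sqrt(n), yet f_n'(x) = sqrt(n) cos(nx)
# is unbounded as n grows, for example at x = 0 where f_n'(0) = sqrt(n).
for n in (1, 100, 10000):
    sup_fn = 1.0 / math.sqrt(n)              # sup over R of |f_n|
    deriv_at_zero = math.sqrt(n) * math.cos(0.0)
    print(n, sup_fn, deriv_at_zero)
# sup |f_n| -> 0 (uniform convergence) while f_n'(0) = sqrt(n) -> infinity.
```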

### To integrability

Similarly, one often wants to exchange integrals and limit processes. For the Riemann integral, this can be done if uniform convergence is assumed:

If ${\displaystyle (f_{n})_{n=1}^{\infty }}$ is a sequence of Riemann integrable functions defined on a compact interval I which uniformly converge with limit ${\displaystyle f}$, then ${\displaystyle f}$ is Riemann integrable and its integral can be computed as the limit of the integrals of the ${\displaystyle f_{n}}$:
${\displaystyle \int _{I}f=\lim _{n\to \infty }\int _{I}f_{n}.}$

In fact, for a uniformly convergent family of bounded functions on an interval, the upper and lower Riemann integrals converge to the upper and lower Riemann integrals of the limit function. This follows because, for n sufficiently large, the graph of ${\displaystyle f_{n}}$ is within ε of the graph of f, and so the upper sum and lower sum of ${\displaystyle f_{n}}$ are each within ${\displaystyle \varepsilon |I|}$ of the value of the upper and lower sums of ${\displaystyle f}$, respectively.
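The theorem can be illustrated with a uniformly convergent example of our own choosing: ${\displaystyle f_{n}(x)=x^{n}/n}$ on ${\displaystyle [0,1]}$ satisfies ${\displaystyle \sup |f_{n}|=1/n\to 0}$, so the limit is ${\displaystyle f\equiv 0}$, and the integrals ${\displaystyle \int _{0}^{1}x^{n}/n\,dx=1/(n(n+1))}$ must tend to ${\displaystyle \int _{0}^{1}f=0}$.

```python
def riemann_sum(g, a, b, steps=100_000):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / steps
    return sum(g(a + (k + 0.5) * h) for k in range(steps)) * h

# f_n(x) = x^n / n converges uniformly to 0 on [0, 1] (sup |f_n| = 1/n),
# so the integrals must converge to 0; indeed they equal 1 / (n (n + 1)).
for n in (1, 5, 25):
    approx = riemann_sum(lambda x, n=n: x ** n / n, 0.0, 1.0)
    exact = 1.0 / (n * (n + 1))
    print(n, approx, exact)  # numerical and exact values agree
```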

Much stronger theorems in this respect, which require not much more than pointwise convergence, can be obtained if one abandons the Riemann integral and uses the Lebesgue integral instead.

### To analyticity

If a sequence of analytic functions converges uniformly in a region S of the complex plane, then the limit is analytic in S. This shows that complex functions are better behaved than real functions in this respect, since the uniform limit of analytic functions on a real interval need not even be differentiable (see Weierstrass function).

### To series

We say that ${\displaystyle \textstyle \sum _{n=1}^{\infty }f_{n}}$ converges:

i) pointwise on E if and only if the sequence of partial sums ${\displaystyle s_{n}(x)=\sum _{j=1}^{n}f_{j}(x)}$ converges for every ${\displaystyle x\in E}$.

ii) uniformly on E if and only if ${\displaystyle s_{n}}$ converges uniformly as ${\displaystyle n\to \infty }$.

iii) absolutely on E if and only if ${\displaystyle \textstyle \sum _{n=1}^{\infty }|f_{n}|}$ converges for every ${\displaystyle x\in E}$.

With this definition comes the following result:

Let x0 be contained in the set E and each fn be continuous at x0. If ${\displaystyle \textstyle f=\sum _{n=1}^{\infty }f_{n}}$ converges uniformly on E, then f is continuous at x0 in E. Suppose that ${\displaystyle E=[a,b]}$ and each fn is integrable on E. If ${\displaystyle \textstyle \sum _{n=1}^{\infty }f_{n}}$ converges uniformly on E, then f is integrable on E and the series of integrals of the fn equals the integral of the series.

## Almost uniform convergence

If the domain of the functions is a measure space E then the related notion of almost uniform convergence can be defined. We say a sequence of functions ${\displaystyle (f_{n})}$ converges almost uniformly on E if for every ${\displaystyle \delta >0}$ there exists a measurable set ${\displaystyle E_{\delta }}$ with measure less than ${\displaystyle \delta }$ such that the sequence of functions ${\displaystyle (f_{n})}$ converges uniformly on ${\displaystyle E\setminus E_{\delta }}$. In other words, almost uniform convergence means there are sets of arbitrarily small measure for which the sequence of functions converges uniformly on their complement.

Note that almost uniform convergence of a sequence does not mean that the sequence converges uniformly almost everywhere as might be inferred from the name. However, Egorov's theorem does guarantee that on a finite measure space, a sequence of functions that converges almost everywhere also converges almost uniformly on the same set.
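Almost uniform convergence can be made concrete with the earlier example ${\displaystyle f_{n}(x)=x^{n}}$ (a sketch of our own): on ${\displaystyle [0,1]}$ with Lebesgue measure, the sequence converges pointwise but not uniformly; removing the small set ${\displaystyle E_{\delta }=(1-\delta ,1)}$ restores uniform convergence on the remainder.

```python
# Almost uniform convergence of f_n(x) = x^n on [0, 1]: remove the set
# E_delta = (1 - delta, 1), of measure delta.  On [0, 1 - delta] the limit
# is 0 and sup |f_n - 0| = (1 - delta)^n, which tends to 0, so the
# convergence is uniform off E_delta (at x = 1 itself it is trivial).
delta = 0.01
for n in (10, 100, 1000):
    sup_gap = (1 - delta) ** n
    print(n, sup_gap)
# For every fixed delta > 0, however small, sup_gap -> 0 as n -> infinity.
```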

Almost uniform convergence implies almost everywhere convergence and convergence in measure.