# Gibbs phenomenon

In mathematics, the Gibbs phenomenon, discovered by Henry Wilbraham (1848)[1] and rediscovered by J. Willard Gibbs (1899),[2] is the peculiar manner in which the Fourier series of a piecewise continuously differentiable periodic function behaves at a jump discontinuity. The nth partial sum of the Fourier series has large oscillations near the jump, which might increase the maximum of the partial sum above that of the function itself. The overshoot does not die out as the frequency increases, but approaches a finite limit.[3] This sort of behavior was also observed by experimental physicists, but was believed to be due to imperfections in the measuring apparatuses.[4]

This behavior is one cause of ringing artifacts in signal processing.

## Description

Functional approximations of a square wave using 5, 25, and 125 harmonics (three figures).

The Gibbs phenomenon involves both the fact that Fourier sums overshoot at a jump discontinuity, and that this overshoot does not die out as the frequency increases.

The three pictures on the right demonstrate the phenomenon for a square wave (of height $\pi/4$) whose Fourier expansion is

$\sin(x)+\frac{1}{3}\sin(3x)+\frac{1}{5}\sin(5x)+\dotsb.$

More precisely, this is the function f which equals $\pi/4$ between $2n\pi$ and $(2n+1)\pi$ and $-\pi/4$ between $(2n+1)\pi$ and $(2n+2)\pi$ for every integer n; thus this square wave has a jump discontinuity of height $\pi/2$ at every integer multiple of $\pi$.

As can be seen, as the number of terms rises, the error of the approximation is reduced in width and energy, but converges to a fixed height. A calculation for the square wave (see Zygmund, chap. 8.5., or the computations at the end of this article) gives an explicit formula for the limit of the height of the error. It turns out that the Fourier series exceeds the height $\pi/4$ of the square wave by

$\frac{1}{2}\int_0^\pi \frac{\sin t}{t}\, dt - \frac{\pi}{4} = \frac{\pi}{2}\cdot (0.089490\dots)$

or about 9 percent. More generally, at any jump point of a piecewise continuously differentiable function with a jump of a, the nth partial Fourier series will (for n very large) overshoot this jump by approximately $a \cdot (0.089490\dots)$ at one end and undershoot it by the same amount at the other end; thus the "jump" in the partial Fourier series will be about 18% larger than the jump in the original function. At the location of the discontinuity itself, the partial Fourier series will converge to the midpoint of the jump (regardless of what the actual value of the original function is at this point). The quantity

$\int_0^\pi \frac{\sin t}{t}\ dt = (1.851937052\dots) = \frac{\pi}{2} + \pi \cdot (0.089490\dots)$

is sometimes known as the Wilbraham–Gibbs constant.
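The constant can be checked numerically. The following sketch (plain Python; the variable names are illustrative) uses midpoint-rule quadrature to approximate $\int_0^\pi \frac{\sin t}{t}\,dt$ and the resulting overshoot fraction:

```python
import math

n = 200000                    # midpoint-rule subintervals on [0, pi]
h = math.pi / n
# Si(pi) = integral of sin(t)/t from 0 to pi (the Wilbraham-Gibbs constant)
si = sum(math.sin((k + 0.5) * h) / ((k + 0.5) * h) for k in range(n)) * h
# fraction of the jump by which the partial sums overshoot
overshoot_fraction = (si - math.pi / 2) / math.pi
print(round(si, 6), round(overshoot_fraction, 6))  # ~1.851937 and ~0.089490
```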

### History

The Gibbs phenomenon was first noticed and analyzed by the obscure Henry Wilbraham.[1] He published a paper on it in 1848 that went unnoticed by the mathematical world.[5]

Albert A. Michelson developed a device in 1898 that could compute and re-synthesize the Fourier series. A widespread myth says that when the Fourier coefficients for a square wave were input to the machine, the graph would oscillate at the discontinuities, and that because it was a physical device subject to manufacturing flaws, Michelson was convinced that the overshoot was caused by errors in the machine. In fact the graphs produced by the machine were not good enough to exhibit the Gibbs phenomenon clearly, and Michelson may not have noticed it, as he made no mention of the effect in his paper about the machine (Michelson & Stratton 1898) or in his later letters to Nature.

Inspired by correspondence in Nature between Michelson and Love about the convergence of the Fourier series of the square wave function, in 1898 J. Willard Gibbs published a short note in which he considered what today would be called a sawtooth wave, and pointed out the important distinction between the limit of the graphs of the partial sums of the Fourier series and the graph of the function that is the limit of those partial sums. In his first letter Gibbs failed to notice the Gibbs phenomenon, and the limit that he described for the graphs of the partial sums was inaccurate. In 1899 he published a correction in which he described the overshoot at the point of discontinuity (Nature, April 27, 1899, p. 606).

In 1906, Maxime Bôcher gave a detailed mathematical analysis of the overshoot, which he called the "Gibbs phenomenon".[6]

### Explanation

Informally, the Gibbs phenomenon reflects the difficulty inherent in approximating a discontinuous function by a finite series of continuous sine and cosine waves. The word finite is important, because even though every partial sum of the Fourier series overshoots the function it is approximating, the limit of the partial sums does not. The value of x where the maximum overshoot is achieved moves closer and closer to the discontinuity as the number of terms summed increases, so, again informally, once the overshoot has passed a particular x, convergence at that value of x is possible.

There is no contradiction between the overshoot converging to a non-zero amount and the limit of the partial sums having no overshoot, because the location of that overshoot moves. We have pointwise convergence, but not uniform convergence. For a piecewise $C^1$ function the Fourier series converges to the function at every point except at the jump discontinuities. At the jump discontinuities themselves the limit converges to the average of the values of the function on either side of the jump. This is a consequence of the Dirichlet theorem.[7]

The Gibbs phenomenon is also closely related to the principle that the decay of the Fourier coefficients of a function at infinity is controlled by the smoothness of that function; very smooth functions will have very rapidly decaying Fourier coefficients (resulting in the rapid convergence of the Fourier series), whereas discontinuous functions will have very slowly decaying Fourier coefficients (causing the Fourier series to converge very slowly). Note for instance that the Fourier coefficients 1, −1/3, 1/5, ... of the discontinuous square wave described above decay only as fast as the harmonic series, which is not absolutely convergent; indeed, the above Fourier series turns out to be only conditionally convergent for almost every value of x. This provides a partial explanation of the Gibbs phenomenon, since Fourier series with absolutely convergent Fourier coefficients would be uniformly convergent by the Weierstrass M-test and would thus be unable to exhibit the above oscillatory behavior. By the same token, it is impossible for a discontinuous function to have absolutely convergent Fourier coefficients, since the function would thus be the uniform limit of continuous functions and therefore be continuous, a contradiction. See more about absolute convergence of Fourier series.
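The slow decay is easy to verify numerically. The sketch below (plain Python; the function names are illustrative) estimates the sine coefficients of the square wave by midpoint-rule quadrature and recovers $b_n = 1/n$ for odd $n$ and $0$ for even $n$, i.e. harmonic-series decay:

```python
import math

def square(x):
    # square wave of height pi/4 and period 2*pi, as described above
    return math.pi / 4 if (x % (2 * math.pi)) < math.pi else -math.pi / 4

def b(n, m=20000):
    # midpoint-rule estimate of b_n = (1/pi) * integral_0^{2pi} f(x) sin(nx) dx
    h = 2 * math.pi / m
    return sum(square((k + 0.5) * h) * math.sin(n * (k + 0.5) * h)
               for k in range(m)) * h / math.pi

print([round(b(n), 4) for n in (1, 2, 3, 4, 5)])  # ~ [1, 0, 1/3, 0, 1/5]
```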

### Solutions

In practice, the difficulties associated with the Gibbs phenomenon can be ameliorated by using a smoother method of Fourier series summation, such as Fejér summation or Riesz summation, or by using sigma-approximation. With a wavelet transform using Haar basis functions, the Gibbs phenomenon does not occur for continuous data at jump discontinuities,[8] and is minimal in the discrete case at large change points. In wavelet analysis, this is commonly referred to as the Longo phenomenon.
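As a concrete illustration of smoother summation, the sketch below (plain Python; function names are illustrative) compares the peak of an ordinary partial sum of the square-wave series with the peak of its Fejér (Cesàro) mean. Because the Fejér kernel is non-negative and integrates to 1, the averaged sums never exceed the jump height $\pi/4$, while the ordinary partial sums overshoot it by about 9%:

```python
import math

def partial(N, x):
    # ordinary partial sum of the square-wave series (odd harmonics up to N)
    return sum(math.sin(n * x) / n for n in range(1, N + 1, 2))

def fejer(N, x):
    # Fejér (Cesàro) mean: same terms with triangular weights 1 - n/(N+1)
    return sum((1 - n / (N + 1)) * math.sin(n * x) / n
               for n in range(1, N + 1, 2))

xs = [i * 0.0005 for i in range(1, 2001)]        # sample the interval (0, 1]
peak_partial = max(partial(79, x) for x in xs)   # exceeds pi/4 (~9% overshoot)
peak_fejer = max(fejer(79, x) for x in xs)       # stays below pi/4
print(round(peak_partial, 4), round(peak_fejer, 4))
```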

## Formal mathematical description of the phenomenon

Let $f: {\Bbb R} \to {\Bbb R}$ be a piecewise continuously differentiable function which is periodic with some period $L > 0$. Suppose that at some point $x_0$, the left limit $f(x_0^-)$ and right limit $f(x_0^+)$ of the function $f$ differ by a non-zero gap $a$:

$f(x_0^+) - f(x_0^-) = a \neq 0.$

For each integer N ≥ 1, let $S_N f$ be the $N$th partial Fourier series

$S_N f(x) := \sum_{-N \leq n \leq N} \hat f(n) e^{\frac{2i\pi n x}{L}} = \frac{1}{2} a_0 + \sum_{n=1}^N \left( a_n \cos\left(\frac{2\pi nx}{L}\right) + b_n \sin\left(\frac{2\pi nx}{L}\right) \right),$

where the Fourier coefficients $\hat f(n), a_n, b_n$ are given by the usual formulae

$\hat f(n) := \frac{1}{L} \int_0^L f(x) e^{-2i\pi n x/L}\, dx$
$a_n := \frac{2}{L} \int_0^L f(x) \cos\left(\frac{2\pi nx}{L}\right)\, dx$
$b_n := \frac{2}{L} \int_0^L f(x) \sin\left(\frac{2\pi nx}{L}\right)\, dx.$

Then we have

$\lim_{N \to \infty} S_N f\left(x_0 + \frac{L}{2N}\right) = f(x_0^+) + a\cdot (0.089490\dots)$

and

$\lim_{N \to \infty} S_N f\left(x_0 - \frac{L}{2N}\right) = f(x_0^-) - a\cdot (0.089490\dots)$

but

$\lim_{N \to \infty} S_N f(x_0) = \frac{f(x_0^-) + f(x_0^+)}{2}.$

More generally, if $x_N$ is any sequence of real numbers which converges to $x_0$ as $N \to \infty$, and if the gap a is positive then

$\limsup_{N \to \infty} S_N f(x_N) \leq f(x_0^+) + a\cdot (0.089490\dots)$

and

$\liminf_{N \to \infty} S_N f(x_N) \geq f(x_0^-) - a\cdot (0.089490\dots).$

If instead the gap a is negative, one needs to interchange limit superior with limit inferior, and also interchange the ≤ and ≥ signs, in the above two inequalities.
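For the square wave these limits can be observed directly. In the sketch below (plain Python; names are illustrative), $L = 2\pi$, $x_0 = 0$, and $a = \pi/2$, so $S_N f(L/2N)$ should approach $\pi/4 + (\pi/2)\cdot(0.089490\dots) \approx 0.925969$:

```python
import math

def S(N, x):
    # N-th partial Fourier sum of the square wave of height pi/4 (N even)
    return sum(math.sin(n * x) / n for n in range(1, N, 2))

L = 2 * math.pi    # period; the discontinuity x0 = 0 has jump a = pi/2
for N in (100, 1000, 10000):
    print(N, round(S(N, L / (2 * N)), 6))   # tends to ~0.925969
```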

## Signal processing explanation

For more details on this topic, see Ringing artifacts.
The sinc function, the impulse response of an ideal low-pass filter. Scaling narrows the function, and correspondingly increases magnitude (which is not shown here), but does not reduce the magnitude of the undershoot, which is the integral of the tail.

From the point of view of signal processing, the Gibbs phenomenon is the step response of a low-pass filter, and the oscillations are called ringing or ringing artifacts. Truncating the Fourier transform of a signal on the real line, or the Fourier series of a periodic signal (equivalently, a signal on the circle) corresponds to filtering out the higher frequencies by an ideal (brick-wall) low-pass/high-cut filter. This can be represented as convolution of the original signal with the impulse response of the filter (also known as the kernel), which is the sinc function. Thus the Gibbs phenomenon can be seen as the result of convolving a Heaviside step function (if periodicity is not required) or a square wave (if periodic) with a sinc function: the oscillations in the sinc function cause the ripples in the output.

The sine integral, exhibiting the Gibbs phenomenon for a step function on the real line.

In the case of convolving with a Heaviside step function, the resulting function is exactly the integral of the sinc function, the sine integral; for a square wave the description is not as simply stated. For the step function, the magnitude of the undershoot is thus exactly the integral of the (left) tail, integrating up to the first negative zero: for the normalized sinc of unit sampling period, this is $\int_{-\infty}^{-1} \frac{\sin(\pi x)}{\pi x}\,dx.$ The overshoot has the same magnitude: the integral of the right tail, or, equivalently, the integral from negative infinity to the first positive zero minus 1 (the non-overshooting value).

The overshoot and undershoot can be understood thus: kernels are generally normalized to have integral 1, so they result in a mapping of constant functions to constant functions – otherwise they have gain. The value of a convolution at a point is a linear combination of the input signal, with coefficients (weights) the values of the kernel. If a kernel is non-negative, such as a Gaussian kernel, then the value of the filtered signal will be a convex combination of the input values (the kernel values are non-negative and integrate to 1), and will thus fall between the minimum and maximum of the input signal – it will not undershoot or overshoot. If, on the other hand, the kernel assumes negative values, such as the sinc function, then the value of the filtered signal will instead be an affine combination of the input values, and may fall outside of the minimum and maximum of the input signal, resulting in undershoot and overshoot, as in the Gibbs phenomenon.

Taking a longer expansion – cutting at a higher frequency – corresponds in the frequency domain to widening the brick-wall, which in the time domain corresponds to narrowing the sinc function and increasing its height by the same factor, leaving the integrals between corresponding points unchanged. This is a general feature of the Fourier transform: widening in one domain corresponds to narrowing and increasing height in the other. This results in the oscillations in sinc being narrower and taller and, in the filtered function (after convolution), yields oscillations that are narrower and thus have less area, but does not reduce the magnitude: cutting off at any finite frequency results in a sinc function, however narrow, with the same tail integrals. This explains the persistence of the overshoot and undershoot.

Thus the features of the Gibbs phenomenon are interpreted as follows:

• the undershoot is due to the impulse response having a negative tail integral, which is possible because the function takes negative values;
• the overshoot offsets this, by symmetry (the overall integral does not change under filtering);
• the persistence of the oscillations is because increasing the cutoff narrows the impulse response, but does not reduce its integral – the oscillations thus move towards the discontinuity, but do not decrease in magnitude.
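This persistence can be seen numerically. The sketch below (plain Python, midpoint-rule quadrature; names are illustrative) computes the step response of an ideal low-pass filter by integrating the scaled kernel $B\,\operatorname{sinc}(Bt)$ up to its first peak at $t = 1/B$: widening the cutoff (larger $B$) narrows the ripples but leaves the peak value, and hence the overshoot, essentially unchanged at $1/2 + \mathrm{Si}(\pi)/\pi \approx 1.0895$:

```python
import math

def sinc(t):
    # normalized sinc, the ideal low-pass impulse response
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def step_response(x, B, T=200.0, n=200000):
    # y(x) = integral_{-T}^{x} B*sinc(B*t) dt : response of an ideal
    # low-pass filter with bandwidth ~B to a unit step at t = 0
    h = (x + T) / n
    return sum(B * sinc(B * (-T + (k + 0.5) * h)) for k in range(n)) * h

for B in (1, 4, 16):
    print(B, round(step_response(1.0 / B, B), 4))  # peak stays ~1.0895
```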

## The square wave example

Animation of the additive synthesis of a square wave with an increasing number of harmonics. The Gibbs phenomenon is visible especially when the number of harmonics is large.

We now illustrate the above Gibbs phenomenon in the case of the square wave described earlier. In this case the period L is $2\pi$, the discontinuity $x_0$ is at zero, and the jump a is equal to $\pi/2$. For simplicity let us just deal with the case when N is even (the case of odd N is very similar). Then we have

$S_N f(x) = \sin(x) + \frac{1}{3} \sin(3x) + \cdots + \frac{1}{N-1} \sin((N-1)x).$

Substituting $x=0$, we obtain

$S_N f(0) = 0 = \frac{-\frac{\pi}{4} + \frac{\pi}{4}}{2} = \frac{f(0^-) + f(0^+)}{2}$

as claimed above. Next, we compute

$S_N f\left(\frac{2\pi}{2N}\right) = \sin\left(\frac{\pi}{N}\right) + \frac{1}{3} \sin\left(\frac{3\pi}{N}\right) + \cdots + \frac{1}{N-1} \sin\left( \frac{(N-1)\pi}{N} \right).$

If we introduce the normalized sinc function, $\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$, we can rewrite this as

$S_N f\left(\frac{2\pi}{2N}\right) = \frac{\pi}{2} \left[ \frac{2}{N} \operatorname{sinc}\left(\frac{1}{N}\right) + \frac{2}{N} \operatorname{sinc}\left(\frac{3}{N}\right) + \cdots + \frac{2}{N} \operatorname{sinc}\left( \frac{(N-1)}{N} \right) \right].$

But the expression in square brackets is a numerical integration approximation to the integral $\int_0^1 \operatorname{sinc}(x)\ dx$ (more precisely, it is a midpoint rule approximation with spacing $2/N$). Since the sinc function is continuous, this approximation converges to the actual integral as $N \to \infty$. Thus we have

\begin{align} \lim_{N \to \infty} S_N f\left(\frac{2\pi}{2N}\right) & = \frac{\pi}{2} \int_0^1 \operatorname{sinc}(x)\, dx \\[8pt] & = \frac{1}{2} \int_{x=0}^1 \frac{\sin(\pi x)}{\pi x}\, d(\pi x) \\[8pt] & = \frac{1}{2} \int_0^\pi \frac{\sin(t)}{t}\ dt \quad = \quad \frac{\pi}{4} + \frac{\pi}{2} \cdot (0.089490\dots), \end{align}

which was what was claimed in the previous section. A similar computation shows

$\lim_{N \to \infty} S_N f\left(-\frac{2\pi}{2N}\right) = -\frac{\pi}{2} \int_0^1 \operatorname{sinc}(x)\ dx = -\frac{\pi}{4} - \frac{\pi}{2} \cdot (0.089490\dots).$
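The midpoint-rule claim can be checked directly. This sketch (plain Python; names are illustrative) evaluates the bracketed expression and shows it converging to $\int_0^1 \operatorname{sinc}(x)\,dx = \mathrm{Si}(\pi)/\pi \approx 0.589490$:

```python
import math

def sinc(x):
    # normalized sinc function
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def bracket(N):
    # the square-bracket expression: midpoint rule with spacing 2/N
    # at the nodes 1/N, 3/N, ..., (N-1)/N (N even)
    return sum((2.0 / N) * sinc((2 * k + 1) / N) for k in range(N // 2))

for N in (10, 100, 1000):
    print(N, round(bracket(N), 6))   # converges to ~0.589490
```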

## Consequences

In signal processing, the Gibbs phenomenon is undesirable because it causes artifacts, namely clipping from the overshoot and undershoot, and ringing artifacts from the oscillations. In the case of low-pass filtering, these can be reduced or eliminated by using different low-pass filters.

In MRI, the Gibbs phenomenon causes artifacts in the presence of adjacent regions of markedly differing signal intensity. This is most commonly encountered in spinal MR imaging, where the Gibbs phenomenon may simulate the appearance of syringomyelia.