Poisson summation formula


In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

Forms of the equation

For appropriate functions f, the Poisson summation formula may be stated as:

\sum_{n=-\infty}^\infty f(n)=\sum_{k=-\infty}^\infty \hat f(k),     (Eq.1)

where \hat f is the Fourier transform[1] of f; that is, \hat f(\nu) = \mathcal{F}\{f(x)\}.
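As an illustrative numerical check of Eq.1 (a sketch, not part of the formula's statement), the following Python snippet uses the Gaussian f(x) = exp(−πax²), whose Fourier transform under the convention of note 1 is a^{−1/2} exp(−πν²/a); the value a = 2 and the truncation bound are arbitrary choices for illustration.

```python
import math

# Numerical check of Eq.1 for f(x) = exp(-pi*a*x^2), a > 0.
# Under the convention f_hat(v) = integral f(x) exp(-2*pi*i*v*x) dx,
# the transform is f_hat(v) = a**-0.5 * exp(-pi * v**2 / a).

a = 2.0
N = 20  # truncation; the Gaussian tails are negligible beyond |n| = 20

def f(x):
    return math.exp(-math.pi * a * x * x)

def f_hat(v):
    return math.exp(-math.pi * v * v / a) / math.sqrt(a)

time_side = sum(f(n) for n in range(-N, N + 1))
freq_side = sum(f_hat(k) for k in range(-N, N + 1))
print(time_side, freq_side)  # the two sums agree
```

Both sums converge so rapidly that the small truncation already reproduces the identity to machine precision.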






With the substitution g(xP)\ \stackrel{\text{def}}{=}\ f(x) and the Fourier transform property \mathcal{F}\{g(x P)\} = \frac{1}{P} \cdot \hat g\left(\frac{\nu}{P}\right) (for P > 0), Eq.1 becomes:

\sum_{n=-\infty}^\infty g(nP)=\frac{1}{P}\sum_{k=-\infty}^\infty \hat g\left(\frac{k}{P}\right)     (Eq.2)     (Stein & Weiss 1971).






With another definition, s(t+x)\ \stackrel{\text{def}}{=}\ g(x), and the transform property \mathcal{F}\{s(t+x)\} = \hat s(\nu)\cdot e^{i 2\pi \nu t}, Eq.2 becomes a periodic summation (with period P) and its equivalent Fourier series:

\underbrace{\sum_{n=-\infty}^{\infty} s(t + nP)}_{s_P(t)} = \sum_{k=-\infty}^{\infty} \underbrace{\frac{1}{P}\cdot \hat s\left(\frac{k}{P}\right)}_{S[k]}\ e^{i 2\pi \frac{k}{P} t}     (Eq.3)     (Pinsky 2002; Zygmund 1968).
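Eq.3 can likewise be checked at a single point. The sketch below uses the self-dual Gaussian s(t) = exp(−πt²) (its own Fourier transform under the convention of note 1); the period P = 1.5 and evaluation point t = 0.3 are arbitrary illustrative choices.

```python
import cmath
import math

# Numerical check of Eq.3 at one point, for s(t) = exp(-pi t^2),
# which equals its own Fourier transform under the convention of note 1.
# Assumed (illustrative) parameters: period P = 1.5, point t = 0.3.

P, t = 1.5, 0.3

def s(u):
    return math.exp(-math.pi * u * u)

# Left side: the periodic summation s_P(t), truncated where terms vanish
lhs = sum(s(t + n * P) for n in range(-20, 21))

# Right side: the Fourier series with coefficients S[k] = (1/P) s_hat(k/P)
rhs = sum(
    (s(k / P) / P) * cmath.exp(2j * math.pi * (k / P) * t)
    for k in range(-40, 41)
).real

print(lhs, rhs)  # the two sides agree
```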






Similarly, the periodic summation of a function's Fourier transform has this Fourier series equivalent:

\sum_{k=-\infty}^{\infty} \hat s(\nu + k/T) = \sum_{n=-\infty}^{\infty} T\cdot s(nT)\ e^{-i 2\pi n T \nu} \equiv \mathcal{F}\left\{ \sum_{n=-\infty}^{\infty} T\cdot s(nT)\ \delta(t-nT)\right\},     (Eq.4)






where T is the interval between samples of the function s(t), and 1/T is the sampling rate in samples per second.
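The sampling identity Eq.4 can also be verified numerically. This sketch again uses the self-dual Gaussian s(t) = exp(−πt²); the sample interval T = 0.5 and the frequency ν = 0.2 are arbitrary illustrative choices.

```python
import cmath
import math

# Numerical check of Eq.4 for s(t) = exp(-pi t^2), self-dual under the
# convention of note 1. Assumed (illustrative) parameters:
# sample interval T = 0.5, frequency nu = 0.2.

T, nu = 0.5, 0.2

def s(u):          # also equals s_hat under this convention
    return math.exp(-math.pi * u * u)

# Left side: periodic summation of the Fourier transform, period 1/T
lhs = sum(s(nu + k / T) for k in range(-20, 21))

# Right side: DTFT-style sum over the samples s(nT)
rhs = sum(
    T * s(n * T) * cmath.exp(-2j * math.pi * n * T * nu)
    for n in range(-20, 21)
).real

print(lhs, rhs)  # the two sides agree
```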

Distributional formulation

These equations can be interpreted in the language of distributions (Córdoba 1988; Hörmander 1983, §7.2) for a function or distribution, f, whose derivatives are all rapidly decreasing (see Schwartz function). Using the Dirac comb distribution and its Fourier series:

\sum_{n=-\infty}^\infty \delta(x-nP) \equiv \sum_{k=-\infty}^\infty  \frac{1}{P}\cdot e^{-i 2\pi \frac{k}{P} x} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{P}\cdot \sum_{k=-\infty}^{\infty} \delta (\nu+k/P),
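The symmetric partial sums of the Dirac comb's Fourier series are Dirichlet kernels, which concentrate at the integers as the truncation grows. The following sketch checks the closed form of such a partial sum (for P = 1) at an arbitrary illustrative point.

```python
import cmath
import math

# Partial sums of the Dirac comb's Fourier series (P = 1) are Dirichlet
# kernels: sum_{k=-N..N} exp(-i*2*pi*k*x) = sin((2N+1)*pi*x)/sin(pi*x).
# As N grows they concentrate at the integers, approximating the comb.

def dirichlet_partial(N, x):
    return sum(cmath.exp(-2j * math.pi * k * x) for k in range(-N, N + 1))

N, x = 25, 0.13  # illustrative truncation and (non-integer) point
closed_form = math.sin((2 * N + 1) * math.pi * x) / math.sin(math.pi * x)
print(dirichlet_partial(N, x).real, closed_form)  # agree
```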






Eq.1 readily follows:

\begin{align}
\sum_{k=-\infty}^\infty \hat f(k)
&= \sum_{k=-\infty}^\infty \left(\int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi k x}\, dx \right)
= \int_{-\infty}^{\infty} f(x) \underbrace{\left(\sum_{k=-\infty}^\infty e^{-i 2\pi k x}\right)}_{\sum_{n=-\infty}^\infty \delta(x-n)}\, dx \\
&= \sum_{n=-\infty}^\infty \left(\int_{-\infty}^{\infty} f(x)\ \delta(x-n)\, dx \right) = \sum_{n=-\infty}^\infty f(n).
\end{align}


Similarly:

\begin{align}
\sum_{k=-\infty}^{\infty} \hat s(\nu + k/T)
&= \sum_{k=-\infty}^{\infty} \mathcal{F}\left\{ s(t)\cdot e^{-i 2\pi\frac{k}{T}t}\right\}\\
&= \mathcal{F}\bigg\{ s(t)\underbrace{\sum_{k=-\infty}^{\infty} e^{-i 2\pi\frac{k}{T}t}}_{T \sum_{n=-\infty}^{\infty} \delta(t-nT)}\bigg\}
= \mathcal{F}\left\{\sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \delta(t-nT)\right\}\\
&= \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \mathcal{F}\left\{\delta(t-nT)\right\}
= \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot e^{-i 2\pi nT \nu}.
\end{align}


Convergence

We can also prove that Eq.3 holds in the sense that if s(t) ∈ L1(R), then the right-hand side is the (possibly divergent) Fourier series of the left-hand side. This proof may be found in either (Pinsky 2002) or (Zygmund 1968). It follows from the dominated convergence theorem that s_P(t) exists and is finite for almost every t, and furthermore that s_P is integrable on the interval [0,P]. The right-hand side of Eq.3 has the form of a Fourier series, so it is sufficient to show that the Fourier series coefficients of s_P(t) are \frac{1}{P}\,\hat s\left(\frac{k}{P}\right). Proceeding from the definition of the Fourier coefficients we have:

\begin{align}
S[k]\ &\stackrel{\text{def}}{=}\ \frac{1}{P}\int_0^{P} s_P(t)\cdot e^{-i 2\pi \frac{k}{P} t}\, dt\\
&=\ \frac{1}{P}\int_0^{P}
     \left(\sum_{n=-\infty}^{\infty} s(t + nP)\right)
     \cdot e^{-i 2\pi\frac{k}{P} t}\, dt\\
&=\ \frac{1}{P} \sum_{n=-\infty}^{\infty}
        \int_0^{P} s(t + nP)\cdot e^{-i 2\pi\frac{k}{P} t}\, dt,
\end{align}

where the interchange of summation with integration is once again justified by dominated convergence. With a change of variables (τ = t + nP) this becomes:

S[k] =
\frac{1}{P} \sum_{n=-\infty}^{\infty} \int_{nP}^{nP + P} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} \ \underbrace{e^{i 2\pi k n}}_{1}\,d\tau
\ =\ \frac{1}{P} \int_{-\infty}^{\infty} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau}\, d\tau = \frac{1}{P}\cdot \hat s\left(\frac{k}{P}\right).
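The conclusion of this derivation can be verified by direct quadrature: compute a Fourier coefficient of the periodized function numerically and compare it with (1/P)·ŝ(k/P). The sketch below does so for the self-dual Gaussian; P = 2, k = 1, and the grid size are arbitrary illustrative choices.

```python
import cmath
import math

# Check that the Fourier coefficients of the periodic summation s_P
# equal (1/P) * s_hat(k/P), for s(t) = exp(-pi t^2), which is self-dual
# under the convention of note 1. Illustrative parameters: P = 2, k = 1.

P, k = 2.0, 1
M = 2000  # quadrature points; equispaced rules are extremely accurate
          # for smooth periodic integrands

def s(t):
    return math.exp(-math.pi * t * t)

def s_P(t):  # periodic summation, truncated where terms vanish
    return sum(s(t + n * P) for n in range(-10, 11))

# S[k] = (1/P) * integral_0^P s_P(t) exp(-i 2 pi k t / P) dt,
# approximated on an equispaced grid: (1/P)*(P/M) = 1/M per sample
coeff = sum(
    s_P(m * P / M) * cmath.exp(-2j * math.pi * k * (m * P / M) / P)
    for m in range(M)
) / M

expected = s(k / P) / P  # (1/P) * s_hat(k/P), with s_hat = s
print(coeff.real, expected)
```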


Eq.3 holds provided s(t) is a continuous integrable function which satisfies

|s(t)| + |\hat{s}(t)| \le C (1+|t|)^{-1-\delta}

for some C, δ > 0 and every t (Grafakos 2004; Stein & Weiss 1971). Note that such s(t) is uniformly continuous; this, together with the decay assumption on s, shows that the series defining s_P converges uniformly to a continuous function. Eq.3 then holds in the strong sense that both sides converge uniformly and absolutely to the same limit (Stein & Weiss 1971).

Eq.3 holds in a pointwise sense under the strictly weaker assumption that s has bounded variation and

2\cdot s(t)=\lim_{\varepsilon\to 0^+} s(t+\varepsilon) + \lim_{\varepsilon\to 0^+} s(t-\varepsilon)     (Zygmund 1968).

The Fourier series on the right-hand side of Eq.3 is then understood as a (conditionally convergent) limit of symmetric partial sums.

As shown above, Eq.3 holds under the much less restrictive assumption that s(t) is in L1(R), but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of s_P(t) (Zygmund 1968). In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way, Eq.2 holds under the less restrictive conditions that g(x) is integrable and 0 is a point of continuity of g_P(x). However Eq.2 may fail to hold even when both g and \hat{g} are integrable and continuous, and the sums converge absolutely (Katznelson 1976).


Applicability

In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on R2 is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions (Grafakos 2004).

In signal processing, the Poisson summation formula leads to the Discrete-time Fourier transform and the Nyquist–Shannon sampling theorem (Pinsky 2002).
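The sampling-theorem connection can be made concrete with a small aliasing demonstration: sampled at rate fs = 1/T, a sinusoid of frequency f0 and one of frequency f0 + fs are indistinguishable, consistent with the 1/T-periodic summation of the spectrum in Eq.4. The frequencies below are arbitrary illustrative choices.

```python
import math

# Aliasing illustration: at sample rate fs = 1/T, sinusoids at f0 and
# f0 + fs produce identical samples, because the sampled spectrum is the
# periodic summation (period 1/T) of the original spectrum (Eq.4).

fs = 5.0          # samples per second (illustrative), so T = 1/fs
T = 1.0 / fs
f0 = 1.0          # hertz (illustrative)
samples_low  = [math.sin(2 * math.pi * f0 * n * T) for n in range(50)]
samples_high = [math.sin(2 * math.pi * (f0 + fs) * n * T) for n in range(50)]
max_diff = max(abs(a - b) for a, b in zip(samples_low, samples_high))
print(max_diff)  # ~0: the two sample sequences coincide
```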

Computationally, the Poisson summation formula is useful because a slowly converging summation in real space can be converted into an equivalent, quickly converging summation in Fourier space: a function that is broad in real space is narrow in Fourier space and vice versa. This is the essential idea behind Ewald summation.
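A classical instance of this speed-up (a sketch, with f chosen for illustration): for f(x) = 1/(x² + a²) the real-space sum converges like 1/N, while the Fourier transform f̂(ν) = (π/a)e^{−2πa|ν|} turns the same quantity into a geometric series; both equal (π/a)coth(πa).

```python
import math

# Poisson summation applied to f(x) = 1/(x^2 + a^2), whose transform is
# f_hat(v) = (pi/a) * exp(-2*pi*a*|v|). Both sums equal (pi/a)/tanh(pi*a).

a = 1.0
exact = (math.pi / a) / math.tanh(math.pi * a)

# Slow: direct real-space sum, truncated at |n| <= 10_000 (tail ~ 2/N)
direct = sum(1.0 / (n * n + a * a) for n in range(-10_000, 10_001))

# Fast: Fourier-space sum; a handful of terms reach machine precision
fourier = sum((math.pi / a) * math.exp(-2 * math.pi * a * abs(k))
              for k in range(-10, 11))

print(exact - direct, exact - fourier)  # ~1e-4 vs ~0
```

Twenty thousand real-space terms still leave an error around 10⁻⁴, while twenty-one Fourier-space terms are exact to machine precision.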

The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points in a large Euclidean sphere. It can also be used to show that if an integrable function f and its Fourier transform \hat f both have compact support, then f = 0 (Pinsky 2002).

Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function.[2]
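The key step in that derivation is the Jacobi theta functional equation, obtained by applying Eq.1 to the Gaussian x ↦ exp(−πx²t). The sketch below checks it numerically at an arbitrary illustrative value of t.

```python
import math

# Jacobi theta function theta(t) = sum_n exp(-pi n^2 t), t > 0.
# Poisson summation applied to x -> exp(-pi x^2 t) yields
# theta(1/t) = sqrt(t) * theta(t), the identity behind the functional
# equation of the Riemann zeta function.

def theta(t, N=50):
    return sum(math.exp(-math.pi * n * n * t) for n in range(-N, N + 1))

t = 2.0  # illustrative
print(theta(1 / t), math.sqrt(t) * theta(t))  # agree
```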


Generalizations

The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let Λ be the lattice in Rd consisting of points with integer coordinates; Λ is the character group, or Pontryagin dual, of the torus Rd/Λ. For a function ƒ in L1(Rd), consider the series given by summing the translates of ƒ by elements of Λ:

\sum_{\nu\in\Lambda} f(x+\nu).

Theorem. For ƒ in L1(Rd), the above series converges pointwise almost everywhere, and thus defines a Λ-periodic function Pƒ. Pƒ lies in L1(Rd/Λ) with ||Pƒ||1 ≤ ||ƒ||1. Moreover, for all ν in Λ, the Fourier coefficient P̂ƒ(ν) (computed on the torus Rd/Λ) equals ƒ̂(ν), the Fourier transform of ƒ on Rd.

When ƒ is in addition continuous, and both ƒ and ƒ̂ decay sufficiently fast at infinity, then one can "invert" the domain back to Rd and make a stronger statement. More precisely, if

|f(x)| + |\hat{f}(x)| \le C (1+|x|)^{-d-\delta}

for some C, δ > 0, then

\sum_{\nu\in\Lambda} f(x+\nu) = \sum_{\nu\in\Lambda}\hat{f}(\nu)\,e^{2\pi i x\cdot\nu},     (Stein & Weiss 1971, VII §2)

where both series converge absolutely and uniformly in x. When d = 1 and x = 0, this gives the formula given in the first section above.
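A numeric sketch of the multidimensional formula for d = 2, using the Gaussian f(x) = exp(−π|x|²), which is its own Fourier transform in any dimension; the evaluation point and truncation radius are arbitrary illustrative choices.

```python
import cmath
import math

# Check of the d = 2 Poisson summation formula with the self-dual
# Gaussian f(x) = exp(-pi |x|^2) at the (illustrative) point x = (0.2, 0.4).
# Both lattice sums are truncated; Gaussian decay makes the tails negligible.

x = (0.2, 0.4)
R = 6  # truncation radius for both lattice sums

lhs = sum(
    math.exp(-math.pi * ((x[0] + m) ** 2 + (x[1] + n) ** 2))
    for m in range(-R, R + 1) for n in range(-R, R + 1)
)
rhs = sum(
    math.exp(-math.pi * (m * m + n * n))
    * cmath.exp(2j * math.pi * (x[0] * m + x[1] * n))
    for m in range(-R, R + 1) for n in range(-R, R + 1)
).real
print(lhs, rhs)  # agree
```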

More generally, a version of the statement holds if Λ is replaced by a more general lattice in Rd. The dual lattice Λ′ can be defined as a subset of the dual vector space or alternatively by Pontryagin duality. Then the statement is that the sum of delta-functions at each point of Λ, and at each point of Λ′, are again Fourier transforms as distributions, subject to correct normalization.

This is applied in the theory of theta functions, and is a possible method in the geometry of numbers. In fact, in more recent work on counting lattice points in regions it is routinely used: summing the indicator function of a region D over lattice points is exactly the question, so that the left-hand side of the summation formula is what is sought and the right-hand side is something that can be attacked by mathematical analysis.

Further generalisation to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, where it takes on a much deeper character.

Notes
  1. ^ \hat{f}(\nu)\ \stackrel{\mathrm{def}}{=}\int_{-\infty}^{\infty} f(x)\ e^{-2\pi i\nu x}\, dx.
  2. ^ H. M. Edwards (1974). Riemann's Zeta Function. Academic Press, pp. 209–211. ISBN 0-486-41740-9.