# Talk:Poisson summation formula

## How to prove a summation formula?

As indicated in the applications, the PSF can help to prove

$\sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6}$

Does anyone know the details? I tried it myself, but could not figure it out. Malaybear 11:08, 23 August 2007 (UTC)

Hi Malaybear, and welcome to Wikipedia!
I'm not sure how to prove it myself, but let's think it through together! :) Willow 11:19, 23 August 2007 (UTC)

## Trial proof of summation formula

Here are the basic definitions from the article. The summation equals

$S(t) \ \stackrel{\mathrm{def}}{=}\ \sum_{n=-\infty}^{\infty} f(t + n T) = \frac{1}{T} \sum_{m=-\infty}^{\infty} \tilde{F}(m \omega_0) \ e^{i m \omega_0 t}$

where the continuous Fourier transform is defined as

$\tilde{F}(\omega) \ \stackrel{\mathrm{def}}{=}\ \int_{-\infty}^{\infty} f(t) \ e^{-i\omega t} dt$

and the fundamental frequency $\omega_0$ is

$\omega_0 \ \stackrel{\mathrm{def}}{=}\ \frac{2\pi}{T}.$

This suggests that we try setting

$S_{0} = \sum_{n=-\infty}^{\infty} \frac{1}{b^{2} + n^{2}} = \frac{1}{b^{2}} + 2\sum_{n=1}^{\infty} \frac{1}{b^{2} + n^{2}}$

and take the limit $b \rightarrow 0$ at the end. We can obtain $S_0$ as a limit

$S_{0} = \lim_{t \rightarrow 0} S(t) = \lim_{t \rightarrow 0} \sum_{n=-\infty}^{\infty} \frac{1}{b^{2} + \left(t + nT \right)^{2}}$

where the period is T=1, so that ω0=2π. Therefore we have

$f(t) = \frac{1}{b^{2} + t^{2}}$

which we may write as

$f(t) = \frac{1}{\left( t + ib\right)\left( t - ib\right)} = \left( \frac{1}{2ib} \right) \left[ \frac{1}{t - ib} - \frac{1}{t + ib}\right]$

Using the residue theorem, one finds that the Fourier transform of this function is something like

$\tilde{F}(\omega) = \left(\frac{\pi}{b} \right) e^{-b \left| \omega \right|}$

I'm not totally sure if this is right, but let's try it and see what happens; you can always put the correct constant in later. Plugging into the Poisson summation formula gives
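Willow's guess for the transform can be sanity-checked by inverting it numerically: if $\tilde{F}(\omega) = (\pi/b)e^{-b|\omega|}$ is right, its inverse transform should give back $1/(b^2+t^2)$. A hypothetical, stdlib-only sketch (the helper names are illustrative, not from the discussion):

```python
import math

b = 1.0

def f(t):
    # The function from the derivation: f(t) = 1/(b^2 + t^2)
    return 1.0 / (b * b + t * t)

def inverse_ft(t, w_max=40.0, steps=40000):
    # Numerically invert the claimed transform F(w) = (pi/b) exp(-b|w|):
    # f(t) should equal (1/2pi) * integral F(w) e^{i w t} dw, which by
    # symmetry is (1/pi) * integral_0^inf (pi/b) e^{-b w} cos(w t) dw.
    h = w_max / steps
    total = 0.0
    for i in range(steps + 1):
        w = i * h
        weight = 0.5 if i in (0, steps) else 1.0  # trapezoid rule
        total += weight * (math.pi / b) * math.exp(-b * w) * math.cos(w * t)
    return total * h / math.pi

for t in (0.0, 0.7, 2.5):
    assert abs(inverse_ft(t) - f(t)) < 1e-6
```

So the claimed constant $\pi/b$ does appear to be right.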

$S_{0} = \lim_{t \rightarrow 0} \frac{1}{T} \sum_{m=-\infty}^{\infty} \tilde{F}(m \omega_0) \ e^{i m \omega_0 t} = \sum_{m=-\infty}^{\infty} \tilde{F}(m \omega_0)$

which is a geometric series

$S_{0} = \tilde{F}(0) + 2 \sum_{m=1}^{\infty} \tilde{F}(m \omega_0) = \left(\frac{\pi}{b} \right) \left[ 1 + 2 \sum_{m=1}^{\infty} e^{-b 2 \pi m} \right]$

This equals

$S_{0} = \left(\frac{\pi}{b} \right) \left[ -1 + 2 \sum_{m=0}^{\infty} e^{-b 2 \pi m} \right] = \left(\frac{\pi}{b} \right) \left[ -1 + \frac{2}{1 - e^{-2\pi b}} \right]$

Willow 13:20, 23 August 2007 (UTC)

Define T0 as the last term, and expand it for small b
$T_{0}= \frac{2}{1 - e^{-2\pi b}} = \frac{2}{1-\left(1-2\pi b+2(\pi b)^2-\frac{4}{3}(\pi b)^3+\cdots\right)}$
which can be written as
$T_{0} = \frac{1}{\pi b} \left[ 1+\pi b-\frac{2}{3}(\pi b)^2+(\pi b)^2 + \cdots \right] = \frac{1}{\pi b} \left[ 1+\pi b+\frac{1}{3}(\pi b)^2 + \cdots \right] = \frac{1}{\pi b} + 1 + \frac{\pi b}{3} + \cdots$
Substituting this into the original equation,
$S_{0} = \frac{\pi}{b}\left[ -1 + \frac{1}{\pi b} + 1 + \frac{\pi b}{3} + \cdots \right] =\frac{1}{b^2}+\frac{\pi^2}{3} + \cdots$
Comparing to the previous equation
$S_{0} = \sum_{n=-\infty}^{\infty} \frac{1}{b^{2} + n^{2}} = \frac{1}{b^{2}} + 2\sum_{n=1}^{\infty} \frac{1}{b^{2} + n^{2}}$
and taking the limit as $b$ goes to zero finishes the proof:
$2\sum_{n=1}^{\infty} \frac{1}{b^{2} + n^{2}} = \frac{\pi^2}{3} + \cdots \quad\Longrightarrow\quad \sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6}.$
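The chain of identities above can also be checked numerically; a hypothetical stdlib-only sketch (helper names are illustrative, not from the discussion):

```python
import math

def S0_direct(b, N=100000):
    # S0 = sum over all integers n of 1/(b^2 + n^2), truncated at |n| <= N
    return 1.0 / b**2 + 2.0 * sum(1.0 / (b * b + n * n) for n in range(1, N + 1))

def S0_closed(b):
    # Closed form obtained above via Poisson summation:
    # S0 = (pi/b) * (-1 + 2 / (1 - exp(-2*pi*b)))
    return (math.pi / b) * (-1.0 + 2.0 / (1.0 - math.exp(-2.0 * math.pi * b)))

# The two expressions agree for moderate b ...
assert abs(S0_direct(0.1) - S0_closed(0.1)) < 1e-4

# ... and as b -> 0 the closed form is 1/b^2 + pi^2/3 + O(b^2),
# so half of (S0 - 1/b^2) tends to sum_{n>=1} 1/n^2 = pi^2/6
b = 1e-4
basel = 0.5 * (S0_closed(b) - 1.0 / b**2)
assert abs(basel - math.pi**2 / 6) < 1e-3
```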
—The preceding unsigned comment was added by Malaybear (talkcontribs) 14:22, August 23, 2007 (UTC).

Brilliant, Malaybear — well done! The power of working together clears up both of our confusions! :) Willow 16:47, 23 August 2007 (UTC)

## Example

Does this help anyone understand the PSF?:

• $\sum_{k \in \mathbb Z} \frac{\gamma\left(\frac {1-s} 2, (k+ \nu_0)^2 \pi x \right)}{\left( (k+\nu_0)^2 \pi \right)^\frac {1-s} 2} e^{2 \pi i (k+\nu_0) t_0}= \sum_{k \in \mathbb Z} \frac {\Gamma\left(\frac s 2, \frac{(k+ t_0)^2 \pi} x \right)} {\left( (k+t_0)^2 \pi \right)^{\frac s 2}} e^{-2 \pi i k \nu_0} .$

It does nothing for me.

--Bob K (talk) 19:46, 6 May 2009 (UTC)

## over-generalization

User:A. Pichler would like to introduce this as a generalized statement of the PSF:

$\sum_{n \in \mathbb Z} f\left((n-\tau_0) T\right)e^{-2 \pi i \nu_0 (n-\tau_0)}= \frac{1}{T} \sum_{\nu \in \mathbb Z} \hat f\left(\frac{\nu+\nu_0}{T} \right) e^{-2 \pi i \nu \tau_0}$

because it's "useful".

That would be the same as introducing this useful formula as a generalized statement of the Fourier transform:

$\int_{-\infty}^{\infty}\left[s(t) e^{i 2\pi f_o t}\right] e^{-i 2\pi f t}\, dt = S(f-f_o)\,$

--Bob K (talk) 12:43, 7 May 2009 (UTC)

This is not a different form of the identity, but rather a simple application of the transformation law for the Fourier transform under translation:
$\mathcal{F}(\tau_h f)(\xi) = e^{2\pi i h\cdot\xi} (\mathcal{F}f)(\xi)$
as you say. Sławomir Biały (talk) 16:06, 15 May 2009 (UTC)
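Whatever one thinks of including it in the article, the "generalized" identity above can at least be confirmed numerically for a concrete function. A hypothetical check using the self-dual Gaussian $f(x)=e^{-\pi x^2}$ (for which $\hat f = f$), with arbitrary illustrative parameter values:

```python
import cmath, math

def f(x):
    # Self-dual Gaussian: f(x) = exp(-pi x^2), with f_hat = f
    return math.exp(-math.pi * x * x)

T, tau0, nu0 = 1.5, 0.3, 0.2
N = 40  # truncation; the Gaussian terms decay extremely fast

# Left-hand side: sum_n f((n - tau0) T) e^{-2 pi i nu0 (n - tau0)}
lhs = sum(f((n - tau0) * T) * cmath.exp(-2j * math.pi * nu0 * (n - tau0))
          for n in range(-N, N + 1))

# Right-hand side: (1/T) sum_nu f_hat((nu + nu0)/T) e^{-2 pi i nu tau0}
rhs = sum(f((nu + nu0) / T) * cmath.exp(-2j * math.pi * nu * tau0)
          for nu in range(-N, N + 1)) / T

assert abs(lhs - rhs) < 1e-12
```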

## References

I had taken a bit of a wikibreak, and this article has been heavily edited since. That is great. But back when I was contributing to this article heavily, I made some attempt to add references, and I notice that many of the references I put in have drifted to be attached to other statements. Has anyone been keeping up with the fact-checking to make sure we are still pointing people to the correct texts? I will do so tomorrow afternoon my time if no one else has gotten around to it. On a stylistic note, I have some preference for Harvard citations; unfortunately, back in this edit [1] the article was moved to a mix of footnote citations and Harvard citations. Thankfully someone made the article at least self-consistent. Are there any objections to me making the article consistent in the other direction? Thenub314 (talk) 21:56, 30 June 2009 (UTC)

A way to hedge against the kind of drift you are worried about is to be as specific as possible in the references. I generally like to cite a Theorem number if possible, or at least a section number. That also makes it clear which statement (or which part of the statement) the reference is intended to support. This is just advice: I don't universally practice what I preach in this regard. Best, Sławomir Biały (talk) 22:13, 3 July 2009 (UTC)

## Examples

None of the following examples seem especially helpful to me. The first two and last have absolutely no context, and don't even seem worth attempting to understand. The "Translation" property is quite obvious (see above thread), and doesn't add much to the article when it is just dumped in somewhere. Sławomir Biały (talk) 03:12, 4 July 2009 (UTC)

• \begin{align}\sum_{k\in \mathbb Z} e^{-k^2 \pi x} \frac{\left( k^2 \pi x \right)^s}{s!} &= \frac{1}{\sqrt x} \sum_{k\in \mathbb Z} e^{-\frac{k^2 \pi}{x}} L_s^{\left(-\frac{1}{2}\right)} \left(\frac{k^2 \pi}{x} \right)\\&= -\frac{\tan(s \pi)}{\sqrt x} \sum_{k\in \mathbb Z} \, L_{-s-\frac 1 2}^{\left(-\frac 1 2\right)} \left(-\frac{k^2 \pi}{x}\right),\end{align} or
• $\sum_{k\in \mathbb Z} e^{-k^2 \pi x} L_s^{(\alpha)} ( k^2 \pi x) = \frac{(-1)^s}{\sqrt x} \sum_{k\in \mathbb Z} e^{-\frac{k^2 \pi}{x}} L_s^{\left(-s-\alpha-\frac{1}{2}\right)} \left(\frac{k^2 \pi}{x} \right),$

where $L$ is the Laguerre function (which reduces to a polynomial of degree s in case s is an integer);

• Translation: $\sum_{n \in \mathbb Z} f\left((n-\tau_0) T\right)e^{-2 \pi i \nu_0 (n-\tau_0)}= \frac{1}{T} \sum_{\nu \in \mathbb Z} \hat f\left(\frac{\nu+\nu_0}{T} \right) e^{-2 \pi i \nu \tau_0};$
• $\sum_{k \in \mathbb Z} \frac{\gamma\left(\frac {1-s} 2, (k+ \nu_0)^2 \pi x \right)}{\left( (k+\nu_0)^2 \pi \right)^\frac {1-s} 2} \, e^{-2 \pi i k \tau_0}= \sum_{k \in \mathbb Z} \frac {\Gamma\left(\frac s 2, \frac{(k- \tau_0)^2 \pi} x \right)} {\left( (k- \tau_0)^2 \pi \right)^{\frac s 2}} \, e^{-2 \pi i (k- \tau_0) \nu_0},$

where $\Gamma$ and $\gamma$ denote the upper and lower incomplete gamma functions. Notably, the series converge and coincide, although the functions' respective Fourier transforms exist, because of the singularity, only in a formal sense.

## Going from Fourier series to discrete Fourier transform

The Poisson summation formula tells me how to get from the continuous Fourier transform to a Fourier series, namely by periodic summation in time. Now I can also get from the Fourier series to the discrete Fourier transform by periodic summation of the Fourier coefficients: if I periodize the Fourier coefficients with period n, this corresponds to a discrete Fourier transform of the periodic function sampled at n equally spaced points. What is the name of this relation? HenningThielemann (talk) 19:14, 16 October 2010 (UTC)

You might want to take this to the USENET newsgroup comp.dsp. I would say that the DFT fundamentally is a bijective mapping of a discrete and periodic sequence of infinite length and period N to another of the same period. You "get there" by sampling a continuous periodic function that is bandlimited. Being periodic, it is described by a Fourier series, and being bandlimited, only a finite set of coefficients are non-zero and contain information. You sample it at N equally spaced points per period, and N samples suffice to describe it in one domain when N coefficients suffice to describe it in the other domain. This works because of the sampling theorem, which is another way to look at the Poisson summation formula. I would say that the Poisson summation formula is the most fundamental mathematical justification for the sampling theorem. 71.169.191.235 (talk) 22:22, 16 October 2010 (UTC)
Answering myself: Richard Hamming in "Digital filters" proves this relation in section 10.3 "Relation between discrete and continuous expansion" (my translation from German back to English) but does not give it a name. HenningThielemann (talk) 16:29, 1 November 2010 (UTC)
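The relation described here (sampling a periodic function aliases its Fourier coefficients modulo n) is easy to illustrate numerically. A hypothetical sketch, using the Poisson kernel as an example of a periodic function with known coefficients $c_m = a^{|m|}$:

```python
import cmath

a = 0.5

def c(m):
    # Fourier coefficients of g(x) = sum_m a^{|m|} e^{2 pi i m x} (Poisson kernel)
    return a ** abs(m)

def g(x, M=60):
    # Evaluate the periodic function by truncating its Fourier series
    return sum(c(m) * cmath.exp(2j * cmath.pi * m * x) for m in range(-M, M + 1)).real

n = 8  # number of samples per period

# Normalized DFT of n equally spaced samples of g
dft = [sum(g(j / n) * cmath.exp(-2j * cmath.pi * k * j / n) for j in range(n)) / n
       for k in range(n)]

# Periodic summation of the Fourier coefficients: sum of c(m) over m = k (mod n)
periodized = [sum(c(k + n * r) for r in range(-20, 21)) for k in range(n)]

for k in range(n):
    assert abs(dft[k] - periodized[k]) < 1e-9
```

The DFT bin k picks up every coefficient $c_m$ with $m \equiv k \pmod n$, which is exactly the periodic-summation relation asked about above.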

## Simplification of the summation formula

The simplest form of the Poisson summation formula seems to me to be:

$\sum_{n=-\infty}^\infty f(n)=\sum_{k=-\infty}^\infty \hat f(k) .$

This is equation 1 with $t=0,\ T=1$, or equation 2 with $T=1$. It looks more symmetric and thus nicer to me. Equations 1 and 2 can be trivially derived from this form by applying the laws that translating a function modulates its spectrum, and that shrinking a function in time stretches its spectrum in frequency. HenningThielemann (talk) 16:34, 1 November 2010 (UTC)
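This special case is easy to test numerically with a Gaussian, whose transform is known in closed form. A hypothetical sketch, assuming the convention $\hat f(\xi)=\int f(x)e^{-2\pi i x\xi}\,dx$:

```python
import math

def f(x, s):
    # Gaussian f(x) = exp(-pi * s * x^2)
    return math.exp(-math.pi * s * x * x)

def f_hat(xi, s):
    # Its Fourier transform: f_hat(xi) = s^{-1/2} exp(-pi * xi^2 / s)
    return math.exp(-math.pi * xi * xi / s) / math.sqrt(s)

s = 0.5
# sum_n f(n) should equal sum_k f_hat(k); truncation error is negligible
lhs = sum(f(n, s) for n in range(-50, 51))
rhs = sum(f_hat(k, s) for k in range(-50, 51))
assert abs(lhs - rhs) < 1e-12
```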

Interesting point. But I would disagree with reducing the general form to just the special case, because most people who need the equation will need the general form, and the derivation won't be obvious to all of them, whereas anyone who prefers the special case can easily derive it (as you have just described).
--Bob K (talk) 05:15, 2 November 2010 (UTC)
What a difference a day makes. What I now think is that we should start out just as you suggest and prove the simpler version of the theorem. Then generalize it using the properties of the Fourier transform.
--Bob K (talk) 17:32, 3 November 2010 (UTC)
Yes, that's what I wanted to propose. HenningThielemann (talk) 10:04, 9 November 2010 (UTC)
Well, something valuable (like a proof) was lost with that edit. Why would you guys destroy the value that this article previously had? Now it just relies on faith in someone else's authority that the Fourier transform of a Dirac comb is another Dirac comb. Before, the Poisson summation formula was proven here; now it isn't.
A shame. 71.169.179.128 (talk) 02:14, 27 July 2012 (UTC)
Let's at least talk about the right thing. The Dirac comb "proof" is just a footnote. What the article currently states is that all that really needs proof is:

$\sum_{n=-\infty}^\infty f(n)=\sum_{k=-\infty}^\infty \hat f\left(k\right),$

(Eq.1)

and the rest follows from two well known Fourier transform properties, which do not need to be re-proven. The objection to the 19-Oct-2010 article is that the proof is unnecessarily general, therefore unnecessarily complex. --Bob K (talk) 14:47, 29 July 2012 (UTC)

## Distributional formulation

What does "applying Eq.4 to ƒ" mean? How does one do that?

--Bob K (talk) 21:20, 4 November 2010 (UTC)

A distribution is a function not on the position space, but on the space of test functions over that same space. As such, it has to be linear, and bounded in a special sense. Applying a distribution to a test function means to evaluate the distribution at the test function. For instance one has that, as definition,

$\langle \delta_a,f\rangle=\langle \delta(x-a),f(x)\rangle=f(a)$
$\langle T',f\rangle=-\langle T,f'\rangle$ for any distribution T
$\langle \hat T,f\rangle=\langle T,\hat f(-x)\rangle$ for any tempered distribution T and Schwartz test function f

etc., so that the evaluation looks like the normal Euclidean scalar product, especially if the distribution happens to be a test function itself. Then

$\langle \Delta_T,f\rangle =\sum_{k\in\Z}\langle\delta_{kT},f\rangle =\sum_{k\in\Z}\langle\delta(x-kT),f(x)\rangle =\sum_{k\in\Z}f(kT)$
$\langle \widehat{\Delta_{1/T}},f\rangle =\sum_{k\in\Z}\langle\delta_{k/T},\hat f(-x)\rangle =\sum_{k\in\Z}\hat f(-k/T) =\sum_{k\in\Z}\hat f(k/T)$

In mathematics, the integral formulation of the evaluation of a distribution is frowned upon. It is only used if the distribution really is a normal function.--LutzL (talk) 10:05, 5 November 2010 (UTC)

Thanks. Apparently the key is
$\langle \hat T,f\rangle=\langle T,\hat f(-x)\rangle$ for any tempered distribution T and Schwartz test function f, but I don't have the background to internalize it. Anyhow, would it make sense to add this to the article?
--Bob K (talk) 12:19, 5 November 2010 (UTC)
No, it (the derivation part) *should* be in the unreadable section on "distributions and the Fourier transform" in the distribution (mathematics) article. Note that they do use a version without reflection. But since $\Z$ is invariant under reflection, either convention works. One can derive my convention by treating the pairing as a Hermitian scalar product and writing out the integrals in $\textstyle\langle \hat f,g\rangle=\int_\R \hat f(x)\overline{g(x)}dx$. The other one results from treating the pairing as a bilinear form without complex conjugation (which some would say is more reasonable for a pairing, and anyway, complex-valued distributions are rarely used outside the Fourier transform), writing out the integrals in $\textstyle\langle \hat f,g\rangle=\int_\R \hat f(x)g(x)dx$, and in both cases applying Fubini's theorem on the change of integration order.--LutzL (talk) 17:59, 5 November 2010 (UTC)

I think the wording "applying to" is a little awkward. Maybe "pairing with a test function" is better? Sławomir Biały (talk) 14:28, 5 November 2010 (UTC)

## Recent edits

Recent edits have (further) obscured the conditions under which the Poisson summation formula holds, by burying these conditions in a lengthy "derivations" section. I suggest that this content should be reorganized into a separate section of its own. Sławomir Biały (talk) 18:40, 21 September 2012 (UTC)

Oops... sorry. I was hoping for a better review. I was trying out an idea from another discussion (Talk:Nyquist-Shannon_sampling_theorem#Narrow_POV): "I think you should cover it all here; start the article off as simply as possible, and then gradually wind the complexity (and mathematics) level up towards the end.TooComplicated (talk) 14:47, 17 September 2012 (UTC)" to see how it might work here.--Bob K (talk) 19:08, 21 September 2012 (UTC)
It's no big deal, but the article currently lacks a clear statement of a theorem useful for mathematicians. I propose that a new section break be added to the derivations section after the derivation, and possibly that a clear theorem be added there. Sławomir Biały (talk) 21:49, 23 September 2012 (UTC)

## Merge from Fourier transform

Some recent, very nice, content was added to Fourier transform. I think some of this should be incorporated here, in favor of a somewhat shorter section at Fourier transform. Sławomir Biały (talk) 20:19, 12 December 2014 (UTC)

doubtless, but it should not actually be removed from the article on the F.T. since people have been asking for more applications to be indicated in that article. The previous section was practically a stub. — Preceding unsigned comment added by 200.55.128.88 (talk) 20:35, 12 December 2014 (UTC)

The Poisson summation formula says that for an $L^1$ function $f$,

$\sum_n \hat f(n) = \sum_n f (n).$

This formula is true provided that $\hat f$ is also $L^1$ and that either sum converges to a continuous function.

Although it appears, at first sight, to be merely a curiosity, it has proven to be very useful and important in both engineering applications and number theory.

By using the functional properties of the Fourier transform under translation and scaling, the apparently more general formula

$\sum_n \hat f( a + n\lambda) = {\frac 1\lambda } \sum_n f \left({ n\over\lambda } \right)e^{- 2\pi i {n \over \lambda } a }$

follows immediately. In fact, using the behaviour of the Fourier transform of the translation of any function by $x_o$, one obtains the seemingly even more general

$\sum_n \hat f( a + n\lambda)\, e^{ 2 \pi i x_o \left( a + n\lambda \right) } = {\frac 1\lambda } \sum_n f \left( x_o + { n\over\lambda } \right)e^{- 2\pi i {n \over \lambda } a }.$

Poisson's summation formula is sometimes reformulated in terms of a distribution which can be constructed from the Dirac delta function. Write $\delta _y$ for the distribution which sends any test function $f$ to its value at $y$. Then $\sum_n \delta_n$ is sometimes called the Dirac Comb. The Poisson summation formula is equivalent to saying that the Fourier transform of the Dirac Comb is another Dirac comb. (The normalisation constants used in this article arrange that its Fourier transform is also $\sum_n \delta_n$.)

## Signal processing

A special case of the Poisson summation formula is, in the engineering literature, called the Nyquist sampling theorem. A function $f$ is said to be "band-limited" if no frequencies outside a finite interval contribute to its Fourier transform, i.e., there exists a cut-off frequency $f_o$ such that $\hat f(\xi) = 0$ unless $-f_o < \xi < f_o$. When the second version of the Poisson summation formula is applied to a band-limited function, there is only one term on the left-hand side if we set $\lambda = 2f_o$:

$\hat f (\xi) = {1\over 2f_o } \sum_n f \left({ n\over 2f_o } \right)e^{- 2\pi i {n \over 2f_o } \xi }$

which is an explicit formula for $\hat f$ that only depends on the values of $f$ at a discrete (but infinite) set of points, all of which are multiples of $1\over 2f_o$. In the statistical study of time series, if $f$ is a function of time, then looking only at its values at equally spaced points of time is called "sampling."

This theorem has the practical application that if we know the cut-off frequency for the Fourier transform of $f$, we can choose a sampling interval $1\over 2f_o$ (a sampling rate of $2f_o$, twice the cut-off) that will guarantee that no information is lost: since $\hat f$ can be reconstructed from these sampled values, then, by Fourier inversion, so can $f$.
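This can be illustrated numerically with $f(t)=\operatorname{sinc}^2(t)$, whose transform is the triangle function supported in $(-1,1)$, so the cut-off is $f_o = 1$ and the Nyquist rate is $2f_o = 2$ samples per unit time. A hypothetical sketch:

```python
import cmath, math

def f(t):
    # f(t) = sinc(t)^2 with sinc(t) = sin(pi t)/(pi t); its transform is the
    # triangle function tri(xi) = max(0, 1 - |xi|), band-limited to |xi| < 1
    if t == 0:
        return 1.0
    return (math.sin(math.pi * t) / (math.pi * t)) ** 2

f_o = 1.0          # cut-off frequency
rate = 2.0 * f_o   # Nyquist rate: samples per unit time

def f_hat_from_samples(xi, N=20000):
    # (1/2f_o) * sum_n f(n/2f_o) e^{-2 pi i n xi / 2f_o}, truncated at |n| <= N
    total = 0j
    for n in range(-N, N + 1):
        total += f(n / rate) * cmath.exp(-2j * math.pi * n * xi / rate)
    return total / rate

assert abs(f_hat_from_samples(0.25) - 0.75) < 1e-3   # tri(0.25) = 0.75
assert abs(f_hat_from_samples(0.5) - 0.5) < 1e-3     # tri(0.5)  = 0.5
```

The samples at half-integer times alone recover $\hat f$, as the formula promises.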

If $f$ is approximately band-limited, in the sense that there is some high-frequency cut-off beyond which $\hat f$ can be safely neglected, then this theorem allows us to choose a sampling rate that will not lose very much information.

In the case where $f$ is the result of some measurements, it is often the case that our measuring apparatus is approximately band-limited in the sense that it does not reproduce well any frequencies in $f$ higher than a certain cut-off. This approximate cut-off produced by the measurement apparatus can be found without studying the real process $f$, and so our measurements will be approximately band-limited, and this theorem will allow us to choose a sampling rate that will not lose very much more information, even if we know nothing about the real process being studied.

## Number theory

Poisson summation, as it is also called, is used in number theory to prove the transformation properties of theta functions and of automorphic forms, and hence the functional equations of zeta functions. (Pioneers in this include Fourier, Poisson, Jacobi, and Riemann.)

The theta function first studied by Fourier can be written

$1 + 2\sum_{n\ge 1} \cos(nx) e^{-n^2t}.$


When convolved with any function $f(x)$, it gives a solution of the heat equation taking on, when $t=0$, the values of $f$. It is also called the heat kernel.

In number theory, a slight variant of this function is studied for applications to modular forms: by putting $q= e^{i\pi \tau }$, for $\tau$ a complex number in the upper half plane, and $u$ a real (or complex) variable, we can define Jacobi's theta function:

$\Theta (u, \tau) = \sum_n q^{n^2} e^{2 \pi i n u}$ (from now on, summations are over all integers).

To illustrate the use of Poisson summation in number theory, we will consider the simpler function, of one variable, obtained by putting $u=0$. The relation between $\Theta (0, {-1\over\tau})$ and $\Theta (0, \tau)$ turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form. By choosing $f= e^{-\pi x^2}$ in the second version of the Poisson summation formula (with $a = 0$), and using the fact that $\hat f = e^{-\pi \xi ^2}$, one gets immediately

$\Theta (0, {-1\over\tau}) = \sqrt{ \tau \over i} \Theta (0, \tau)$

by putting ${1 \over \lambda} = \sqrt{ \tau \over i}$.

It follows from this that $\Theta^8$ has a simple transformation property under $\tau \mapsto {-1 \over \tau}$ and Jacobi was able to use this to find a formula for the number of different ways to express an integer as the sum of eight perfect squares. Riemann used the transformation of $\Theta(0,\tau)$ to prove the functional equation for the Riemann zeta function, and Hecke generalised this to the zeta function associated to a modular form.
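Jacobi's transformation can be checked numerically for purely imaginary $\tau = it$ with $t > 0$, where it reads $\theta(1/t)=\sqrt{t}\,\theta(t)$ with $\theta(t)=\sum_n e^{-\pi n^2 t}$. A hypothetical sketch:

```python
import math

def theta(t, N=100):
    # Theta(0, i t) = sum over all integers n of exp(-pi n^2 t), for t > 0;
    # the terms decay so fast that N = 100 is far more than enough
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, N + 1))

# Jacobi's transformation Theta(0, -1/tau) = sqrt(tau/i) Theta(0, tau),
# specialized to tau = i t: theta(1/t) = sqrt(t) * theta(t)
t = 0.7
assert abs(theta(1.0 / t) - math.sqrt(t) * theta(t)) < 1e-12
```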

A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups $G$ with a discrete subgroup $\Gamma$ such that $G/\Gamma$ has finite volume. For example, $G$ can be the real points of $GL_n$ and $\Gamma$ can be the integral points of $GL_n$. In this setting, $G$ plays the role of the real number line in the classical version of Poisson summation, and $\Gamma$ plays the role of the integers $n$ that appear in the sum. The generalised version of Poisson summation is called the Selberg trace formula; it can be used to calculate the dimensions of the space of modular forms, and it has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of the formula becomes a sum over irreducible unitary representations of $G$, and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of $\Gamma$, and is called "the geometric side." The Poisson summation formula is no longer considered a mere curiosity.