Talk:Poisson summation formula


How to prove a summation formula?

As indicated in the applications section, the PSF can be used to prove


\sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6}

Does anyone know the details of it? I tried it myself, but could not figure it out. Malaybear 11:08, 23 August 2007 (UTC)

Hi Malaybear, and welcome to Wikipedia!
I'm not sure how to prove it myself, but let's think it through together! :) Willow 11:19, 23 August 2007 (UTC)

Trial proof of summation formula

Here are the basic definitions from the article. The summation equals


S(t) \ \stackrel{\mathrm{def}}{=}\   
\sum_{n=-\infty}^{\infty} f(t + n T) = 
\frac{1}{T} \sum_{m=-\infty}^{\infty} \tilde{F}(m \omega_0) \ e^{i m \omega_0 t}

where the continuous Fourier transform is defined as

 \tilde{F}(\omega) \ \stackrel{\mathrm{def}}{=}\   \int_{-\infty}^{\infty} f(t) \ e^{-i\omega t} dt

and the fundamental frequency \omega_0 is

 \omega_0 \ \stackrel{\mathrm{def}}{=}\   \frac{2\pi}{T}.
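
The identity itself can be sanity-checked numerically before we use it. Here is a minimal Python sketch, assuming a Gaussian f(t) = e^{-t^2}, whose transform under the convention above is \tilde F(\omega) = \sqrt{\pi}\, e^{-\omega^2/4}; the values of T and t are arbitrary test choices:

import numpy as np

T = 1.5                      # arbitrary period
w0 = 2 * np.pi / T           # fundamental frequency
t = 0.3                      # arbitrary evaluation point
n = np.arange(-50, 51)       # truncation; the terms decay extremely fast

f = lambda u: np.exp(-u**2)
F = lambda w: np.sqrt(np.pi) * np.exp(-w**2 / 4)   # transform of f, by hand

lhs = np.sum(f(t + n * T))
rhs = np.real(np.sum(F(n * w0) * np.exp(1j * n * w0 * t))) / T
print(lhs, rhs)              # the two values agree to machine precision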

The summation formula suggests that we try setting


S_{0} = \sum_{n=-\infty}^{\infty} \frac{1}{b^{2} + n^{2}} = 
\frac{1}{b^{2}} + 2\sum_{n=1}^{\infty} \frac{1}{b^{2} + n^{2}}

and take the limit b → 0 at the end. We can write S_0 as a limit


S_{0} = \lim_{t \rightarrow 0} S(t) = \lim_{t \rightarrow 0} \sum_{n=-\infty}^{\infty} \frac{1}{b^{2} + \left(t + nT \right)^{2}}

where the period is T = 1, so that \omega_0 = 2\pi. Therefore we have


f(t) = \frac{1}{b^{2} + t^{2}}

which we may write as


f(t) =  
\frac{1}{\left( t + ib\right)\left( t - ib\right)} =
\left( \frac{1}{2ib} \right) \left[ \frac{1}{t - ib} - \frac{1}{t + ib}\right]

Using the residue theorem, the Fourier transform of this function is something like


\tilde{F}(\omega) = \left(\frac{\pi}{b} \right) e^{-b \left| \omega \right|}
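
That claimed transform is easy to sanity-check by direct numerical quadrature (a sketch; the test values b = 0.7 and \omega = 2 are arbitrary):

import numpy as np
from scipy.integrate import quad

b, w = 0.7, 2.0
# the sine part of exp(-i w t) integrates to zero by symmetry,
# so the cosine integral is the whole transform
val, _ = quad(lambda t: np.cos(w * t) / (b**2 + t**2), -np.inf, np.inf)
print(val, (np.pi / b) * np.exp(-b * abs(w)))   # both ~ 1.1067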

I'm not totally sure if this is right, but let's try it and see what happens; you can always put the correct constant in later. Plugging into the Poisson summation formula gives


S_{0} = \lim_{t \rightarrow 0} \frac{1}{T} \sum_{m=-\infty}^{\infty} \tilde{F}(m \omega_0) \ e^{i m \omega_0 t} = \sum_{m=-\infty}^{\infty} \tilde{F}(m \omega_0)

which reduces to a geometric series


S_{0} = \tilde{F}(0) + 2 \sum_{m=1}^{\infty} \tilde{F}(m \omega_0) = 
\left(\frac{\pi}{b} \right) \left[ 1 + 2 \sum_{m=1}^{\infty} e^{-b 2 \pi m} \right]

This equals


S_{0} = 
\left(\frac{\pi}{b} \right) \left[ -1 + 2 \sum_{m=0}^{\infty} e^{-b 2 \pi m} \right] =
\left(\frac{\pi}{b} \right) \left[ -1 + \frac{2}{1 - e^{-2\pi b}} \right]
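
The closed form can be compared against a direct evaluation of the sum (a sketch; b is arbitrary, and truncating the sum is harmless because the tail decays like 1/n^2):

import numpy as np

b = 0.5
n = np.arange(1, 200000)
direct = 1 / b**2 + 2 * np.sum(1 / (b**2 + n**2))
closed = (np.pi / b) * (-1 + 2 / (1 - np.exp(-2 * np.pi * b)))
print(direct, closed)   # both ~ 6.8505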

Willow 13:20, 23 August 2007 (UTC)

Define T_0 as the last term, and expand it for small b:

T_{0}=
\frac{2}{1 - e^{-2\pi b}} = 
\frac{2}{1-\left(1-2\pi b+2(\pi b)^2-\frac{4}{3}(\pi b)^3+\cdots\right)}
which can be written as

T_{0} = 
\frac{1}{\pi b} \left[ 1+\pi b-\frac{2}{3}(\pi b)^2+(\pi b)^2 + \cdots \right]
=
\frac{1}{\pi b} \left[ 1+\pi b+\frac{1}{3}(\pi b)^2 + \cdots \right] = 
\frac{1}{\pi b} + 1 + \frac{\pi b}{3} + \cdots
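
The expansion can be double-checked symbolically, for instance with sympy (a sketch):

import sympy as sp

b = sp.symbols('b', positive=True)
T0 = 2 / (1 - sp.exp(-2 * sp.pi * b))
print(sp.series(T0, b, 0, 2))
# prints the Laurent series 1/(pi*b) + 1 + pi*b/3 + O(b**2)
# (sympy may order the terms differently)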
Substituting the expansion into the original equation gives

S_{0} = 
\frac{\pi}{b}\left[ -1 +  \frac{1}{\pi b} + 1 + \frac{\pi b}{3} + \cdots \right]
=\frac{1}{b^2}+\frac{\pi^2}{3} + \cdots
Comparing this with the earlier expression

S_{0} = \sum_{n=-\infty}^{\infty} \frac{1}{b^{2} + n^{2}} = 
\frac{1}{b^{2}} + 2\sum_{n=1}^{\infty} \frac{1}{b^{2} + n^{2}}
and taking the limit as b goes to zero finishes the proof:

2\sum_{n=1}^{\infty} \frac{1}{b^{2} + n^{2}} = \frac{\pi^2}{3} + \cdots
so in the limit b \to 0 we get 2\sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^2}{3}, that is, \sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^2}{6}.
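
A direct numerical check of that conclusion (a sketch; truncating at N leaves an error of roughly 1/N):

import numpy as np

n = np.arange(1, 10**6)
print(np.sum(1.0 / n**2), np.pi**2 / 6)   # 1.644933..., 1.644934...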
—The preceding unsigned comment was added by Malaybear (talkcontribs) 14:22, August 23, 2007 (UTC).

Brilliant, Malaybear — well done! The power of working together clears up both of our confusions! :) Willow 16:47, 23 August 2007 (UTC)

Example

Does this help anyone understand the PSF?:

  • \sum_{k \in \mathbb Z} \frac{\gamma\left(\frac {1-s} 2, (k+ \nu_0)^2 \pi x \right)}{\left( (k+\nu_0)^2 \pi \right)^\frac {1-s} 2} e^{2 \pi i (k+\nu_0) t_0}=
\sum_{k \in \mathbb Z} 
\frac {\Gamma\left(\frac s 2, \frac{(k+ t_0)^2 \pi} x \right)} {\left( (k+t_0)^2 \pi \right)^{\frac s 2}}  e^{-2 \pi i k \nu_0} .

It does nothing for me.

--Bob K (talk) 19:46, 6 May 2009 (UTC)

over-generalization

User:A. Pichler would like to introduce this as a generalized statement of the PSF:

\sum_{n \in \mathbb Z} f\left((n-\tau_0) T\right)e^{-2 \pi i \nu_0 (n-\tau_0)}= 
\frac{1}{T} \sum_{\nu \in \mathbb Z} \hat f\left(\frac{\nu+\nu_0}{T} \right) e^{-2 \pi i \nu \tau_0}

because it's "useful".
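
(For what it's worth, the identity itself does check out numerically, e.g. under the ordinary-frequency convention \hat f(\xi)=\int f(x)\,e^{-2\pi i \xi x}\,dx with the self-dual Gaussian; the parameter values in this sketch are arbitrary.)

import numpy as np

f = fhat = lambda x: np.exp(-np.pi * x**2)   # self-dual under this convention
T, tau0, nu0 = 1.3, 0.25, 0.4                # arbitrary test parameters
n = np.arange(-40, 41)

lhs = np.sum(f((n - tau0) * T) * np.exp(-2j * np.pi * nu0 * (n - tau0)))
rhs = np.sum(fhat((n + nu0) / T) * np.exp(-2j * np.pi * n * tau0)) / T
print(abs(lhs - rhs))                        # ~ 1e-16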

That would be the same as introducing this useful formula as a generalized statement of the Fourier transform:

\int_{-\infty}^{\infty}\left[s(t) e^{i 2\pi f_o t}\right] e^{-i 2\pi f t}\, dt = S(f-f_o)\,

--Bob K (talk) 12:43, 7 May 2009 (UTC)

This is not a different form of the identity, but rather a simple application of the transformation law for the Fourier transform under translation:
\mathcal{F}(\tau_h f)(\xi) = e^{2\pi i h\cdot\xi} (\mathcal{F}f)(\xi)
as you say. Sławomir Biały (talk) 16:06, 15 May 2009 (UTC)

References

I had taken a bit of a wikibreak, and this article has been heavily edited since. That is great. But back when I was contributing to this article heavily, I made some attempt to add references, and I notice that many of the references I put in have drifted so that they are now attached to other statements. Has anyone been keeping up with the fact-checking to make sure we are still pointing people to the correct texts? I will do so tomorrow afternoon, my time, if no one else has gotten around to it. On a stylistic note, I have some preference for Harvard citations; unfortunately, back in this edit [1] the article was moved to a mix of footnote citations and Harvard citations. Thankfully someone made the article at least self-consistent. Are there any objections to me making the article consistent in the other direction? Thenub314 (talk) 21:56, 30 June 2009 (UTC)

A way to hedge against the kind of drift you are worried about is to be as specific as possible in the references. I generally like to cite a Theorem number if possible, or at least a section number. That also makes it clear which statement (or which part of the statement) the reference is intended to support. This is just advice: I don't universally practice what I preach in this regard. Best, Sławomir Biały (talk) 22:13, 3 July 2009 (UTC)

Examples

None of the following examples seem especially helpful to me. The first two and last have absolutely no context, and don't even seem worth attempting to understand. The "Translation" property is quite obvious (see above thread), and doesn't add much to the article when it is just dumped in somewhere. Sławomir Biały (talk) 03:12, 4 July 2009 (UTC)

  • \begin{align}\sum_{k\in \mathbb Z} e^{-k^2 \pi x} \frac{\left( k^2 \pi x \right)^s}{s!} &= \frac{1}{\sqrt x} \sum_{k\in \mathbb Z} e^{-\frac{k^2 \pi}{x}} L_s^{\left(-\frac{1}{2}\right)} \left(\frac{k^2 \pi}{x} \right)\\&= -\frac{\tan(s \pi)}{\sqrt x} \sum_{k\in \mathbb Z} \, L_{-s-\frac 1 2}^{\left(-\frac 1 2\right)} \left(-\frac{k^2 \pi}{x}\right),\end{align} or
  • \sum_{k\in \mathbb Z} e^{-k^2 \pi x} L_s^{(\alpha)} ( k^2 \pi x) = \frac{(-1)^s}{\sqrt x} \sum_{k\in \mathbb Z} e^{-\frac{k^2 \pi}{x}} L_s^{\left(-s-\alpha-\frac{1}{2}\right)} \left(\frac{k^2 \pi}{x} \right),

where L is the Laguerre function (which reduces to a polynomial of degree s when s is an integer);

  • Translation: \sum_{n \in \mathbb Z} f\left((n-\tau_0) T\right)e^{-2 \pi i \nu_0 (n-\tau_0)}= 
\frac{1}{T} \sum_{\nu \in \mathbb Z} \hat f\left(\frac{\nu+\nu_0}{T} \right) e^{-2 \pi i \nu \tau_0};
  • \sum_{k \in \mathbb Z} \frac{\gamma\left(\frac {1-s} 2, (k+ \nu_0)^2 \pi x \right)}{\left( (k+\nu_0)^2 \pi \right)^\frac {1-s} 2} \, e^{-2 \pi i k \tau_0}=
\sum_{k \in \mathbb Z} 
\frac {\Gamma\left(\frac s 2, \frac{(k- \tau_0)^2 \pi} x \right)} {\left( (k- \tau_0)^2 \pi \right)^{\frac s 2}} \, e^{-2 \pi i (k- \tau_0) \nu_0},

where \Gamma and \gamma denote the upper and lower incomplete gamma functions. Notably, the series converge and coincide, although the functions' respective Fourier transforms exist, because of the singularity, only in a formal sense.

Going from Fourier series to discrete Fourier transform

The Poisson summation formula tells me how to come from the continuous Fourier transform to a Fourier series, namely by periodic summation in time. Now I can also get from the Fourier series to the discrete Fourier transform by periodic summation of the Fourier coefficients: if I sum the coefficients periodically with period n, then this corresponds to a discrete Fourier transform of the periodic function sampled at n equally spaced points. What is the name of this relation? HenningThielemann (talk) 19:14, 16 October 2010 (UTC)

You might want to take this to the USENET newsgroup comp.dsp. I would say that the DFT fundamentally is a bijective mapping from a discrete, periodic sequence of infinite length and period N to another sequence of the same period. You "get there" by sampling a continuous periodic function that is bandlimited. Being periodic, it is described by a Fourier series, and being bandlimited, only a finite set of coefficients are non-zero and carry information. You sample it at N equally spaced points per period, and N samples suffice to describe it in one domain when N coefficients suffice to describe it in the other domain. This works because of the sampling theorem, which is another way of looking at the Poisson summation formula. I would say that the Poisson summation formula is the most fundamental mathematical justification for the sampling theorem. 71.169.191.235 (talk) 22:22, 16 October 2010 (UTC)
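The folding relation described above is easy to see numerically: sampling a period-1 trigonometric polynomial at N points and taking the DFT returns N times its Fourier coefficients, reduced modulo N (a sketch; the coefficients are arbitrary, and N is chosen large enough that no actual folding occurs):

import numpy as np

N = 8
c = {-2: 0.5, 0: 1.0, 3: 0.25j}    # the nonzero Fourier coefficients c_k
t = np.arange(N) / N
x = sum(ck * np.exp(2j * np.pi * k * t) for k, ck in c.items())

X = np.fft.fft(x)                  # DFT, kernel exp(-2 pi i k n / N)
for k, ck in c.items():
    print(k, X[k % N] / N, ck)     # X[k mod N] / N recovers each c_k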
Answering myself: Richard Hamming in "Digital filters" proves this relation in section 10.3 "Relation between discrete and continuous expansion" (my translation from German back to English) but does not give it a name. HenningThielemann (talk) 16:29, 1 November 2010 (UTC)

Simplification of the summation formula

The simplest form of the Poisson summation formula seems to me to be:

\sum_{n=-\infty}^\infty f(n)=\sum_{k=-\infty}^\infty \hat f(k) .

This is equation 1 with t=0,\ T=1, or equation 2 with T=1. It looks more symmetric and thus nicer to me. Equations 1 and 2 can be trivially derived from this form by applying the laws that translating a function modulates its spectrum, and that shrinking a function in the time direction stretches its spectrum in the frequency direction. HenningThielemann (talk) 16:34, 1 November 2010 (UTC)
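A quick numerical check of this special case (a sketch; under the convention \hat f(s)=\int f(x)\,e^{-2\pi i s x}\,dx, the transform of e^{-x^2} is \sqrt{\pi}\,e^{-\pi^2 s^2}):

import numpy as np

n = np.arange(-30, 31)
lhs = np.sum(np.exp(-n**2.0))
rhs = np.sum(np.sqrt(np.pi) * np.exp(-np.pi**2 * n**2.0))
print(lhs, rhs)   # both ~ 1.7726372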

Interesting point. But I would disagree with reducing the general form to just the special case, because most people who need the equation will need the general form, and the derivation won't be obvious to all of them, whereas anyone who prefers the special case can easily derive it (as you have just described).
--Bob K (talk) 05:15, 2 November 2010 (UTC)
What a difference a day makes. What I now think is that we should start out just as you suggest and prove the simpler version of the theorem. Then generalize it using the properties of the Fourier transform.
--Bob K (talk) 17:32, 3 November 2010 (UTC)
Yes, that's what I wanted to propose. HenningThielemann (talk) 10:04, 9 November 2010 (UTC)
Well, something valuable (like a proof) was lost with that edit. Why would you guys destroy the value that this article previously had? Now it just relies on faith in someone else's authority that the Fourier transform of a Dirac comb is another Dirac comb. Before, the Poisson summation formula was proven here; now it isn't.
A shame. 71.169.179.128 (talk) 02:14, 27 July 2012 (UTC)
Let's at least talk about the right thing. The Dirac comb "proof" is just a footnote. What the article currently states is that all that really needs proof is:

\sum_{n=-\infty}^\infty f(n)=\sum_{k=-\infty}^\infty \hat f\left(k\right) \qquad (Eq.1)

and the rest follows from two well-known Fourier transform properties, which do not need to be re-proven. The objection to the 19-Oct-2010 version of the article is that its proof is unnecessarily general, and therefore unnecessarily complex. --Bob K (talk) 14:47, 29 July 2012 (UTC)

Distributional formulation

What does "applying Eq.4 to ƒ" mean? How does one do that?

--Bob K (talk) 21:20, 4 November 2010 (UTC)

A distribution is a function not on the position space, but on the space of test functions over that same space. As such, it has to be linear and bounded in a special sense. Applying a distribution to a test function means evaluating the distribution at the test function. For instance, one has, by definition,

\langle \delta_a,f\rangle=\langle \delta(x-a),f(x)\rangle=f(a)
\langle T',f\rangle=-\langle T,f'\rangle for any distribution T
\langle \hat T,f\rangle=\langle T,\hat f(-x)\rangle for any tempered distribution T and Schwartz test function f

etc., so that the evaluation looks like the normal "Euclidean" scalar product, especially if the distribution happens to be a test function itself. Then


  \langle \Delta_T,f\rangle
 =\sum_{k\in\Z}\langle\delta_{kT},f\rangle
 =\sum_{k\in\Z}\langle\delta(x-kT),f(x)\rangle
 =\sum_{k\in\Z}f(kT)

  \langle \widehat{\Delta_{1/T}},f\rangle
 =\sum_{k\in\Z}\langle\delta_{k/T},\hat f(-x)\rangle
 =\sum_{k\in\Z}\hat f(-k/T) =\sum_{k\in\Z}\hat f(k/T)

In mathematics, the integral formulation of the evaluation of a distribution is frowned upon. It is only used if the distribution really is a normal function.--LutzL (talk) 10:05, 5 November 2010 (UTC)
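
These pairing rules can be watched in action numerically. For instance, taking T to be the Heaviside step function, \langle T',f\rangle=-\langle T,f'\rangle=f(0), i.e. T'=\delta (a sketch; the test function is an arbitrary choice):

import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(-x**2) * (1 + x)                 # arbitrary test function
fprime = lambda x: np.exp(-x**2) * (1 - 2 * x * (1 + x))

val, _ = quad(fprime, 0, np.inf)   # <T, f'> with T the Heaviside step
print(-val, f(0.0))                # both 1.0: <T', f> evaluates f at 0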


Thanks. Apparently the key is
\langle \hat T,f\rangle=\langle T,\hat f(-x)\rangle for any tempered distribution T and Schwartz test function f, but I don't have the background to internalize it. Anyhow, would it make sense to add this to the article?
--Bob K (talk) 12:19, 5 November 2010 (UTC)
No, it (the derivation part) *should* be in the unreadable section on "distributions and the Fourier transform" in the distribution (mathematics) article. Note that they do use a version without reflection. But since \Z is invariant under reflection, either convention works. One can derive my convention by treating the pairing as a Hermitian scalar product and writing out the integrals in \textstyle\langle \hat f,g\rangle=\int_\R \hat f(x)\overline{g(x)}dx. The other one results from treating the pairing as a bilinear form without complex conjugation (which some would say is more reasonable for a pairing; in any case, complex-valued distributions are rarely used outside Fourier theory), writing out the integrals in \textstyle\langle \hat f,g\rangle=\int_\R \hat f(x)g(x)dx, and in both cases applying Fubini's theorem to exchange the order of integration.--LutzL (talk) 17:59, 5 November 2010 (UTC)
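
For ordinary functions, the convention with reflection can be checked by brute force, approximating \hat f(s)=\int f(x)\,e^{-2\pi i s x}\,dx by a Riemann sum on a symmetric grid (a sketch; the grid and test functions are arbitrary choices). The two finite double sums turn out to agree term by term, which is exactly the Fubini argument in discrete form:

import numpy as np

x = np.linspace(-8, 8, 1601)      # symmetric grid, so x[::-1] == -x
dx = x[1] - x[0]
f = np.exp(-x**2)
g = np.exp(-(x - 0.5)**2) * (1 + 1j * x)

def ft(h):   # hhat sampled on the same grid, by a plain Riemann sum
    return np.array([np.sum(h * np.exp(-2j * np.pi * s * x)) * dx for s in x])

lhs = np.sum(ft(f) * np.conj(g)) * dx         # <fhat, g>, hermitian pairing
rhs = np.sum(f * np.conj(ft(g)[::-1])) * dx   # <f, ghat(-x)>
print(abs(lhs - rhs))                         # pure roundoff, ~ 1e-13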


I think the wording "applying to" is a little awkward. Maybe "pairing with a test function" is better? Sławomir Biały (talk) 14:28, 5 November 2010 (UTC)

Recent edits

Recent edits have (further) obscured the conditions under which the Poisson summation formula holds, by burying these conditions in a lengthy "derivations" section. I suggest that this content should be reorganized into a separate section of its own. Sławomir Biały (talk) 18:40, 21 September 2012 (UTC)

Oops... sorry. I was hoping for a better review. I was trying out an idea from another discussion (Talk:Nyquist-Shannon_sampling_theorem#Narrow_POV): "I think you should cover it all here; start the article off as simply as possible, and then gradually wind the complexity (and mathematics) level up towards the end.TooComplicated (talk) 14:47, 17 September 2012 (UTC)" to see how it might work here.--Bob K (talk) 19:08, 21 September 2012 (UTC)
It's no big deal, but the article currently lacks a clear statement of a theorem useful for mathematicians. I propose that a new section break be added to the derivations section after the derivation, and possibly that a clear theorem statement be added there. Sławomir Biały (talk) 21:49, 23 September 2012 (UTC)