Talk:Nyquist–Shannon sampling theorem: Difference between revisions

From Wikipedia, the free encyclopedia
Revision as of 19:12, 26 July 2012


Sentence from intro removed

I removed the following sentence from the introductory section. It is not really related to the Nyquist-Shannon theorem and furthermore it is false.

A signal that is bandlimited is constrained in how rapidly it changes in time, and therefore how much detail it can convey in an interval of time.

Using results from Robert M. Young, An Introduction to Nonharmonic Fourier Series, Academic Press, 1980, one can show without much trouble that the following is true:

For every B>0, every f∈L2([a,b]) and every ε>0, there exists a function g∈L2(R) which is band-limited with bandwidth at most B and such that ‖f−g‖_{L2([a,b])} < ε.

So band-limited functions can change extremely rapidly and can convey arbitrarily large amounts of detail in a given interval, as long as one doesn't care about what happens outside of the interval. AxelBoldt (talk) 22:56, 15 October 2011 (UTC)[reply]

Your point is taken, and the sentence should probably be removed (if not reworded). However, I think your example might actually weaken your argument. After all, g is not chosen uniformly over all B, a, and b. Moreover, your f is taken from L2, which constrains the behavior of the function substantially. So even though the wording of the phrase you removed was poor, I think there is still a relevant sentiment which could be re-inserted that does not go against your example (perhaps something about the information content of a bandlimited signal being captured entirely (and thus upper bounded) by a discrete set of samples with certain temporal characteristics). —TedPavlic (talk/contrib/@) 05:09, 16 October 2011 (UTC)[reply]
I've never liked that sentence much either, since it has no definite meaning. Even the information rate is not limited to be proportional to B, unless you include noise, so it's not clear what is intended by "how much detail it can convey". Dicklyon (talk) 05:13, 16 October 2011 (UTC)[reply]
  • g is not chosen uniformly over all B, a, and b.
True, g must depend on B, the bandwidth we desire, and on a and b, since that's the time-interval we are looking at. In a sense that is the whole point: if you focus solely on one time interval, any crazy behavior can be prescribed there for a band-limited function, and furthermore you can require the bandwidth to be as small as you want.
  • f is taken from L2, which constrains the behavior of the function substantially
That's correct, but L2[a,b] has a lot of detailed and extremely rapidly changing stuff in it. For example, you could encode all of Wikipedia as a bit string in an L2[0,1] function, where a 1 is encoded as a +∞ singularity and a 0 as a -∞ singularity. Choosing your ε wisely, you will find a band-limited g (with bandwidth as small as you want!) that still captures all the craziness that is Wikipedia.
AxelBoldt (talk) 18:44, 16 October 2011 (UTC)[reply]

No, the point is that "constrained in how rapidly it changes in time" relates to the size of the function. And indeed, the L2-norm of the derivative of a band-limited function (in fact, of any of its derivatives) is bounded by the product of (a power of) the bandwidth and the L2-norm of the function itself.
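
One way to make this precise, assuming the convention F(ω) = ∫ f(t) e^(−iωt) dt and supp F ⊆ [−σ, σ] with σ = 2πB the radian bandwidth, is the Plancherel computation

\[
  \| f^{(n)} \|_{L^2}^{2}
  \;=\; \frac{1}{2\pi} \int_{-\sigma}^{\sigma} |\omega|^{2n}\, |F(\omega)|^{2}\, d\omega
  \;\le\; \sigma^{2n}\, \| f \|_{L^2}^{2},
  \qquad\text{i.e.}\qquad
  \| f^{(n)} \|_{L^2} \le (2\pi B)^{n}\, \| f \|_{L^2} .
\]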

Or the other way around: given such a band-limited approximation for the restriction to an interval, the behavior outside of the interval can and typically will be explosive. And more so with increasing accuracy of the approximation--LutzL (talk) 15:32, 22 November 2011 (UTC)[reply]

Question

Isn't it the case that in practice, due to the possibility of accidentally sampling the ‘nodes’ of a wave, frequencies near the limit will suffer on average an effective linear volume reduction of 2/pi? — Preceding unsigned comment added by 82.139.90.173 (talk) 04:57, 6 March 2012 (UTC)[reply]

In practice, "the limit" is chosen significantly above the highest frequency in the passband of the anti-aliasing filter, to accommodate the filter's skirts. So I think the answer is "no". And I have no clue how you arrived at the 2/π factor. It might help to explain that.
--Bob K (talk) 05:42, 6 March 2012 (UTC)[reply]

It depends on the filters used. If you reconstruct with square pulses instead of sincs (or zero-order hold instead of impulses into a sinc filter), then you get a rolloff at Nyquist that's equal to an amplitude gain of 2/pi, which comes from evaluating the sinc in the frequency domain, since that's the transform of the rect. It's nothing to do with "accidentally sampling the nodes". Dicklyon (talk) 05:50, 6 March 2012 (UTC)[reply]
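
A quick numerical check of that figure (a minimal sketch; the sample rate and test frequencies are arbitrary choices, and numpy's sinc is the normalized one, sin(πx)/(πx)):

    import numpy as np

    fs = 48000.0                                # sample rate in Hz (arbitrary)
    f = np.array([1000.0, 12000.0, fs / 2])     # test frequencies, the last at Nyquist

    # Amplitude response of a zero-order hold (rectangular pulses of width 1/fs),
    # normalized to unit gain at DC: |H(f)| = |sinc(f/fs)|, the magnitude of the
    # Fourier transform of the rect.
    gain = np.abs(np.sinc(f / fs))
    print(gain)                                 # last entry: sin(pi/2)/(pi/2) = 2/pi ≈ 0.6366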

New section by Ytw1987

New editor User:Ytw1987 has been adding a bunch of stuff on nonuniform sampling and nonuniform DFT here and elsewhere, all sourced to one book by Marvasti. It's probably not bad stuff, but it's big and complicated, not well wikified, badly styled, and smacks of WP:SPA or WP:COI. If someone else has the time to help assess the new material, and advise him on how to make it more suitable, that would be great. Dicklyon (talk) 19:17, 4 July 2012 (UTC)[reply]

The new material is now in Nonuniform sampling, which seems like a more appropriate place for it. It needs work, if anyone is up for it. Dicklyon (talk) 23:51, 5 July 2012 (UTC)[reply]

Good solution. --Bob K (talk) 15:17, 6 July 2012 (UTC)[reply]

Issues with section on Shannon's proof

There are some issues with the proof outlined in the section. It is not clear what is assumed about the function f. The context is the Hilbert space L^2(R) but, a priori, the argument doesn't hold for elements of L^2(R). For one thing, pointwise evaluation doesn't make sense for elements of L^2. Also, the very first equation

f(t) = (1/2π) ∫_{−2πW}^{2πW} F(ω) e^{iωt} dω

assumes the Fourier inversion formula holds for f, which again does not hold for general elements of L^2. For a counterexample, take the sinc function; the integral does not converge. This only works if f is assumed to have slightly better decay at infinity, e.g. to be in L^1.
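
For concreteness, with the normalized sinc, sin(πt)/(πt), the function is square-integrable but not absolutely integrable:

\[
  \int_{\mathbb{R}} \left|\frac{\sin \pi t}{\pi t}\right|^{2} dt = 1,
  \qquad
  \int_{\mathbb{R}} \left|\frac{\sin \pi t}{\pi t}\right| dt
  \;\ge\; \sum_{n=1}^{\infty} \frac{1}{\pi n} \int_{n-1}^{n} |\sin \pi t|\, dt
  \;=\; \sum_{n=1}^{\infty} \frac{2}{\pi^{2} n} \;=\; \infty .
\]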

This can be cleaned up as follows:

If f in L^2 has Fourier transform lying in the Hilbert subspace L^2([−W, W]) ⊂ L^2(R) (i.e. the transform vanishes outside [−W, W]), then the well-definedness of the Fourier transform implies that f = g almost everywhere for a continuous function g.
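
Concretely, writing \hat f for that transform (ordinary-frequency convention, as in the computation further below), the continuous representative can be taken to be

\[
  g(t) = \int_{-W}^{W} \hat f(\nu)\, e^{2\pi i \nu t}\, d\nu ,
\]

which is continuous by dominated convergence, since \hat f ∈ L^1([−W, W]) by Cauchy–Schwarz.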


The Stone-Weierstrass theorem shows the family {e_n(ν) = (2W)^(−1/2) e^(−iπnν/W) : n ∈ Z} is an orthonormal basis for L^2([−W, W]). So their inverse Fourier transforms form an orthonormal basis for the space of L^2 functions with bandwidth limit W.
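
Orthonormality of that family is a direct computation (Stone-Weierstrass is only needed for completeness):

\[
  \int_{-W}^{W} \frac{e^{-i\pi n \nu / W}}{\sqrt{2W}}\;
  \overline{\left(\frac{e^{-i\pi m \nu / W}}{\sqrt{2W}}\right)}\, d\nu
  \;=\; \frac{1}{2W} \int_{-W}^{W} e^{-i\pi (n-m)\nu/W}\, d\nu
  \;=\; \begin{cases} 1, & n = m,\\ 0, & n \ne m. \end{cases}
\]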


One then directly computes the Fourier coefficients in the t-domain, obtaining the L^2-series

f(t) = Σ_{n∈Z} f(n/(2W)) sinc(2Wt − n).

I have a reference somewhere that says the equality in fact holds pointwise, but I am not sure how that goes.
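
Written out under the convention f(t) = ∫_{−W}^{W} \hat f(ν) e^{2πiνt} dν (bandwidth W in hertz; Shannon's formula above with ω = 2πν), the coefficient computation is

\[
  \langle \hat f, e_n \rangle
  \;=\; \int_{-W}^{W} \hat f(\nu)\, \frac{e^{\,i\pi n \nu / W}}{\sqrt{2W}}\, d\nu
  \;=\; \frac{1}{\sqrt{2W}}\, f\!\left(\frac{n}{2W}\right),
\]

and expanding \hat f = Σ_n ⟨\hat f, e_n⟩ e_n and applying the inverse transform term by term gives the series above, with sinc(x) = sin(πx)/(πx).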

From the mathematical point of view, loosely speaking, the theorem holds because one is dealing with a compact set [-W, W] in the frequency domain. This leads to a situation similar to what we have for the circle, whose Pontryagin dual is the discrete set Z. Mct mht (talk) 00:37, 12 July 2012 (UTC)[reply]

If you think it's important to know what assumptions Shannon was making, it would be good to check his papers before just rewriting his proof and calling your proof his, no? Dicklyon (talk) 07:10, 19 July 2012 (UTC)[reply]
I don't want to get into an edit conflict. That section, as is, is not clean at all. Shannon, being an engineer, doesn't state any assumptions in his paper. It doesn't make sense to talk about a "proof" in the absence of even a clear statement. We don't call Fourier's justifications of theorems bearing his name "proofs" either, and wouldn't teach those "proofs" to students. It's questionable whether a word-by-word reading of Shannon's arguments belongs in the article.
Both the statement of the sampling theorem I gave and the proof outlined are very standard in the harmonic analysis literature. The article is currently mathematically wanting. Hopefully something will be done about it, while preserving other points of view. Mct mht (talk) 16:48, 19 July 2012 (UTC)[reply]

Signals are in practice continuous functions, and so is f. The Fourier integral exists for any compactly supported L2-function F; I don't get the insistence on L1 in this context, even if it is a tradition. The integral on the right-hand side gives a continuous function in t. (Again, F has compact support. This is the stated assumption.) -- Shannon was a mathematician; cryptography and cybernetics were still mathematical topics in his time (or Hardy would be an engineer too). The theorem and proof in his article are short sketches of commonly known facts, serving to introduce the concept of orthogonality of signals and the "dimension per time" of a band-limited transmission channel. As a sketch, his treatment of Fourier theory is exact enough. Please do also note that strict proofs that are drowned in technicalities are not covered by the guidelines of the mathematics project on Wikipedia. Short proofs or sketches that illuminate a topic are the exception.--LutzL (talk) 18:29, 19 July 2012 (UTC)[reply]

That any compactly supported L^2-function F also lies in L^1, by Hölder's inequality, is the point. If it's merely in L^2, then there is no inversion in the sense of the Fourier inversion formula. On L^2, the (inverse, in this case) Fourier transform is not given by a formula, but via a density argument. It is a fact (needed in this case) that on the intersection of L^1 and L^2, this agrees with the usual integral formula on L^1.
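
Spelled out, for F supported in [−W, W] (Hölder with p = q = 2, i.e. Cauchy–Schwarz):

\[
  \int_{-W}^{W} |F(\nu)|\, d\nu
  \;\le\; (2W)^{1/2} \left( \int_{-W}^{W} |F(\nu)|^{2}\, d\nu \right)^{1/2}
  \;<\; \infty .
\]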
Shannon's sketch indeed works with a little care, and it also happens to be pretty standard. That was the intention of the edit. Also, the proof doesn't come close to the "too technical" threshold, in my opinion: argue that the inversion works a la Shannon, identify a natural orthonormal basis using Stone-Weierstrass, go back to time domain, done. Short and sweet. Mct mht (talk) 13:31, 20 July 2012 (UTC)[reply]
There is no inversion involved; the formula is the definition of a band-limited function as the inverse Fourier transform of a compactly supported function. That the Fourier series is the representation in an orthogonal basis of L2([-W,W]) is a standard fact; there is, in this given context, nothing to "construct" and no need to invoke "Stone-Weierstrass" (which is the last part of one of the proofs of the completeness of the basis. A nice proof, but one that belongs in the Fourier series article. This is Wikipedia, this is the internet, links exist for a reason). So indeed you are trying to load up the proof, or sketch thereof, with unnecessary technical "graffiti".--LutzL (talk) 13:43, 20 July 2012 (UTC)[reply]
Of course there is inversion involved. Sure, the inverse Fourier integral is defined, since the spectrum lies in L^1 also. Inversion comes in precisely because one is trying to recover the original signal from the spectrum. Also, without knowing you have an orthonormal basis, simply taking it as a definition is pretty pointless; you can't even assume you are not losing any information on the spectrum in the L^2 sense. I am happy to leave the article alone, but that is ignorant. Mct mht (talk) 14:56, 20 July 2012 (UTC)[reply]

Common misconception surrounding digital audio

could someone add this to the article? basically, it says that most people think (I certainly did, and was surprised to learn otherwise) that sampling is by its very nature inexact (no doubt prompted by pictures where a stair-stepped, jaggedy line is overlaid on a smooth sinusoid), but the theorem says (it does, doesn't it?) that the digital signal contains just enough information to faithfully restore the analog signal. Уга-уга12 (talk) 19:06, 26 July 2012 (UTC)

I was going to put this into our List of common misconceptions article, but it said there that the misconception must be sourced both regarding the subject matter AND the fact that it's a misconception (but how do you prove something is a misconception short of conducting surveys on your own? Intuitively, however, it seems clear that a lot of people think this way about digitization). Уга-уга12 (talk) 19:12, 26 July 2012 (UTC)[reply]
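
The theorem's claim is easy to check numerically in the special case of a periodic, band-limited signal sampled over a whole number of periods, where ideal sinc interpolation reduces to zero-padding the DFT. A minimal sketch (the sample rate, tone frequencies and upsampling factor are arbitrary choices):

    import numpy as np

    fs, N = 8000, 8000                     # sample rate (Hz) and one second of samples
    t = np.arange(N) / fs
    # tones at 440 Hz and 2500 Hz, both well below the 4000 Hz Nyquist frequency
    x = np.sin(2*np.pi*440*t) + 0.5*np.sin(2*np.pi*2500*t)

    # "Reconstruct" at 4x the rate by zero-padding the spectrum, which is exact
    # for a periodic band-limited signal and stands in for ideal sinc interpolation.
    up = 4
    X = np.fft.rfft(x)
    Xpad = np.zeros(up*N//2 + 1, dtype=complex)
    Xpad[:X.size] = X
    y = np.fft.irfft(Xpad, n=up*N) * up    # rescale to account for the 1/n in irfft

    t_fine = np.arange(up*N) / (up*fs)
    x_fine = np.sin(2*np.pi*440*t_fine) + 0.5*np.sin(2*np.pi*2500*t_fine)
    print(np.max(np.abs(y - x_fine)))      # ~1e-12: exact up to rounding error

The discrepancy is at the level of floating-point rounding, not of the stair-step picture: with ideal reconstruction the samples determine the band-limited signal exactly, and the losses seen in practice come from quantization and imperfect filters rather than from sampling itself.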