Talk:Sampling (signal processing)/Archive 1


Suggested merge from Sampling (information theory)

If you don't agree that the two articles cover the same subject, please comment. -- Ravn 10:48, 23 February 2006 (UTC)

I think that they cover the same topic. —The preceding unsigned comment was added by 82.26.182.120 (talkcontribs) .
I would agree that they cover the same topic, and they should be merged. —The preceding unsigned comment was added by 82.26.186.171 (talkcontribs) .
Should be merged. —The preceding unsigned comment was added by Mumu Tanchistu (talkcontribs) .

Suggested merge from Digital sampling

It seems there is a consensus about merging with sampling (information theory). I propose also merging with digital sampling. All deal with the same topic, though digital sampling is more audio-oriented. If there are no objections, I will merge all three articles some time in the near future. I envision the audio-related stuff as a section in sampling (signal processing), which may some day grow into a separate article audio sampling (which currently is simply a redirect to digital sampling). --Zvika 17:38, 12 May 2006 (UTC)

Digital sampling is a special form of sampling

I don't agree that these two headings meant the same thing. Sampling, as I thought I went to some trouble to explain in the article, produces a set of analog samples. There is nothing inherently digital about the process of sampling. The samples are commonly represented digitally in modern electronics, but they don't have to be, and when they are a whole new set of artifacts is generated which should be analysed separately from basic sampling theory. Pulse width modulation, pulse duration modulation and pulse position modulation are all used as well as pulse amplitude modulation, all of which are non-digital in the sense that they do not use finite step increments. --Lindosland 20:15, 23 May 2006 (UTC)

I agree absolutely. Sampling in itself is not digital. If I understand correctly, the "artifacts" you speak of are a result of quantization (signal processing), a separate stage which, in modern devices, usually follows sampling. It's too bad we didn't have this discussion before I performed the merge, but nevertheless, here are my reasons for merging:
  1. There are already articles on sampling and on quantization, and it is redundant to have an additional article discussing both of them together.
  2. I went through the links to digital sampling. There were around 30, and almost all (maybe 90%) referred to sampling in the sense of sampling (music). These were clearly incorrect links, but they demonstrated the fact that this term is often used to mean something other than what you intended. Thus, the current redirection from digital sampling to the disambig page sampling is much more appropriate. (After the merge, I fixed all links to digital sampling.)
Finally, I'd just like to say that I think your article was good, and I did my best to move all of the material in it either to sampling (signal processing) or to quantization (sound processing), as appropriate. --Zvika 19:49, 25 May 2006 (UTC)

way more than 48 kHz is needed for digital audio

The article says: "The recent trend towards higher sampling rates, at two or four times this basic requirement, has not been justified theoretically, or shown to make any audible difference, even under the most critical listening conditions."

The guy fails to understand that the heaviest analog filters (Chebyshev) need at least one entire octave or more to prevent aliasing while achieving 70–80 dB of attenuation, which is the minimum acceptable. Not to mention that if we want a flatter passband, a Butterworth is needed, thus requiring an even higher sample rate. So if we hear 20 kHz at most, the analog anti-aliasing filter will have the signal more or less clean from 40 kHz onwards. For a 40 kHz Nyquist frequency, a sample rate of 80 kHz is needed. 96 kHz is a great sample rate because it allows a good transition band for the anti-aliasing filter and makes resampling from 48 kHz easy, although conversion from 44.1 kHz is not that easy. (It's a stupid thing that consumer music and professional audio have different sample rates, with a ratio that is difficult to deal with.)
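(For readers of this archive: the arithmetic behind the claim above, assuming a 20 kHz passband edge and a full octave of transition band for the analog anti-aliasing filter, is simply

<math>f_\text{stop} = 2 \times 20\,\text{kHz} = 40\,\text{kHz}, \qquad f_s \ge 2 f_\text{stop} = 80\,\text{kHz},</math>

which is where the 80–96 kHz figure comes from; a narrower transition band would of course allow a lower rate.)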

This argument has been going on for years without conclusive evidence. It really boils down to whether it is easier to just double the sample rate or build extremely steep analog filters, with the ever-decreasing cost of storage and high-frequency digital electronics winning over the more expensive filters in systems that try to appeal to 'golden ears.' True double-blind testing between 48k systems with the best filters and 96k systems does not reveal statistically detectable differences, thus disproving the theory that accurate reproduction above 20 kHz is necessary. Charlie Richmond 17:35, 15 November 2006 (UTC)
Nobody has used pure analog filters during sampling and reconstruction processes since the '80s. Steep analog filters are expensive, sub-optimal, unstable, and have lots of phase distortion. These were employed in the first CD players, but were quickly abandoned. For many years now, digital oversampling filters have been employed for these tasks. With them, it's easy to have 90 dB of attenuation over a couple of kHz, and with zero phase distortion. Only a gentle first-order lowpass filter is needed to eliminate the ultra-high frequencies remaining from this digital filtering stage.--KikeG (talk) 09:03, 13 April 2009 (UTC)


OK, Charlie, in the first place "extremely steep analog filters" simply don't exist; an eighth-order Chebyshev is already a heavy lowpass filter. Download FilterLab at http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en010007 and then design yourself one of those steep analog filters. And there is no need for "true double blind testing" or any scientific test, since this can be solved purely with mathematics. If your analog signal doesn't contain ultrasonics then there is no need for an anti-aliasing filter at all, so you can sample at 40,000 Hz. If the analog signal has some ultrasonics then a weak filter and 44,100/48,000 Hz will do. But if the signal is rich in ultrasonics, a proper filter and a higher sample rate are needed (vinyl records may exhibit strong ultrasonics because of needle friction, dirt, scratches, etc.). Of course you can record at 96,000 Hz and then resample to 48,000 Hz, leaving the audio up to 20 kHz intact, because digital filters vastly outperform analog filters. But resampling itself isn't 100% lossless, and these days more and more data can be stored on modern media, so why downsample? And anyway, oversampling to a higher sample rate to prevent the DAC hold effect is usually performed. (Gus, 24 jan 2007)

No contest

There's really no point in arguing here. All we have to do is to cite reliable sources on both sides of the argument, summarizing their points. No unsourced statements need be tolerated. Dicklyon 06:06, 25 January 2007 (UTC)

OK, Dicklyon, you asked for it: the source is "The Scientist and Engineer's Guide to Digital Signal Processing" by Steven W. Smith.

This book is free and can be downloaded at http://www.dspguide.com Chapter 3 - ADC and DAC:

"Unfortunately, even an 8 pole Chebyshev isn't as good as you would like for an antialias filter. For example, imagine a 12 bit system sampling at 10,000 samples per second. The sampling theorem dictates that any frequency above 5 kHz will be aliased, something you want to avoid. With a little guess work,you decide that all frequencies above 5 kHz must be reduced in amplitude by a factor of 100, insuring that any aliased frequencies will have an amplitude of less than one percent. Looking at Fig. 3-11c, you find that an 8 pole Chebyshev filter, with a cutoff frequency of 1 hertz, doesn't reach an attenuation (signal reduction) of 100 until about 1.35 hertz. Scaling this to the example, the filter's cutoff frequency must be set to 3.7 kHz so that everything above 5 kHz will have the required attenuation. This results in the frequency band between 3.7 kHz and 5 kHz being wasted on the inadequate roll-off of the analog filter. A subtle point: the attenuation factor of 100 in this example is probably sufficient even though there are 4096 steps in 12 bits. From Fig. 3-4, 5100 hertz will alias to 4900 hertz, 6000 hertz will alias to 4000 hertz, etc. You don't care what the amplitudes of the signals between 5000 and 6300 hertz are,because they alias into the unusable region between 3700 hertz and 5000 hertz.In order for a frequency to alias into the filter's passband (0 to 3.7 kHz), it must be greater than 6300 hertz, or 1.7 times the filter's cutoff frequency of 3700 hertz. As shown in Fig. 3-11c, the attenuation provided by an 8 pole Chebyshev filter at 1.7 times the cutoff frequency is about 1300, much more adequate than the 100 we started the analysis with. The moral to this story: In most systems, the frequency band between about 0.4 and 0.5 of the sampling frequency is an unusable wasteland of filter roll-off and aliased signals. This is a direct result of the limitations of analog filters."

Note that even with the trick of intentional aliasing between the last usable frequency and the Nyquist frequency, he only achieves a factor of 1300. That's about 62 dB of the 96 dB dynamic range of a 16-bit compact disc. 5000/3700 = 1.351, so 24000/1.351 = 17760 (24000 = 48000/2). I admit a cutoff frequency of 17760 Hz is not that bad for quality audio, and I admit my poor ears won't recognize the difference, but those weren't the goals; it was supposed to leave the frequency response flat up to 19 or 20 kHz. 62 dB of noise at 17760 Hz is probably inaudible too, but all of this plus the filter ripple leaves the signal somewhat dirty, at the limit of acceptability. If each stage of audio leaves the signal like that, at the end it will be heard. Gus, 25jan 2007
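(A quick numerical check of the figures quoted above is sketched below, using an analog 8-pole Chebyshev Type I design in SciPy. The 0.5 dB passband ripple is an assumption and may not match the ripple behind the book's Fig. 3-11c, so the attenuation factors only roughly reproduce the ~100 and ~1300 quoted.)

<syntaxhighlight lang="python">
# Sketch: attenuation of an 8-pole analog Chebyshev lowpass at 1.35x and 1.7x
# its cutoff frequency, for comparison with the factors quoted from dspguide.com.
# The 0.5 dB passband ripple is an assumed value, not taken from the book.
import numpy as np
from scipy.signal import cheby1, freqs

b, a = cheby1(N=8, rp=0.5, Wn=1.0, btype='low', analog=True)  # cutoff = 1 rad/s
w = np.array([1.35, 1.7])           # frequencies relative to the cutoff
_, h = freqs(b, a, worN=w)
attenuation = 1 / np.abs(h)         # "signal reduction" factor
print(attenuation)                  # roughly [1e2, 1.3e3]
print(20 * np.log10(attenuation))   # second value is close to the 62 dB figure above
</syntaxhighlight>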

Do I care? No. Just summarize the point with a reference in the article. And don't complain when someone also represents the alternate point of view with a reference. Dicklyon 16:06, 25 January 2007 (UTC)
Gus, you seem to be one of those audiophile guys that spends $100 on 24-carat gold-plated audio connectors because some geek website told you that it will make it sound better. While it's obvious that you have some knowledge relating to digital audio and audio filtering, your common sense and practicality are lacking. Have you ever tried listening to an 18 kHz tone? I'm not even 30 years old yet, and I cannot hear an 18 kHz tone unless it is pumped up about 20 dB higher than a 1 kHz tone. The audio that you are so desperately trying to faithfully reproduce is, for all practical purposes, completely inaudible. That is why a previous comment on this page referenced a double-blind study. If the majority of normal people cannot perceive a difference between 48 kHz and 96 kHz, then what is the point? I'm certainly not going to pay an extra few hundred bucks so that the audio coming out of my speakers is "mathematically correct", despite the fact that I can't perceive the difference.
Even your own explanation points out the futility: you say that your 8th-order Chebyshev filter effectively would limit the frequency response to 17760 Hz. You say that aliasing noise would be 62 dB down at 17760 Hz. SIXTY-TWO DECIBELS!!! That's the limit of acceptability? Not even taking into account the fact that the normal human ear response is already at least 20 dB down at 17760 Hz (according to ITU-R 468). Have you ever heard of masking? Let's say a CD is playing music that is generally between -10 dBFS and 0 dBFS (typical for moderately compressed modern music). Are you honestly telling me that a human being can perceive a -62 dBFS aliasing noise that is included with that -10 to 0 dBFS music? Superman cannot even perceive that. In fact, the aliasing noise could easily be at -30 dBFS and you still wouldn't perceive it.
I understand your points and agree that a 96 kHz sampling rate would more faithfully reproduce the full audible range of the recorded material than a 48 or 44.1 kHz sampling rate. However, the difference in the perceived increase of quality is so minuscule (if not totally imperceptible) that it is hardly worth the additional cost. And it's definitely not worth listening to you screaming that "way more than 48 kHz is needed for digital audio," and then trying to dazzle us with long-winded descriptions of complicated filters.

Snottywong 19:51, 27 March 2007 (UTC)

per Wikipedia:Civility#Removing uncivil comments I have made minor edits to Snottywong's comments, toning down the incivility without altering the meaning. Snottywong, I would suggest you edit it to tone it down even more, since this sort of tone seldom helps us get to consensus. Anyway, Gus's comment is ancient and didn't affect anything, so ignoring it might have been wiser. Dicklyon 03:00, 28 March 2007 (UTC)

Merge from Sample (signal)

This stub should be merged here. There is no reason for two separate pages both of which are on the sample disambiguation page. --Selket Talk 19:10, 6 April 2007 (UTC)

OK, I did the trivial merge. Feel free to tune it up. Dicklyon 03:15, 17 April 2007 (UTC)

Sampling rate for bandpass signals

I think some hints are needed in the section IF/RF (bandpass) sampling about how the relation was obtained. Also, the plot entitled "Plot of allowed sample rates (gray areas) versus the upper edge frequency for a band of width W = 1. The darker gray areas correspond to the condition with n = 0 in the equations of this section." does not have a title or labels on its axes. It is difficult to understand what it represents. —Preceding unsigned comment added by 193.252.48.40 (talk) 21:23, 6 January 2008 (UTC)

I made that plot after verifying the equations, but I'm not sure where they came from. I just modified the caption to help make it clearer. Here is a source with some simpler-looking formulae; maybe we should use those instead. See if they explain it. Dicklyon (talk) 23:30, 6 January 2008 (UTC)
Looks like I moved the math there from Nyquist–Shannon sampling theorem on 21 Aug 2006; it had been added there and modified a lot during Oct/Nov 2005, by LutzL and others. He mentioned in an edit summary that the source was in a link, so I chased it down to here. Too bad he didn't make it a ref, and it got lost; its equations look like those in the book I linked above. In this diff, LutzL morphed it into something like the present form, with the new n as well as the N that stood for the n in the refs. His edit summary said "hopefully simplified the section on undersampling", but I don't think so; it did give us the n that I used to make the dark-gray regions which are the lowest allowable sampling rates, but I think it's more complicated, and we'd be better off going back to something closer to the way the sources do it. Anyone want to work on this? Dicklyon (talk) 01:35, 7 January 2008 (UTC)
The first source you give only cites the inequality and does not show its origin. I also cannot understand the sketches on that page very clearly, even though I know what to look for. On the other hand, one should "streamline" this paragraph, so W should be replaced. Both in the book and in this paragraph the source of the problem is not much highlighted: that for real-valued signals, a frequency interval [L,H] implies a second part [-H,-L], and that shifts of both intervals by integer multiples of the sampling frequency may not overlap. The general theorem that covers this and a lot of other cases is Kluvánek's sampling theorem on LCA groups (by Igor Kluvánek; LCA = locally compact abelian groups, with discrete lattices therein).--LutzL (talk) 12:16, 7 January 2008 (UTC)
Thanks for that info. You're right that the sketches in that book are all screwed up. I'll work up an explanation and simplification based on your suggestions. If you have a ref or copy of something by Igor K. that I can talk about, let me know (e.g. a copy of the 5 stories article mentioned on his page). Dicklyon (talk) 16:29, 7 January 2008 (UTC)
You did very good work on that. Unfortunately I only have a paper copy of Higgins' paper... Correction: I found an e-version at Project Euclid; apparently it is "open access". Some hints to the nature of Kluvánek's theorem are in Poisson summation formula under "Generalizations". Radomir S. Stankovic: "Some Historic Remarks on Sampling Theorem" [PDF] has the statement, but no nontrivial examples. --LutzL (talk) 12:16, 8 January 2008 (UTC)
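(A note for readers of this archive: the non-overlap condition LutzL describes, namely that the band [L, H], its mirror image [−H, −L], and all of their shifts by integer multiples of the sample rate must not collide, leads to the common textbook form of the allowed bandpass sampling rates,

<math>\frac{2H}{n} \;\le\; f_s \;\le\; \frac{2L}{n-1}, \qquad n = 1, 2, \dots, \left\lfloor \frac{H}{H-L} \right\rfloor,</math>

where n = 1 recovers the ordinary baseband condition f_s ≥ 2H. The notation here may differ from both the linked sources and the article version under discussion.)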

PCM

What is the relation with PCM? 82.75.140.46 (talk) 23:19, 7 March 2010 (UTC)

Complex sampling

Don't you think it would be valuable to add information about the concept of "complex sampling", which is widely used for I/Q (in-phase and quadrature) signals? —Preceding unsigned comment added by 62.83.147.212 (talk) 17:08, 28 November 2010 (UTC)

Good point. It should be treated here, or linked to if its treatment is elsewhere.
--Bob K (talk) 21:55, 28 November 2010 (UTC)
Right, as soon as I understand the theory I will try to update the article. However, if someone finds a reference or a good explanation, it would be interesting to cover this topic.
—Preceding unsigned comment added by 80.25.197.208 (talk) 09:05, 29 November 2010 (UTC)
A reasonable place to start learning is Negative_frequency#Complex_sinusoids. But a thorough treatment would go beyond sinusoids.
--Bob K (talk) 15:16, 29 November 2010 (UTC)

A complex signal behaves no differently than two real signals in parallel. Sample each according to the theorem, and you're good to go. Unless you've got special conditions like a band limit from 0 to B instead of -B to B, in which case you can get away with just sampling the real part, or just the imaginary, or perhaps half as many samples of each. Or if it's a one-sided passband, then as this book explains...
Dicklyon (talk) 05:16, 30 November 2010 (UTC)

Dick Lyon, let me ask you: how could I have a band limit of 0 to B instead of −B to B? I understand that a real signal (the signal that is physically transmitted) has an even magnitude spectrum in the frequency domain, so negative frequencies must exist. Regards. —Preceding unsigned comment added by 62.83.147.212 (talk) 22:41, 1 December 2010 (UTC)


An example might help:
A complex sample-rate of 200/sec (for instance) is sufficient for signals that contain only frequencies in (-100,100) or only frequencies in [0,200). Examples (respectively): a complex exponential a(t) = exp(i2π·f₁·t) with f₁ in (−100, 0), and b(t) = exp(i2π·(f₁+200)·t), whose frequency lies in (100, 200). Similarly, a complex sample-rate of 100/sec is sufficient for signals that contain only frequencies in (-50,50) or only frequencies in [0,100). So if you know there are no negative frequencies (such as an analytic signal) the minimum (complex) sample rate is B, not 2B. [1]

Alternatively, when there are no negative frequencies, you can discard the imaginary part, which causes the frequency content to expand (symmetrically) to (-B,B), which requires real-valued sampling at rate 2B.

Notes

  1. ^ When sampled at 200/sec, a(t) and b(t) are indistinguishable. Only the prior knowledge that the original signal was contained in either (-100,100) or [0,200) would allow you to reconstruct the original signal unambiguously.

--Bob K (talk) 13:01, 3 December 2010 (UTC)
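(A minimal numerical illustration of the note above, with frequencies chosen here purely for illustration: a complex exponential at −60 Hz and one at +140 Hz differ by exactly the 200/sec sample rate, so their sample sequences are identical.)

<syntaxhighlight lang="python">
# Sketch: two complex exponentials whose frequencies differ by the sample
# rate (200 Hz) are indistinguishable once sampled at 200 samples/sec.
# The frequencies -60 Hz and +140 Hz are illustrative, not from the thread.
import numpy as np

fs = 200.0                              # complex sample rate, samples/sec
n = np.arange(50)                       # sample indices
t = n / fs                              # sample times

a = np.exp(2j * np.pi * (-60.0) * t)    # frequency in (-100, 100)
b = np.exp(2j * np.pi * (140.0) * t)    # frequency in [0, 200)

print(np.allclose(a, b))                # True: the sample sequences are identical
</syntaxhighlight>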

Digital transform

A "digital transform" is a permutation of a sequence of samples, not part of sampling itself. I removed a single-sentence paragraph about digital transforms, one equating them with sampling. I think an article should be written about digital transforms so that the concept can be made clearer. Binksternet (talk) 02:56, 20 January 2011 (UTC)

I looked, and couldn't find any consistent category of things called "digital transforms" in books. What is it you have in mind? Dicklyon (talk) 05:02, 20 January 2011 (UTC)

Dirac comb?

When I said that multiplying by a Dirac comb didn't help in the context where it had been added, I was reverted and told I was wrong, here. The trouble is that saying "multiplication by a Dirac comb" doesn't really explain how to get the sample values any better than the text that was already there. The link to the article also didn't lead to anything about Dirac combs in sampling, just in reconstruction. I do understand that in multiplying by a Dirac comb one makes a signal with a periodic Fourier transform equivalent to the DTFT of the sample sequence, but I don't see otherwise why it helps to introduce it at this point. Comments? Dicklyon (talk) 05:06, 28 October 2011 (UTC)

My view is that the Dirac comb has to be mentioned in the article, since it's notable in this context and central to sampling theory; I was surprised when you took it out, and I'm even more surprised that you've raised it on the talk page. Teapeat (talk) 05:17, 28 October 2011 (UTC)
I think that's what the talk page is for. I don't mind it being mentioned, but where you put it raises more questions than it answers. Not sure why you see it as "central" to sampling theory. Did Shannon use it in his theorem, or his proof of it? Not that I recall, but I'd have to review it. Dicklyon (talk) 05:28, 28 October 2011 (UTC)
Here's a book that explains sampling by multiplying by a train of impulse functions. It includes the important step, missing from your description, that "the areas of the impulse functions are equal to the samples". And compared to such explanations in books, there are about an order of magnitude more books that explain sampling without this artifice. And it doesn't explain why they do it this way, or what advantage they get beyond just saying take the values at times nT as the samples. I think you need to convince us there's some value, and construct a meaningful explanation, before we can include it. Dicklyon (talk) 05:39, 28 October 2011 (UTC)
I would ask you to convince us that there's some value in removing it from the theory section, given that you've just explained that it's a common and important way to approach the theory. Teapeat (talk) 16:45, 28 October 2011 (UTC)
I have argued that it's neither common nor important, since it doesn't show up in 90% of the sources. But it can be included if done carefully. Dicklyon (talk) 22:03, 28 October 2011 (UTC)
I don't agree with Teapeat's changes, but I agree with the intention. It is a bad idea to define the sampling process via a tempered distribution. However, in analyzing the sampling process via Fourier transforms, it can be convenient, but not necessary, to represent the sampling operation via the Dirac comb. If I remember correctly, this is now the main approach in presenting the proof of the sampling theorem.--LutzL (talk) 12:39, 28 October 2011 (UTC)
This is a section called 'theory', and we're supposed to be summarising the theory behind it. If it really is the main approach (and it is), then we should be summarising it in the broadest way: mention the comb and then leave the mathematical heavy lifting to other articles. At the moment the 'theory' section has essentially no theory in it. Teapeat (talk) 16:45, 28 October 2011 (UTC)
Even if it's the main approach to proving the sampling theorem, it is not the main approach to explaining sampling. If we make those things more clear, I'm sure we can find a place for it. Dicklyon (talk) 22:03, 28 October 2011 (UTC)
Frankly, I estimate that the article is about 30% too short and 90% unreferenced. I'm surprised that anybody is taking anything out at this stage. Teapeat (talk) 16:45, 28 October 2011 (UTC)
I'm sort of a deletionist. When an article is in need of improvement, I generally don't believe that more unsourced stuff, badly integrated, is helpful. Dicklyon (talk) 22:03, 28 October 2011 (UTC)
That works well for very good articles, but otherwise that's not really the way Wikipedia works; otherwise articles cannot get off the ground. Sorry, I have a rule about this. The fact that you were able to reference the material you removed to a reliable source gives me good reason to be offended. When people who know better repeatedly remove true material that I've added to an article, I lose trust in the people who do that, and I walk away from that article and don't come back. I don't mind collaborating with people, but I won't collaborate with people who revert me like that. Teapeat (talk) 00:24, 29 October 2011 (UTC)
That's fair. Dicklyon (talk) 03:22, 29 October 2011 (UTC)
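(For readers of this archived thread, the representation under discussion is the standard Dirac-comb model of sampling:

<math>x_s(t) \;=\; x(t) \sum_{n=-\infty}^{\infty} \delta(t - nT) \;=\; \sum_{n=-\infty}^{\infty} x(nT)\, \delta(t - nT),</math>

so the impulse areas carry the sample values, as the book Dicklyon quotes puts it, and the Fourier transform of <math>x_s(t)</math> is the periodic summation <math>\tfrac{1}{T}\sum_{k} X(f - k/T)</math>, which is why the model is convenient when proving the sampling theorem.)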

not about FFT

I didn't understand the point of this edit and the summary didn't help, as there's nothing there about FFTs. Your new version used Nyquist frequency without defining it, and the condition "band-limited to the Nyquist frequency" is pure jargon. If you'd like a more concise version, we can work on that. Dicklyon (talk) 21:47, 27 December 2011 (UTC)

Sorry, the bit I removed discussed Fourier transform. I think that losing it improves readability. This section is supported by a {{See also}} to Nyquist–Shannon sampling theorem so doesn't need to cover all the gory details. Approaching sampling theory from the frequency domain is arguably not the most accessible route. I agree that "band-limited to the Nyquist frequency" is jargon and don't mind if we spell this out a bit better. In my defense, I think it is an accurate description and any reader confused by the terminology is a click away from definitions of the terms. I have restored my edits because I believe it is an improvement over "A sufficient condition is that the non-zero portion of its Fourier transform, S(f), be contained within a known frequency region of length fs. When that interval is [-fs/2, fs/2], the applicable reconstruction formula is the Whittaker–Shannon interpolation formula." --Kvng (talk) 22:35, 27 December 2011 (UTC)
Yes, the Fourier transform is critical to the concept of perfect reconstruction from sampling; the FFT, on the other hand, is completely irrelevant, as it's just a fast algorithm for evaluating a Discrete Fourier transform, which is in no way helpful here. As for readability, that's served best when you don't introduce and use novel terms without definition. The gory details are really fairly straightforward, and it was kind of nice that they were even correct here. A better solution would probably just be to omit the sentence about reconstruction. Dicklyon (talk) 22:47, 27 December 2011 (UTC)
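(For context, the reconstruction formula referred to in this thread is the Whittaker–Shannon interpolation formula: when the non-zero portion of S(f) is confined to [−f_s/2, f_s/2],

<math>s(t) \;=\; \sum_{n=-\infty}^{\infty} s(nT)\, \operatorname{sinc}\!\left(\frac{t - nT}{T}\right), \qquad T = 1/f_s,</math>

with the normalized sinc, sinc(x) = sin(πx)/(πx).)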

Article wrong according to hatcravat

https://news.ycombinator.com/item?id=5581806

I don't know where to start, this is not my domain. --Ysangkok (talk) 15:57, 21 April 2013 (UTC)

I would just leave it alone. This article is factual enough, there are no glaring errors that I can see. There are some analog holdover folks that think that any digitization compromises quality. I think that they are as correct as the monster cable advocates. I dunno. 70.109.185.57 (talk) 16:11, 21 April 2013 (UTC)
Are you sure you read the post by hatcravat? I only linked iso8859-1's post so that you'd see the context. --Ysangkok (talk) 18:16, 22 April 2013 (UTC)

I don't see anything in that discussion that even suggests the article is wrong. Where hatcravat says "This is wrong." he is referring to the original complainer. He's right that he's wrong. Dicklyon (talk) 04:23, 23 April 2013 (UTC)

Theory: no Hz. f_s already contains the unit

Two recent edit labels, both attempting to justify the same change:

  • Theory: no Hz. f_s already contains the unit (User:Kondephy)
  • Neither T nor f_s are dimensionless numbers. And they *may* be expressed with units other than seconds or Hz

are saying two very different things.

The first one touches on a minor issue that is real, but usually glossed over in the textbooks. However, the edit label incorrectly identifies that issue, and the "fix" is inadequate. The second one is of course true, but entirely misses the point.

The issue is that the June 5 version of the article makes these statements:

  • let s(t) be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds
  • The sampling frequency or sampling rate, fs, is defined as the number of samples obtained in one second (samples per second), thus fs = 1/T.
  • That fidelity is reduced when s(t) contains frequency components higher than fs/2 Hz, which is known as the Nyquist frequency of the sampler.

The problem is that the quantity "1" in "1/T" obviously has units of samples, and the quantity "1/2" in fs/2 has units of cycles/sample. Those statements are what's lacking from the article (as they are from most texts). One remedy is to simply insert them without any reason given, but that's like magic. This article is not a proper place for the whole story, so ideally it would WikiLink to an article that is. And ideally that would be Nyquist frequency, but it suffers from the same deficiency. The closest thing we seem to have at the moment is Nyquist–Shannon_sampling_theorem#Aliasing, and this formula in particular:

<math>X_s(f) \;\triangleq\; \sum_{k=-\infty}^{\infty} X\!\left(f - k\, f_s\right),</math>

where the units of <math>f</math> and <math>f_s</math> are again Hz and samples/sec, and so the integer k must have units of cycles/sample. The Nyquist frequency corresponds to k = ½, because that is the midpoint between the k = 0 image and its first alias.

It seems like too much information for this article, which is why I haven't done it. But in my edit label I invited User:Kondephy to take it on, in case he/she feels strongly about it.

--Bob K (talk) 12:06, 13 June 2014 (UTC)
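(One way to write out the dimensional bookkeeping Bob K describes, for readers of this archive:

<math>f_s \;=\; \frac{1\ \text{sample}}{T\ \text{seconds}} \;=\; \frac{1}{T}\ \frac{\text{samples}}{\text{sec}}, \qquad \frac{f_s}{2} \;=\; \underbrace{\tfrac{1}{2}}_{\text{cycles/sample}} \cdot \underbrace{f_s}_{\text{samples/sec}} \ \ \text{cycles/sec (Hz)}.</math>)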

No, Bob. You made it worse. You seem to think (or you seem to want everyone else to think) that seconds and Hz are the only possible units to express time and frequency in. They're not. fs can be expressed in many other units, like kHz or MHz. Maybe someday we'll even express it in GHz. But it doesn't matter. fs is not a dimensionless quantity; it is a dimensional physical quantity. Now, normally we may want T and f to have reciprocal units (like ms and kHz), but they need not. You can still have T in ms and f in Hz, and their product is still a dimensionless value; it's the same dimensionless number regardless of the choice of units (as long as the choices of units fall within the same dimension of quantity).
As you have many times before, you made the page worse, but you are more tenacious than I so your confusing and incorrect edit will survive until someone else comes along.
There is so much wrong with nearly every point you make. E.g. cycles/sample doesn't have units. It's dimensionless. Just a number.
And statements like "The problem is that the quantity "1" in "1/T" obviously has units of samples, and the quantity "1/2" in fs/2 has units of cycles/sample" are so asinine that they deserve no other comment.
Have you ever published in the literature? A textbook or a technical paper that was refereed and edited by someone else? Have you ever written a decently rigorous mathematical treatment of something in, say, electrical engineering? No one can tell (but we might guess the answer is no) from your edits here at Wikipedia, and I have seen your edits screw up pages here for more than 6 years.
I'm 58 years old myself; I imagine that you're even older and stuck in your ways. But it's a shame that fallacious notions, misunderstood and doggedly held by old engineers whose ways are atrophied and cannot change, end up confusing other people. Bob, you need to clear up your own ignorance and misconceptions before you have any hope of doing that for others.
Sheesh.
70.109.184.247 (talk) 17:44, 13 June 2014 (UTC)

I'm sorry you feel that way. I don't know where this discussion will go over time, but I don't expect it will be time well spent. So all I will say for now is that your whole premise, which is: "You seem to think (or you seem to want everyone else to think) that seconds and Hz are the only possible units to express time and frequency in." is incorrect. The article chooses those units to illustrate its points. I quote:

For functions that vary with time, let s(t) be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds,

It is certainly possible to rewrite the article in more generalized terms, but that is not what you did. You kept the definition of T and then just ignored it.
--Bob K (talk) 02:05, 14 June 2014 (UTC)