WikiProject Electronics (Rated B-class, Mid-importance)
This article is part of WikiProject Electronics, an attempt to provide a standard approach to writing articles about electronics on Wikipedia. If you would like to participate, you can choose to edit the article attached to this page, or visit the project page, where you can join the project and see a list of open tasks. Leave messages at the project talk page.
This article has been rated as B-Class on the project's quality scale and as Mid-importance on the project's importance scale.
WikiProject Computing (Rated B-class, Mid-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as B-Class on the project's quality scale and as Mid-importance on the project's importance scale.

This article is missing any information on my 1991 discovery, which I call Super-Nyquist. Using Dan's Aliasing Rules, it allows you to find the location of an alias for any kind of waveform, sinusoid or complex wave. An additional discovery by myself and David Reynolds allows you to determine the true frequency of any aliasing wave using coherent sampling; it is documented in an article in Evaluation Engineering. [1] I also have an article I wrote that fully describes aliasing at [2] and a video that fully describes the technique on YouTube. [3] I don't know what the policy is on adding new discoveries to Wikipedia, but I hate to put in all the work of editing this entry and then have it all deleted (as has happened to me before) for not citing peer-reviewed articles. Just let it be known that this article is wrong and needs to be rewritten to take my discovery into account. Riverdweller (talk) 15:14, 24 November 2014 (UTC)

The policy is Wikipedia:No_original_research.
And leaving minutiae out of an encyclopedic article does not make the article "wrong". It's done on purpose.
--Bob K (talk) 06:10, 25 November 2014 (UTC)

Layman's terms?[edit]

I'm not entirely certain what wiki policy is on this, but I've got to assume that any encyclopaedia article should be at least partially understandable by laymen, especially if it's about a topic that directly affects the general public. The reason I ask is that I've seen so many video game reviews (and now a DVD review of South Park) which complain about aliasing, and I wish I knew what they were talking about. You'd think that video game reviewers would better explain how aliasing causes problems, but right now, I honestly don't know what the aliasing problems are in my games. - Darkhawk

Good point. A visual illustration of video image aliasing would be useful. For example, the brick patterns at Moiré pattern. Maybe we should incorporate that or something like it. Dicklyon 20:29, 30 August 2006 (UTC)
We keep commenting that we need an illustration (preferably animated), but it sounds as if the only time anyone put one in here, it was removed as confusing. Does anyone know where we could post a request and someone technically skilled enough to produce the thing might see it? I came to this article for the same reason as Darkhawk did, but am equally in the dark after reading. Lawikitejana 04:02, 15 October 2006 (UTC)
To answer Darkhawk and Lawikitejana, I have added a simple lede that describes in layman's terms what aliasing is. I would like to move the brick wall picture to the front of the article because it is very good and it is probably the most important image in this article. Loisel 04:43, 1 March 2007 (UTC)
I find your new graphic and paragraph to be entirely unsuitable. The picture is just plain ugly, and the text is not at all encyclopedic. Surely you can do better. There's no need to start with an opinion about what a reader may be familiar with or what is more likely. Stick to the facts. Dicklyon 05:25, 1 March 2007 (UTC)
Please lose the cartoon. Aliasing is something more subtle and much more interesting than that. It is an interaction of two different frequencies, one a characteristic of the observer and the other a characteristic of the observed. The result is the illusion of a different frequency than is actually present. It is caused by loss of resolution, but it is a special case. Defocussing my SLR camera lens causes a blur that looks like a lot of other blurs, but I would never refer to that as "aliasing".
--Bob K 12:04, 1 March 2007 (UTC)
Well I don't know about the graphic, but I've rephrased the "familiar" passage. Loisel 05:36, 1 March 2007 (UTC)
The cartoon seems to illustrate a combination of lowpass filtering and sampling, in which the signals after filtering are substantially identical and therefore there is probably not much if any aliasing. It certainly doesn't illustrate the concept well. I'm going to pull it (and the text about it) for now. Dicklyon 16:32, 1 March 2007 (UTC)

Ok. You guys don't actually understand what the article says. The word aliasing describes two different but related phenomena. The first is when two different signals are mapped to the same sampled signals (signal->sample is not injective). That is what the smileys were about. The second is about when the signal->sample->reconstruction gives surprising results. That is what the moire patterns in the brick wall are.
Loisel 22:17, 1 March 2007 (UTC)

That is an unnecessarily divisive viewpoint. Your "second" phenomenon is more than just aliasing. It is aliasing and filtering (or whatever you want to call the reconstruction process). The aliasing happened when you sampled the signal, regardless of whether or not you subsequently reconstruct anything. The sampled signal is indistinguishable from the samples of many other signals. Your "first" phenomenon appears to require that at least one of those other signals also be sampled, so we can verify that the samples are indistinguishable. I do not think that distinction is important or helpful. I have also looked around the internet, and I do not find that distinction.
I am sorry if you feel "your" article has been hijacked. I know how that feels, and it's one of the warnings that Wikipedia gives to all contributors. Hopefully, I have convinced you that a single unified view of aliasing is preferable to the divided one. But I doubt it, based on past experiences here with other contributors. If you still care, and want to make your point, I suggest you write a new article. If it stands up to the editors, a disambiguation page might be the right answer.
--Bob K 23:42, 1 March 2007 (UTC)
I think both views are already represented in this article. The lead already says that aliasing is when several different signals lead to the same set of samples. I didn't know the formal term ("not injective"), but that's not going to help make it understandable to a layman anyway. In a practical sense, however, since that kind of aliasing doesn't happen when the signal is bandlimited, the most interesting thing is what happens to signals that are not quite bandlimited and have aliasing in the other sense, i.e. aliasing distortion artifacts. It might be sensible to further distinguish these things, or elaborate the first, in a subsection, but in the lead we need to keep it understandable, and if we illustrate, it should be with something that looks like what people know as aliasing. Dicklyon 00:33, 2 March 2007 (UTC)
Yes, both views are represented. I know. And I am not going to change it. But my opinion is that it is unnecessarily divisive. The two viewpoints need not be different definitions, but rather just different manifestations of the same fundamental thing. The distinction is superficial. A window function applied to a single sinusoid creates new frequency components, but it does not perturb the frequency of the spectral maximum. A window function applied to two sinusoids perturbs their spectral maxima. But when we write an article about windowing or spectral leakage, we don't lead with that sort of information.
--Bob K 06:09, 2 March 2007 (UTC)

I wrote probably over half of this article and for a long time I was defending it against non-experts who only have a rudimentary grasp of what aliasing is, but I'm not so sure I care anymore.

Defocusing your lens is again not the same as what the smiley faces represented. Defocusing your lens does not make the pixels big. In fact, you can recover a focused image from a defocused image if you know what you're doing (it's called deconvolution.) However, you can't recover the smileys from the pixelized image I gave. They're aliased.
Loisel 22:17, 1 March 2007 (UTC)
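Loisel's distinction above (defocus is invertible by deconvolution; pixelization is not injective, hence aliased) can be made concrete with a small linear-algebra sketch. This is only an illustration: the 3-tap blur kernel, the signal, and the block size are arbitrary choices, not anything from the article.

```python
import numpy as np

n = 8
# Blur: circular convolution with a short kernel -> an invertible n x n matrix.
kernel = np.array([0.5, 0.25, 0.25])
blur = np.zeros((n, n))
for i in range(n):
    for j, w in enumerate(kernel):
        blur[i, (i + j - 1) % n] += w

# Pixelization: average blocks of 2 samples -> an (n/2) x n matrix, not injective.
pix = np.zeros((n // 2, n))
for i in range(n // 2):
    pix[i, 2 * i] = pix[i, 2 * i + 1] = 0.5

x = np.sin(np.linspace(0, 2 * np.pi, n, endpoint=False))

# The blur can be undone exactly (deconvolution): the matrix is invertible.
recovered = np.linalg.solve(blur, blur @ x)
print(np.allclose(recovered, x))       # True: blur is reversible

# Pixelization maps distinct signals to the same samples: they are aliases.
y = x + np.tile([1.0, -1.0], n // 2)   # add a "checkerboard" from the null space
print(np.allclose(pix @ x, pix @ y))   # True: same pixels, different signals
```

The checkerboard vector lies in the null space of the averaging map, which is exactly why the smileys could not be recovered from the pixelized image.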

New Intro[edit]

In statistics, signal processing (including digital photography), computer graphics, and related disciplines, "signals" that are essentially continuous in space or time must be sampled, and the set of samples is never unique to the original signal. The other signals that could (or did) produce the same samples are called aliases of the original signal. If a continuous signal is reconstructed from the samples, the result may be one of the aliases, which represents a form of distortion. The term aliasing can refer to both the ambiguity created by sampling and to the subsequent distortion.

For example, when we view a digital photograph, the reconstruction (interpolation) is performed by our eyes and our brain. If the original image was a lawn, we no longer see the individual blades of grass. Therefore we are seeing an alias. A more interesting example (below) is the Moiré pattern one can observe in a poorly pixelized image of a brick wall. Techniques that avoid such poor pixelizations are called anti-aliasing.

Digital imaging is an example of spatial aliasing. Temporal aliasing is a major concern in the analog-to-digital conversion of video and audio signals: improper sampling of the analog signal will cause high-frequency components to be aliased with genuine low-frequency ones, and to be incorrectly reconstructed as such during the subsequent digital-to-analog conversion. To prevent this problem, the sampling frequency must be sufficiently large and the signals must be appropriately filtered before sampling.

--Bob K 14:14, 3 March 2007 (UTC)

--Bob K 15:19, 4 March 2007 (UTC) (revision)
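The proposed intro's claim that "high-frequency components will be aliased with genuine low-frequency ones" is easy to check numerically. A minimal sketch (the 10 Hz sampling rate and the two tone frequencies are arbitrary illustration values): two sinusoids whose frequencies differ by exactly the sampling rate produce identical samples.

```python
import numpy as np

fs = 10.0                                   # sampling rate in Hz
t = np.arange(20) / fs                      # two seconds of sample instants
low = np.sin(2 * np.pi * 1.0 * t)           # a 1 Hz tone
high = np.sin(2 * np.pi * (1.0 + fs) * t)   # an 11 Hz tone: 1 Hz plus fs
print(np.allclose(low, high))               # True: the two tones are aliases
```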

It's not clear to me how it helps. I think that by focusing on a reconstruction being one of the aliases you may obscure the fact that the aliasing happens at sampling, not at reconstruction. And it's an issue even if reconstruction is not the goal, and even if the reconstruction is NOT one of the aliases. Dicklyon 19:09, 3 March 2007 (UTC)
Yeah, that was my objection too. What I've changed is that the first paragraph "defines" aliasing in terms of non-unique samples, period. And there is no subsequent "Aliasing also means ..." paragraph. The first paragraph could stand alone as an introduction. The rest of the intro is just elaboration.
--Bob K 03:36, 4 March 2007 (UTC)
I hadn't noticed the removal of aliasing meaning the distortion due to this effect. Seems to me that's a common usage, and leaving it out would be a mistake. That is, the term is commonly used not for its formal meaning, but the folding of frequency components that it typically leads to. I believe I provided refs once for that usage; sometimes they say "aliasing distortion," but more often they define "aliasing" as a distortion. Dicklyon 05:24, 4 March 2007 (UTC)
Good point. It is simply an overloaded/ambiguous word (like so many others), and we have to acknowledge both meanings. Accordingly, I have patched up New Intro. Now you may argue that it is not significantly different than the current article. But I think it is a little better. See what you think.
--Bob K 15:19, 4 March 2007 (UTC)

I've taken a stab at a more accessible first sentence, putting the intro more in line with WP:LEDE. ENeville (talk) 18:28, 28 May 2009 (UTC)

I have an aliasing-error-versus-sampling-rate plot I copied from Jud Strock's book Telemetry Computer Systems. That plot illustrates the RMS error vs. sampling rate when using a Butterworth filter with different pole counts. I would be glad to share it if you want.--Scipio-62 18:49, 23 March 2010 (UTC)

An example in astronomy[edit]

I added this section as a further example that the conventional wisdom of sinc filters isn't always true. Sinc filtering the measured image g, regardless of which sinc filter is used, does not lead to any accurate measurement of the radius of the star.

I am concerned about this part -- it looks like a description of speckle imaging but has some fundamental misunderstandings (see astronomical seeing for how short exposures of a star really look through the atmosphere). Star diameters are commonly measured in astronomy, but never like this. Unless people complain, I will remove this example, and replace it with a description of how star diameters are measured using speckle interferometry.

Rnt20 06:37, 11 April 2006 (UTC)

The diameter of Betelgeuse was originally measured in this way. You are incorrect when you say this method was never used to measure the diameter of a star.

Do not remove this example.

Loisel 14:45, 11 April 2006 (UTC)

"Filming a spoked wheel"[edit]

In my experience, this also happens when watching a spoked wheel. Can someone confirm? (or is this maybe evidence that I'm living in the Matrix... hm, my head hurts now ;-) -- Tarquin 15:26 Dec 20, 2002 (UTC)

This could be the case if your eye samples the scene - perhaps peripheral vision does this. I sometimes get a related effect if I see a TV screen or other flickering source out of the corner of my eye - the flicker frequency appears to be much less than 50 Hz. -- Easter 15:32 Dec 20, 2002 (UTC)

For really freaky effects, try watching a TV screen or CRT monitor while using an electric toothbrush. The image wobbles up and down, it was quite alarming the first time I saw it. -- Tarquin

This is what my colleagues in broadcast engineering call the ginger biscuit effect. Any vibration to the head will do. -- Easter 15:47 Dec 20, 2002 (UTC)

I think we need an article on this! -- Tarquin

This effect is caused by a failure of persistence of vision, although it could also be regarded as a kind of aliasing (with the vibration of your head providing the sampling frequency). I don't believe that the eye normally does any sampling in the time domain, at least not in a periodic way. Devices like CRT monitors rely on your persistence of vision, which is the slow response of your retina to changing or flickering images. This only works if your eye muscles can produce a stationary image on the retina. When you move your head faster than your eye muscles can track, as happens when you eat crunchy food, the TV image is spread out over your retina and appears fragmented, because parts of your retina see one field of the image and other parts see the next field, or the few milliseconds of darkness between fields. I'm guessing that this effect is more noticeable at the edge of the field of view, because the mechanism that stabilises the eyeball mainly uses data from the centre of the retina. -- Heron 11:30, 31 Mar 2004 (UTC)

I agree -- we need an article on this! What to call it? Recently I've heard it called the "Dorito effect", but Google Fight tells me that "Frito effect" is far more popular. Or is there some other name that would be better for a "serious" (?) encyclopedia article? --DavidCary 07:13, 14 October 2005 (UTC)

(link titles: "The Frito Effect"; "Aliasing - A new perspective")

There's an exhibit at the Exploratorium that makes use of this phenomenon: Bronx Cheer Bulb. - mako 05:02, 15 October 2005 (UTC)
With regard to the electric toothbrush and the TV, I've also noticed it happen when using a vibrating neck massager, but it also happens when looking at the reflection of the TV/monitor in the vibrating surface.

Maybe this isn't exactly the same effect, but the most compelling example of this kind of phenomenon I have ever seen occurs when in a dark room, viewing one of those old digital alarm clocks with the red numbering (search for "digital alarm clock" in google images - first few examples are what I'm talking about) just off center, and moving your head around quickly... not violently fast, but fairly fast. The image of the numbers will appear to not be able to keep up with the physical clock, and drift off it to the left and right, up and down, pretty fair distances if you do it just right. —Preceding unsigned comment added by (talk) 02:51, 3 February 2008 (UTC)

This article has a See Also to Wagon-wheel_effect, which links to Temporal aliasing, which is re-directed back to here. Here we point out that sin(-wt+θ) = sin(wt-θ+π) = cos(wt-θ+π/2). So I'm curious why our brain perceives the wheel rotating backward (analogous to sin(-wt+θ)) instead of forward. Most optical illusions, such as the Ames window and Ames room occur when our brain tries too hard to interpret what it sees in terms of what it already knows, such as rectangular windows and wagon wheels rotating in the forward direction.

--Bob K (talk) 15:43, 7 March 2012 (UTC)
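On Bob K's question about backward rotation: the "nearest alias" view does predict it. If the wheel advances just under one spoke period per frame, the smallest motion consistent with the samples is a small negative step, and that is the alias the visual system settles on. A minimal sketch (the 0.9 rev/frame rate is an arbitrary illustration value):

```python
import numpy as np

rate = 0.9                              # true rotation, revolutions per frame
frames = np.arange(10)
angles = (rate * frames) % 1.0          # spoke position seen in each frame

# Per-frame change, wrapped into (-0.5, 0.5]: the smallest motion consistent
# with the samples, i.e. the nearest alias of the true motion.
step = ((np.diff(angles) + 0.5) % 1.0) - 0.5
print(step)   # every step is -0.1: the wheel appears to rotate backward
```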

I think the "break up" mentioned in the multiplexed display article is the same as the Frito Effect under yet another name. --DavidCary (talk) 05:25, 16 June 2013 (UTC)

Notation and fonts (four kinds of L?)[edit]

OK, so far, so mathematical, but the article now uses no fewer than four different kinds of L: can the sampling mappings please be called something different, to avoid confusion?

There's L and L^1 and L^2 and \mathcal L!

That's ok, but I'm not sure that S_{point} attempt is optimal. Perhaps S_0 and \mathcal S or S_1 would be better. You have to be careful to change all the references to L and \mathcal L if you do that, they are used consistently throughout the article. I'm going to wait and watch, but if you want me to do it, and have a notation suggestion, just let me know here. Loisel 00:40 Jan 28, 2003 (UTC)

I've just reverted my changes: they were worse, not better. I agree, I need to think of something better...
Yes, I agree, S_0 and S_1 would be better. Could you do it, please, I made a mess of my attempts to fix things last time.
Thank you, that's much better!
You're welcome. For reference purposes, let me add a detail. The standard notation for L^p spaces uses the cursive uppercase L, and it is fairly standard to also use block letter L and \mathcal L for linear maps over L^p (although \Lambda is very popular as well, even if it hints that it is a linear map into the field of scalars.) On the other hand, for a nonspecialist, perhaps using letters and symbol less similar to one another is helpful. Loisel 01:00 Jan 28, 2003 (UTC)

Aliasing and radio crosstalk[edit]

I have a question regarding this paragraph:

The term "aliasing" derives from the usage in radio engineering, where a radio signal could be picked up at two different positions on the radio dial in a superheterodyne radio: one where the local oscillator was above the radio frequency, and one where it was below. This is analogous to the frequency-space "wraparound" that is one way of understanding aliasing. However, there is a deeper way of understanding aliasing, based on continuity arguments, which is outlined below as an introduction.

This is very interesting to me. I am not completely certain that this crosstalk between radio stations is covered by the "outlined below as an introduction" portion. Unfortunately, I am not a physicist (or an engineer) so I don't know what's actually going on.

I think it would be great if someone who understands the long-winded L^2 stuff I wrote could tell me how that relates to the radio waves. I mean, the wrapping around of frequencies I describe depends completely on using a simple sampling scheme (like S_0) and so it's mostly for digital signal processing. In the analog world, I'm not exactly sure what's going on.

If anyone can give us more details about the underlying physics of the radio wave crosstalk phenomenon described above, perhaps there's a section in the aliasing article that needs to treat the analog aliasing process separately, which might be different from the DSP aliasing stuff. Loisel 01:38 Jan 28, 2003 (UTC)

  • Is this origin of the term accurate? Aliasing is usually associated with sampling, rather than AM modulation: namely when a sinusoidal signal is sampled at the wrong rate it becomes identical with (i.e. becomes an "alias" for) a sinusoid of another frequency.
  • It's a consequence of the signal mixing in the heterodyning process, and would hopefully be filtered out in a decent radio design. I'm guessing it's merely due to the modulation: cos(a)*cos(b) = [cos(a+b) + cos(a-b)]/2. As for the terminology, it's conceivable that it had its origin in analog, but I wouldn't know. From what I've encountered in school, "aliasing" is strictly applied to the discrete-time situation. - mako 28 June 2005 20:21 (UTC)
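The product-to-sum identity in the bullet above, and the resulting image-frequency behavior, are easy to verify numerically. A quick sketch (the LO and IF values are made-up illustration numbers, not from any real radio design):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
a = 2 * np.pi * 7 * t
b = 2 * np.pi * 3 * t
lhs = np.cos(a) * np.cos(b)
rhs = 0.5 * (np.cos(a + b) + np.cos(a - b))
print(np.allclose(lhs, rhs))   # True: the product-to-sum identity holds

# Image frequency: with LO = 100 and IF = 10, stations at 90 and 110
# both produce a 10 Hz difference term after mixing.
lo, f_if = 100.0, 10.0
print(lo - f_if, lo + f_if)    # 90.0 110.0 -- both land at the same IF
```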

Section numbering[edit]

The subsection numbers under "technical discussion" serve a purpose: the introduction to "technical discussion" refers to these sections by number. If you want to remove the subsection numbers, you'll have to change the introduction as well. For reference purposes, Encyclopedia Britannica uses numbering for some of its articles (my 1973 copy of World Wars has a complicated numbering system.) Loisel 18:49 Jan 28, 2003 (UTC)

I'm referring to this: In engineering, the method introduced in the third section is called sampling, while a method such as that introduced in the fifth section is called filtering. This discussion may be viewed as a theoretical introduction to the ideas of anti-aliasing. Loisel 18:51 Jan 28, 2003 (UTC)

  • I deleted the following outline, since the automatic table of contents makes it largely superfluous: Jorge Stolfi 13:59, 29 Mar 2004 (UTC)
" First we will introduce a formal notion of "continuous signal". Since there are more than one possible choices (depending on the subject at hand), we will give some general outline, but fix our attention on a specific example for the purpose of this article. Second, we will give a notion of similarity of signals. Again, this precise notion depends on the underlying physical problem, but we will provide a common example for the sake of discussion. Third, we will give the most common sampling method as an example, and fourth we will show its failings. Fifth, we will give an improved sampling method that is more in-tune with the similarity notion introduced in the second section.
In engineering, the method introduced in the third section is called sampling, while a method such as that introduced in the fifth section is called filtering.
This discussion may be viewed as a theoretical introduction to the ideas of anti-aliasing."

Aliasing in computer science[edit]

An article titled "Bishop", after hundreds of words on concept of "bishop" used in religion, had a one-line comment that a piece in chess is called a "bishop", with an appropriate link. I moved that to the beginning of the article where it would actually be seen be anyone interested. I've done the same thing here with the meaning of "aliasing" in computer science. Michael Hardy 19:18 Jan 28, 2003 (UTC)

Unidentified request[edit]

Hfastedge, don't pollute carefully written articles with your requests. That's what the talk page is for. Loisel 07:29 30 Jun 2003 (UTC)

Wikipedia bug (math in headers?)[edit]

WHAT THE HELL HAPPENED? Loisel 17:55, 29 Jul 2003 (UTC)

If you mean the table of contents, see Wikipedia:Software updates. You can turn it off via your preferences if you don't like it. If you mean something else, you're going to have to say what. --Camembert

What's this 4LIQ9nXtiYFPCSfitVwDw7EYwQlL4GeeQ7qSO business? Evercat 17:59, 29 Jul 2003 (UTC)

These are unique hashes generated by Tomasz' math functions to identify the contents of a math element. I don't know why they appear when you just type "<math>" though. Ask User:Taw.—Eloquence 18:08, 29 Jul 2003 (UTC)

Please don't use <math> in headers. Use a proper substitution.—Eloquence 18:01, 29 Jul 2003 (UTC)

Can someone fix the mathematic notations in the article?

Picture of aliasing[edit]

What we need is ONE simple picture showing a sinusoid being sampled at too low a rate and matching a lower-frequency sinusoid. Then we can probably delete some 10,000 words... Jorge Stolfi 20:46, 23 Mar 2004 (UTC)

Added such a picture.Jorge Stolfi 22:16, 23 Mar 2004 (UTC)
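For anyone wanting to regenerate the data behind such a picture, the computation is short. A sketch (the 8 Hz sample rate and 7 Hz tone are arbitrary illustration choices): a cosine above the Nyquist frequency agrees with its low-frequency alias at every sample instant.

```python
import numpy as np

fs = 8.0                   # sample rate
f_hi = 7.0                 # frequency above fs/2 ...
f_alias = fs - f_hi        # ... indistinguishable from 1 Hz after sampling

n = np.arange(8)
samples = np.cos(2 * np.pi * f_hi * n / fs)
alias = np.cos(2 * np.pi * f_alias * n / fs)
print(np.allclose(samples, alias))   # True: both sinusoids give the same samples
```

Plotting both continuous curves with these common sample points reproduces the classic aliasing figure.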

The picture added is good for understanding the time domain aspect of aliasing. However, the reconstruction distortions are based on frequency domain filtering. I'm going to try to find a good picture for that. Mojodaddy 05:22, 20 December 2006 (UTC)

I found a good one; it's found here as the lower picture: It's far more insightful than the current discrete frequency picture. If someone could replicate something similar to that, that'd be awesome. 22:29, 21 December 2006 (UTC)

Length of technical section[edit]

The "Technical description" section was way too long and too heavy on math; it confused more than clarified the concept. Thus I have done some rather radical trimming and replanting.

Specifically, I moved most details of the "reconstruction" sections to a new page signal reconstruction, keeping only the definition of the "standard" reconstruction R. I also deleted the following paragraph since it was not germane to "aliasing"; it should go to some other page (Fourier analysis?):

" We note here that there is an efficient algorithm, known as the Fast Fourier transform to convert vectors between the canonical basis of \Bbb C^n and the Fourier basis (d_k). This algorithm is significantly faster than the matrix multiplication required in the general case of change of basis. On the other hand, wavelets are often defined so that the change of basis matrix is sparse, and so again the change of basis algorithm is efficient. "

The following paragraph was deleted too; perhaps it should go to signal processing:

"The signal could arise from a variety of physical processes. For instance, one could measure the seismic movement of the ground with a seismograph. The output of a seismograph is a strip of paper known as a seismogram. This strip of paper can be interpreted as the graph of a function. This function will be in L2 as defined above, and thus we obtain a mathematical signal from a physical process."

The following paragraphs did not seem to make sense: given that "signal" was defined as a *function*, it would seem that S_0 is always well-defined in that case. Perhaps this text was assuming that a signal could be a *distribution* (such as, e.g., Dirac's)?

"The domain of S_0 includes at least all continuous functions of [0,1]. On the other hand, for technical reasons, it is not clear how to extend S_0 to all of L^2. In particular (and perhaps more telling) is that S_0 is not continuous as a function on L^2.
Indeed, define f_k by
f_k(x)=\left\{\begin{matrix} 1, & \mbox{if }x\mbox{ is in } \left[ 1-{1 \over k},1 \right], \\ 0, & \mbox{otherwise} \end{matrix}\right.
Then, the norm ||f_k|| in L^2 is 1/\sqrt k and so f_k converges to zero. However, for any k>n, the vector S_0f_k is (0,...,0,1) and so S_0f_k does not converge to zero. Hence S_0 is not continuous."
Therefore, the sampling function S_0 very poorly represents our notion of closeness in our signal space L^2.
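The quoted counterexample is easy to check numerically: the L^2 norm of f_k shrinks like 1/sqrt(k) while the point sample at x = 1 stays 1, so S_0 cannot be continuous. A sketch (the grid resolution is an arbitrary choice; the norm is approximated by a Riemann sum):

```python
import numpy as np

def f(k, x):
    # f_k: the indicator of [1 - 1/k, 1]; its L^2 norm is 1/sqrt(k)
    return np.where(x >= 1 - 1.0 / k, 1.0, 0.0)

grid = np.linspace(0.0, 1.0, 100000, endpoint=False)
norms = [np.sqrt(np.mean(f(k, grid) ** 2)) for k in (10, 100, 1000)]
print(norms)                        # shrinks toward zero as k grows
print(f(1000, np.array([1.0]))[0])  # but the point sample at x = 1 is always 1
```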

The following section has the same problem, and it also assumes non-trivial concepts of wavelets etc., so it should probably go elsewhere, too:

"For instance, it is possible to choose a reconstruction formula based on the Haar basis (see wavelets) in which case S_1 does not fold any high frequencies into the lower frequencies. However, this reconstruction formula (or the Haar basis) are inappropriate to most problems.
If one is giving a reconstruction formula in terms of Hilbert bases, as is our case, then one can give a "perfect" filter, which does not fold any frequencies at all, in terms of convolutions.
This sampling method, unlike S_0, is defined over all of L^2. Also, by the Cauchy-Schwarz inequality (for instance,) S_1 is also continuous in the root mean square norm. Hence, signals which alias to the same sampled vector will be related as far as the root mean square norm is concerned.

Finally the discussion of the operator S_1 does not seem to be very useful. The operator does not eliminate aliasing, it only reduces it. On the other hand, when addressing this topic one MUST discuss the sinc filter (which does eliminate aliasing) and the Gaussian filter (which does a pretty good job, and is free from ringing). In any case, this material should be in anti-aliasing, not here.

No filtering algorithm "eliminates" aliasing. Whatever algorithm you use, you will never be able to recover an unexpectedly complex signal after sampling and filtering. In the classical setting, if you have 1000 samples but the signal is sin(10000x) you will not be able to recognize it. Instead, you will guess that it's some lower-frequency signal -- this, regardless of your filtering algorithm. Hence sin(10000x) aliases to something else, regardless of your filtering algorithm. With that in mind, sinc filtering is optimal in the sense I described in the article, which is not always the relevant sense. If you look at my example in the caveats, a normalized linear polynomial over T, you will see that if you apply any of the filters you mention (S_1, Gaussian or sinc) you get a very poor representation of the original function, regardless of how many samples you have. In that case, the "optimal" filtering algorithm is to recover z completely using two samples cleverly. Loisel 20:47, 8 Apr 2004 (UTC)
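Loisel's sin(10000x) example can be sharpened slightly: with 1000 equispaced samples on [0, 2π) (an assumed sampling scheme, chosen here for illustration), that signal is sampled as exactly zero, i.e. it aliases to the zero signal, and no reconstruction filter applied afterward can tell the difference.

```python
import numpy as np

n = np.arange(1000)
x = 2 * np.pi * n / 1000     # 1000 equispaced samples of [0, 2*pi)
samples = np.sin(10000 * x)  # frequency far beyond what 1000 samples resolve

# 10000 * x_n is an exact multiple of 2*pi, so every sample is (numerically) zero:
print(np.allclose(samples, 0.0, atol=1e-9))   # True: aliases to the zero signal
```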
"L^2 is contained in L^1([0,1]) (see Lp spaces.) Hence, we can define a new "interval averaging" sampling method by
S_1 f := n \left( \int_0^{1/n}f(t)dt, \int_{1/n}^{2/n}f(t)dt, ..., \int_{1-1/n}^1 f(t)dt \right)
In the case of S_1, one can analyze, via convolutions, to what extent the high frequencies are "folded into" the low frequencies. They still are, but to a somewhat lesser extent."

Jorge Stolfi 01:55, 25 Mar 2004 (UTC)

  • Jorge, obviously you made many changes without knowing what you were talking about. The paragraphs you mention above as not making any sense are in fact highly relevant. I will therefore undo some of your damage. The sampling function S_0 is in fact ill defined on L^2, which is the space of signals. This needs to be mentioned. The remaining text actually said specifically what anti-aliasing was and up to what point it worked. Loisel 06:13, 29 Mar 2004 (UTC)
    • I have now fixed much of the damage. While the presentation is probably superior the way it is now, thanks to Jorge, I would request that unless you understand the details of the section marked "caveats", you refrain from further mangling this article. Loisel 06:58, 29 Mar 2004 (UTC)
  • Loisel, I don't claim to be an expert in math and signal processing, but I don't think I am illiterate either. Note that a signal was *defined* in the article to be a *function* from [0,1] to C; that L^2, as defined, is therefore a set of *functions*; and that the S_0 operator merely evaluates f at specific points. Thus S_0 f is *always* well-defined, not only for L^2 signals or continuous signals, but for *any* function f defined on [0,1], even e.g. the is-rational predicate.
    Presumably what you meant is that S_0 is not *continuous*: which is true, and indeed relevant for sampling theory and practice -- but not to explain aliasing, as far as I can see. So perhaps this observation should go to the signal sampling page?
    I also don't understand the point of introducing the S_1 operator in this page, since it is neither a mathematically correct way to avoid aliasing, nor what is done in practice (where one typically uses a Gaussian-like sampling kernel, for physical limitations and mathematical reasons). Again, perhaps the S_1 operator should go to the sampling or anti-aliasing page, as a pedagogical example to introduce the concept of general convolution sampling? Jorge Stolfi 13:31, 29 Mar 2004 (UTC)
    • Or perhaps S_1 could be moved to a later section, titled e.g. "Aliasing under convolution sampling" or some such? Jorge Stolfi 14:07, 29 Mar 2004 (UTC)

L^2 is usually defined as a space of functions. However, for technical reasons, if two functions f and g in L^2 agree everywhere but on a small set (for instance, if they disagree at a single point) then it is not possible to distinguish f and g in L^2. This is because L^2 is in fact the space specified in the article modulo a certain equivalence relation. If f and g in L^2 agree everywhere except on a set E of measure 0, then f and g are indistinguishable in L^2. This is because ||f-g||=0. In order for || || to be a norm, we have to guarantee that ||f||=0 only when f=0. This particular piece of information is useful, but belongs in the Lp spaces article; at best, a reference to the relevant bit of that article should be inserted.
This nuance is often explained at some point when initially doing L^p spaces, never to be mentioned again (this is for instance how Folland and Rudin do it.) While it may be good to explain this technicality somehow, my opinion is that it belongs in the L^p spaces article, not this one. If you beg to differ, go ahead and make the change; however, you may then have to explain some measure theory so that people understand when functions in L^2 are indistinguishable.
Also worth mentioning, if one wishes to dig into the details, is that the evaluation map x->f(x), while not defined for all x \in R, is defined for all x \in E where R\E is a set of zero measure and E is called the Lebesgue set of f. Unfortunately, E changes with f, and so the "Lebesgue set of L^2" is empty. Still, the evaluation map, for a fixed x and with f varying in some subspace of V of L^2 whose Lebesgue sets contain x, is not continuous in f \in V.
The S_1 operator is just another linear map from L^2 to C^n, like S_0. However, it is better than S_0 since it is actually defined on L^2 (this refers to the paragraph I just wrote, which explains why S_0 isn't clearly defined on L^2). It is also a better filtering method than point sampling (the Fourier transform of the point sampling method fails to decay, but the uniform averaging method has a Fourier transform which decays like sin(x)/x (or 1/x if you prefer), which is not very good but better than nothing). A Gaussian filter has a Fourier transform that decays like a Gaussian, which is much better. Of course, a sinc filter has a Fourier transform that looks like a rectangular (brick-wall) function, which is often considered perfect. All the filtering methods I just mentioned do help with anti-aliasing, even S_1. Loisel 06:52, 8 Apr 2004 (UTC)
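Loisel's decay-rate comparison can be illustrated with a small numerical sketch (the frequencies below are illustrative choices, not values from this discussion): point sampling (S_0) passes a far-above-Nyquist sinusoid through at full amplitude, while box-average sampling (S_1) attenuates it by roughly the sinc factor 1/(pi f/f_s).

```python
import math

f_s = 1.0    # illustrative sampling rate
f = 10.25    # illustrative signal frequency, far above the Nyquist rate f_s/2

# Point sampling (S_0): evaluate sin(2*pi*f*t) at t = n/f_s.
point = [math.sin(2 * math.pi * f * n / f_s) for n in range(100)]

# Box-average sampling (S_1): average the signal over [n/f_s, (n+1)/f_s],
# using the closed-form integral of sin(2*pi*f*t).
def box_avg(n):
    a, b = n / f_s, (n + 1) / f_s
    return (math.cos(2 * math.pi * f * a) - math.cos(2 * math.pi * f * b)) * f_s / (2 * math.pi * f)

avg = [box_avg(n) for n in range(100)]

peak_point = max(abs(x) for x in point)  # aliased component at full amplitude
peak_avg = max(abs(x) for x in avg)      # attenuated by about 1/(pi * f / f_s)
```

This only shows the attenuation of one out-of-band sinusoid; it does not settle which filter is best overall.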

Sound example[edit]

The sound example needs more explanation: what is the sampling rate, what exactly is meant by "bandlimited", and what should the listener pay attention to. Jorge Stolfi 18:43, 24 Mar 2004 (UTC)

  • It is great now! Jorge Stolfi 13:53, 29 Mar 2004 (UTC)
    • Thanks to your suggestion. Glad you like the updated version. -- Tlotoxl 17:00, 29 Mar 2004 (UTC)

The Nyquist criterion is in fact simplistic...see below[edit]

Having specifically said that the Nyquist condition is simplistic, I don't think the following theory adequately explains why this is the case. I mean, it may explain it, but it doesn't specifically summarize why it is that the Nyquist criterion is therefore simplistic. I don't think readers should have to dig so much, only to find vague statements under caveats -- Tlotoxl 10:15, 31 Mar 2004 (UTC)

I added something at the end. Do you like it? Loisel 07:24, 8 Apr 2004 (UTC)

aliasing on TV[edit]

Shouldn't there be a mention about aliasing in TV broadcast? The reason why presenters avoid wearing certain patterns of dress, and so on. Just a thought. 11:31, 1 September 2005 (UTC)


Right at the start of the article, where it says:

Aliasing is a major concern in the digital-to-analog conversion of video and audio signals: improper sampling of the analog signal will cause high-frequency components to be aliased with genuine low-frequency ones, and be incorrectly reconstructed as such. To prevent this problem, the signals must be appropriately filtered, before sampling.

shouldn't that be analog-to-digital not digital-to-analog or am I just confused?

This paragraph seems to be trying to say that aliasing occurs when you sample analog to digital. It says improper sampling of the analog signal will cause... That seems to imply that analog is the source in this paragraph.

It seems confusing as written, someone who knows better please help me understand this. HighInBC 18:12, 27 February 2006 (UTC)

You're right. I fixed it. --Heron 19:56, 27 February 2006 (UTC)

Thanks. HighInBC 23:58, 27 February 2006 (UTC)
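For what it's worth, the analog-to-digital effect settled above is easy to check numerically; a minimal sketch (with illustrative frequencies) of an aliasing pair:

```python
import math

f_s = 1000.0  # illustrative sampling rate (Hz)
f_hi = 900.0  # high frequency, improperly sampled (above f_s/2)
f_lo = 100.0  # its alias: |f_hi - f_s| = 100 Hz

hi = [math.cos(2 * math.pi * f_hi * n / f_s) for n in range(50)]
lo = [math.cos(2 * math.pi * f_lo * n / f_s) for n in range(50)]

# The two sampled sequences are identical to within rounding error, so a
# 900 Hz input would be incorrectly reconstructed as 100 Hz.
max_diff = max(abs(a - b) for a, b in zip(hi, lo))
```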

Animated example[edit]

Can someone explain the new animated example. It seems way too busy and confusing. Do we really need animation for this? Can it be explained with some commentary at least? I'm going to take it out for now. Please comment if you like it, or put it back with commentary, or make a more accessible example. Dicklyon 02:44, 4 July 2006 (UTC)

"A" for effort, but I don't think it's ready for Wikipedia (and vice-versa). Maybe there is an article on animated gif images. If not, better yet... let's start one. --Bob K 07:36, 4 July 2006 (UTC)

Hi! I believe that is really useful and powerful, though I think there is a small mistake. If we see one single gif frame, we can clearly understand that the main common frequency of all the plotted components is 0.5 Hz = 30 rpm (assuming that the sampling rate is correct). But the title clearly states that the cam-follower is running at 200 rpm (which doesn't have much to do with 30 rpm). Therefore, either I'm missing something or there is a mistake somewhere. —Preceding unsigned comment added by (talk) 17:23, 30 April 2011 (UTC)

Rbj's false converse assertions[edit]

Apparently I've made no progress getting Rbj to understand the notion of mathematical implication, and the difference between a provably true theorem and its not-always-true converse. He has come over here to spread his joy, having "crapped up" (his words) the Nyquist–Shannon sampling theorem article in the last day or two. He has a narrow view of aliasing based on baseband reconstruction, which he is also mixing up with the notion of an alias itself. My changes are individually documented to indicate the errors in what he has done. Dicklyon 15:43, 17 August 2006 (UTC)

merge from spatial aliasing[edit]

I don't think there's a lot of point having a separate article for this that contains three sentences, all of which are already implied by the content of this article. Any reason not to just redirect it to here and remove the link from this article? JulesH 09:56, 21 November 2006 (UTC)

  • Support. Go for it. One wonders exactly what field that stub was written about, since it talks about wave arrival direction. It's actually a time/space thing, like in seismology or ocean waves or something. Dicklyon 16:16, 21 November 2006 (UTC)
Sure enough, top several GBS hits on "spatial aliasing" are in seismology and magnetotellurics books: [1] [2]. [3] Anyone feel qualified to add a bit about that? Dicklyon 20:06, 21 November 2006 (UTC)
So are you going to write it? Is there not room in this article for what you want to add? Dicklyon 04:22, 2 December 2006 (UTC)
  • Support. Loisel 17:37, 4 December 2006 (UTC)

Done. I hope you're OK with it, CB. Dicklyon 05:55, 23 January 2007 (UTC)

Ext link to Lavry's "diatribe"[edit]

The POV "ancillary comment" about that linked PDF sounds like something I may have written in my early wikipedia days; if so, it sure lasted a long time, but you're right it sure doesn't belong. Anyway, specific complaints about the Lavry PDF include:

  • Starts with the false statement "Dr. Nyquist discovered the sampling theorem".
  • It refers to the sampling theorem as a "theory".
  • It's a diatribe in the sense that "the author's motivation is to help dispel the wide spread misconceptions regarding sampling of audio at a rate of 192KHz" and yet in tons of pages he fails to make a convincing case or to support his conclusions that are basically just opinions.
  • His topic has only a tangential relationship to the sampling theorem and little relationship to the aliasing article.

So, now that the disclaimer is gone, I think I'll remove the ext link, too. Dicklyon 21:38, 22 January 2007 (UTC)

Bob K's rewrite of Sampling a periodic signal[edit]

Bob K had updated first paragraph to say:

..., the resulting samples will be indistinguishable from those of another sinusoid of frequency f_\mathrm{image}(N) = |f - Nf_s|\, for any integer N.\,   If (and only if) f_s/2 > f\,, the smallest of these "image frequencies" corresponds to the actual signal frequency: f_\mathrm{image}(0) = f\,.

Several problems here. It is ALWAYS true that f_\mathrm{image}(0) = f\,, the way he defined it, but the text can appear to be saying that f_\mathrm{image}(0)\, is the smallest; it's unclear if this was intended, or is just an incorrect reading of it, but it's confusing.
Dicklyon 19:15, 28 February 2007 (UTC)

Hmmm. You seem to have read it correctly and it was my intention, because it is in fact quite correct. Under the stated condition, i.e. f_s/2 > f,\, the minimum value of f_\mathrm{image}(N)\, is indeed f_\mathrm{image}(0)\,.   Please tell us what value of N gives a smaller result.
--Bob K 21:54, 28 February 2007 (UTC)

And if this was patched up, the "(and only if)" bit is still incorrect, unless you go to the trouble of separately excluding f at the Nyquist frequency.

No actually, we should include f = f_s/2,\, because in that case, f_\mathrm{image}(0) = f_\mathrm{image}(1) = f_s/2.\,
f_\mathrm{image}(0)\, is still the minimum.  Another "alias" is equal, but there is nothing smaller.
--Bob K 21:54, 28 February 2007 (UTC)

But it seems to me that this change is not well motivated. The way it's stated now makes the "folding" or "mirroring" property of aliasing more explicit. The folded f_s - f\, term is the usual main term that you need to care about, and the absolute value obscures more than clarifies, I think.
Dicklyon 19:15, 28 February 2007 (UTC)

Both of your "problems" are not problems after all. So before I bother to figure out what you are trying to say here, I would like to know where you now stand. I would also point out that other editors and myself were all confused by the 24-Jan version. It is not wrong (except for a nitpick), but it can be simplified and thereby clarified. That is the motivation.
The nitpick (similar to your if and only if "problem") is the statement:
"If f_s > 2f\,, the lowest of these image frequencies will be the original signal frequency"
That is true in my version, but in the 24-Jan version the image frequencies are:   "Nf_s-f\, and Nf_s+f\,, for any integer N.\,"   "Lowest" does not mean "closest to zero", so what about negative values? What about N = -\infty ?\,
--Bob K 21:54, 28 February 2007 (UTC)

The statement that is correct but got you confused anyway can certainly be rephrased. Do you prefer this for instance?:
..., the resulting samples will be indistinguishable from those of another sinusoid of frequency f_\mathrm{image}(N) = |f - Nf_s|\, for any integer N.\,   If (and only if) f_s/2 \ge f\,, the smallest of these "image frequencies" occurs at N=0.   And its value: f_\mathrm{image}(0) = f,\,  is the actual signal frequency.
--Bob K 22:23, 28 February 2007 (UTC)
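Bob K's formulation is easy to verify numerically. A sketch with illustrative values (f_s = 10, f = 3, so f_s/2 > f): every image frequency |f − N f_s| yields the same cosine samples, and the smallest image occurs at N = 0.

```python
import math

f_s = 10.0  # illustrative sampling rate
f = 3.0     # illustrative signal frequency; satisfies f_s/2 > f

def f_image(N):
    return abs(f - N * f_s)

images = {N: f_image(N) for N in range(-3, 4)}
argmin_N = min(images, key=images.get)  # smallest image frequency; expect N = 0

# With zero phase, all the image frequencies give identical cosine samples:
base = [math.cos(2 * math.pi * f * n / f_s) for n in range(20)]
max_mismatch = max(
    abs(math.cos(2 * math.pi * images[N] * n / f_s) - base[n])
    for N in images for n in range(20)
)
```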
Yes, that appears to be correct and more acceptable. I'm still not sure what's wrong with the present version, or why this is an improvement, though. I see we do have the same error at Nyquist frequency in the other one, fixable the same way. This fix is however a bit misleading, as it confuses getting the frequency right with avoiding aliasing. Hmmmm... I'll let you change it as you see fit, given that I've had my say. Dicklyon 01:06, 1 March 2007 (UTC)

In my opinion, the section on subsampling a sinusoid is really unclear and poorly done. — Preceding unsigned comment added by (talk) 21:51, 23 February 2012 (UTC)

Sections 5.1-5.6 (Function approximation theory)[edit]

In section 5.1, we have:

For the purposes of this analysis, we define a continuous-time signal as a real or complex valued function whose domain is the interval [0,1].

In section 5.6, we conclude with:

S_\mathrm{opt}g = S_0(\mathrm{sinc}*g)
where \mathrm{sinc}*g\, is the signal convolved with some sort of sinc filter or sinc function.

In case it isn't obvious, I think what we are saying is that an anti-aliasing filter (and a rectangular one at that) is a good thing to do before sampling.

But how do we confine a sinc function to the interval [0,1]? Is that why we need the "some sort of" caveat? And how do we design a "sort of sinc" filter?

Frankly, these sections don't do much for me. Why "teach" functional analysis here? The relevant points seem to be these:

  • A mapping from a higher-dimensional space to a lower one loses information that cannot be recovered.
  • Filtered sampling is more realistic than instantaneous sampling.
  • Filtered sampling is "better" than instantaneous sampling, from the viewpoint of aliasing.
    • (assumes the distortion caused by the filter does not count)
  • An anti-aliasing filter with instantaneous sampling is equivalent to filtered sampling.
  • "Some sort of sinc" filter is the optimum anti-aliasing filter.

Am I missing something? Are these unsurprising (and in some cases vague) points enough to justify section 5?

--Bob K 14:18, 10 March 2007 (UTC)

I agree. This essay on filtering and sampling is out of place in this article, and is not very logically constructed. I haven't looked around to see if such material is already covered well in sampling or some such place. I'm OK with cutting it out. Dicklyon 15:44, 10 March 2007 (UTC)

Historical usage[edit]

I think this section may be largely incorrect, or at least somewhat wide of the mark. In my recollection, and what few relevant refs I've been able to find, an alias in radio is just an image frequency. Most of the discussion about spectrum reversal seems not quite relevant. The point of aliasing is not so much that you have a choice of high-side or low-side local oscillator, but that after you've chosen you still have to fight the image or alias from the other side. Is there a book that either makes this clear or supports the current text? Dicklyon 16:58, 18 March 2007 (UTC)

This link discusses the low and high side injection and cites several books: [4]
But it does not confirm that as the origin of the term "aliasing" (if that is your concern). I don't know where that idea comes from. I first heard it right here, at Wikipedia.
--Bob K 18:48, 18 March 2007 (UTC)
Google book search doesn't find anything old enough, but does find this one. I believe this is more or less representative of what I recall, but I'd look at some old ham radio books, or Terman, or something. Unless we find a source, I'd be inclined to just remove the whole section. Dicklyon 19:19, 18 March 2007 (UTC)
Here is where it first appears:
The contributor was Jorge Stolfi.
--Bob K 22:38, 18 March 2007 (UTC)
Dick, I checked your reference, and it is more consistent with the "modern" meaning of "aliasing" than Jorge's description. That inconsistency is what led me to write the caveat: "Even when the aliases are identical there is still a fundamental difference between the modern and historical meanings. ..."
Your reference is poorly written, but if you stick with it, what it says is that a signal at f_{LO}+f_{IF}\, and at the same time (same f_{LO}\,) a different signal at f_{LO}-f_{IF}\, will both end up at f_{IF}\,. (Low-side injection for one signal is high-side injection for another.) Two different signals map to the same frequency, just like sampling causes different sinusoids to produce the same samples (before interpolation) or at least the same frequency (after interpolation).
Since that looks like a legitimate use of the word "aliasing", I think we should just fix up the description, rather than delete the whole section. I think it is interesting.
--Bob K 23:10, 18 March 2007 (UTC)
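The two-signals-one-IF point can be put in a line of arithmetic. A sketch with hypothetical LO and IF values (chosen only for illustration, not taken from the cited reference):

```python
f_LO = 100.0  # hypothetical local oscillator frequency (MHz)
f_IF = 10.7   # hypothetical intermediate frequency (MHz)

# Mixing shifts an input at f to the difference frequency |f - f_LO|.
high_side = f_LO + f_IF  # one signal, above the LO
low_side = f_LO - f_IF   # a different signal, below the LO

if_from_high = abs(high_side - f_LO)  # both inputs land on the same IF
if_from_low = abs(low_side - f_LO)
```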
I like your fix. I'd still like to see an older ref; I'm not sure this is actually the historical basis of the modern usage of the term. Dicklyon 10:33, 19 March 2007 (UTC)

Expert needed[edit]

I added the "expert needed" tag. I used to keep an eye on this page but I have stopped and there's some nonsense in there now, but I don't have time to take care of it. I hope the current editors do find an expert. Loisel 01:46, 21 June 2007 (UTC)

I am an expert in this, and on review I agree that we have allowed some not-quite-right things to creep into it. BobK is also an expert, I believe, but I think it was some of his edits that introduced some things I would quibble with. Bob, I'll try some edits and see if you agree or object. Loisel, please comment if what I'm doing misses what you had in mind. Dicklyon 04:53, 21 June 2007 (UTC)
OK, I just did a round of edits, mostly minor. If there's some major content that was removed that you think should come back, point it out. If errors remain, point them out. I suspect that with the three of us and others helping we should be able to tune this up. Of course, more references would be a big help, so feel free to call for citations wherever you think is appropriate. Dicklyon 05:38, 21 June 2007 (UTC)

Hello again. Some random comments:

Did you intend to keep both of these redundant statements?:

  • An example of image aliasing is the Moiré pattern one can observe in a poorly pixelized image of a brick wall. Techniques that avoid such poor pixelizations are called anti-aliasing.
  • Another Moiré pattern is evident in the poorly pixelized image of a brick wall (see figure). Techniques that avoid such poor pixelizations are called anti-aliasing. Nanren888 07:52, 31 August 2007 (UTC)

The sun example that you removed predates my involvement. I just fixed it up a bit. I am sorry to see it go away, but no biggie. However, removing it also removes the reason for introducing the symbol f_s\, so early in the article. Consider postponing its introduction until it is needed.

And it also removes the concept of negative frequency (like a wagon wheel going backward). So now the statement: "And the concept of negative frequency is not necessary, because there is always an identical sinusoid with a positive frequency..." just appears out of nowhere. But I would not advise removing the concept of negative frequency. I think it is useful and interesting to make the point that some phenomena, such as sun motion and wagon wheels and complex sinusoids, are directional and require signed frequency, but real-valued sinusoids do not.

--Bob K 13:02, 21 June 2007 (UTC)

Bob, thanks, those are good comments and ideas. If you think the sun example would be good to bring back, you could do that. But I personally found it hard to interpret, relating things like 1/24 - 1/25 to f - fs, which weren't initially obvious, so I figured a reader less familiar with the subject would be even more confused. I agree that rotary motion looks a lot like complex oscillation, so the concept of negative frequency can be useful, but it should probably come late in the description, not early in a first example.
I'm not sure we're getting at what Loisel was complaining about, but I hope you agree that the changes I made are mostly in the right direction; and I apologize if my implication that you caused part of the problem was off base, because I do usually appreciate your contributions to such articles. Dicklyon 18:02, 21 June 2007 (UTC)


Ok, well known, but incorrect. No way can I go with folding over negative frequency. Putting people on completely the wrong track is not better than telling the truth, even if the truth takes a little more work to get clear. I'll read up on how Wikipedia & editing works, but I added a comment to this effect today & had it removed by Oli Filth. Sorry mate, wrong. Improve it by all means, but there is no mechanism for "folding" & we should not proliferate the misunderstanding. As I said, I'll try to make time to learn enough to do the job properly, but maybe one of you guys who do this all the time could just deal to the "folding" thing & set it straight to save me the trouble. —Preceding unsigned comment added by Nanren888 (talkcontribs) 07:48, 31 August 2007 (UTC)

I'm not sure I understand what your grievance is with the use of the term "folding". Care to explain a little further? Oli Filth 09:21, 31 August 2007 (UTC)
I understand the grievance, but this is not an article about the Fourier transform, which isn't even mentioned. The heart of the matter is Nanren's correct statement: "folding would require reversal of the frequency axis & the sampling process has no mechanism for such." I.e., the term "folding" only describes the illusion, not the mathematics. Another such term is "the wagon wheel effect". Does Nanren object to that also? (In fact what about the term "aliasing" itself?)
What's actually incorrect here is Nanren's assertion: "This is, of course, completely wrong,".   No, what's completely wrong is to infer that "folding" is a mathematical description or to insist that it must be. But that claim is never made.
--Bob K 13:47, 31 August 2007 (UTC)
I don't understand the grievance at all. Sampling and modulation operations provide a mechanism for frequency differencing, such that a higher frequency at the input results in a lower frequency at the output; the frequency scale effectively folds over and runs backwards. Why is this description not acceptable? Dicklyon 16:14, 31 August 2007 (UTC)
Sorry Dick. There is no folding. (I went looking for your papers to best phrase the response, but not much help; I'm not really a maths guy, I'm an engineer.) As there is no "folding" I feel it is wrong to use this in explanation & more than that, probably right to point out to the learner that they will hear this & it is incorrect. I suspect my point comes clearer if you take real signals as a special case of complex signals. I suspect that in any case where you have a frequency reversal, you can substitute in a signal that is complex & whose spectrum is not Hermitian (conjugate symmetric), & you will likely see that those frequency components you saw mirrored are rather more like the negative ones, in the normal order, rather than the positive ones reversed. Sure, when you have only real signals, this looks like a reversed axis, as the negative spectrum looks like the positive one reversed, ignoring a phase reversal. I think my point is really that the "local symmetry" (if I understand what is intended by that expression) is created by the real signal & consequent Hermitian spectrum, not the sampling. But, I think the point is that if you think folding, then sometimes you'll get things wrong. If you straighten this out once, you'll get it right every time. I do not know the best way to explain it. I fell into this conversation by accident: I was finishing off a patent using bandpass sampling & wanted to check what others thought of the word "Aliasing". I have oftentimes tried to dispel this "folding" thing & seeing it here, thought I'd have a go. Nanren888 05:21, 1 September 2007 (UTC)
Having read the page a few times now I think there are important aspects missing. What I was looking for was does "aliasing" mean "broken" to most people? It seems it does, but I suspect a service would be done by stating that the word is used rather loosely & in more than one way. If I have two sample streams representing complex samples of a complex signal, I cannot say there is no aliasing in terms of frequency shift, nor frequency component overlap. I can say that I may recover the signal. Clearly if I treat the two streams, real & imaginary, I&Q for comms people, as separate real data streams, the spectra overlap. But I know how to put them together to create the complex signal they represent, in such a way that the right portions cancel & the correct complex form results, eg I-jQ. Whether it is "aliased" or not, in the sense of not being recoverable, retains an element of what else you know, not just the data. A little philosophical for this page, yes, but I feel the page should be true to this. I suspect we often use "alias" to mean an image at a different frequency, or band position. We likely use "aliased" more often to mean unrecoverable in some way related to consequent overlap of the spectra. For those uncomfortable with complex signals, perhaps just take a real sampled signal & separate it into two streams by taking every second sample, ababababa. Clearly, if the original sampling was sensible, each of the new streams is likely "aliased" if taken in isolation. But, we know better: we know how to recover the signal, well if we remember which band position to reconstruct into, & hence the interpolant to use anyway.Nanren888 05:21, 1 September 2007 (UTC)
If we mean imaged at other frequencies, then all digital signals are aliased, in that we normally imagine the digital samples are part of some notional continuous-time signal, produced by placing delta functions at the sample instants. The spectrum is periodic &, as they were deltas, represents images all the way up & down the infinite axis. Usually we are focused on one or two images & filter to capture or reconstruct from only those. The discussion on the theory & practicalities of this unwanted energy from other signals & images, I suspect, is where the focus should remain, but with some tweak to the fundamentals. Nanren888 05:21, 1 September 2007 (UTC)
I believe we have carefully written the lead paragraph precisely to avoid the confusions that you are speaking of: "aliasing refers to an effect that causes different continuous signals to become indistinguishable (or aliases of one another) when sampled. It also refers to the distortion or artifact that results when a signal is sampled and reconstructed as an alias of the original signal." There's nothing there about the impossibility of recovering the correct signal. But the aliasing artifact can result from a particular kind of reconstruction, such as baseband reconstruction via the Whittaker–Shannon formula. If you do that, then frequencies of real signals do fold into the band 0 to half the sample rate, as conventionally described. Your explanation above has not come closer to helping me understand why you think such a description is inadequate or incorrect. Dicklyon 05:50, 1 September 2007 (UTC)
In Sampling (signal processing) there's discussion of sampling the FM RF band at 56 MHz, in which case the baseband alias is frequency reversed. That's true, right? If you were to reconstruct as a baseband signal, the high FM channels would be at lower frequencies than the low FM channels. That's called folding, is it not? Of course, if you have complex signals with distinct positive and negative frequencies, you can treat them as jumping from one end of the two-sided signed frequency band to the other, rather than reflecting off the end of the non-negative one-sided interval. But such a description is not particularly apt for the typical case of real signals. Dicklyon 06:05, 1 September 2007 (UTC)
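The frequency reversal Dicklyon describes can be checked by computing the baseband aliases of the FM band edges (88 and 108 MHz) under 56 MHz sampling: the low band edge aliases higher than the high band edge, which is the reversal commonly called folding.

```python
f_s = 56.0  # undersampling rate in MHz, as in the example discussed

def baseband_alias(f):
    # Smallest image frequency |f - N*f_s| over integer N; lands in [0, f_s/2].
    return min(abs(f - N * f_s) for N in range(-4, 5))

lo_edge = baseband_alias(88.0)   # low edge of the FM band -> 24 MHz
hi_edge = baseband_alias(108.0)  # high edge of the FM band -> 4 MHz
# lo_edge > hi_edge: the band order is reversed in the baseband image.
```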
In Temporal aliasing, referred to by this page, Etymology: [In electrical engineering, when a continuous signal is replaced by a series of samples — say, a 24.1 Hz signal is sampled 24 times per second — the result seems the same as if a 0.1 Hz signal were sampled 24 times per second, so 0.1 Hz is said to be an "alias" of 24.1 Hz.]
This meaning does not come out clearly to me in the introduction. Also it seems a little off to use "alias" in the explanation of "aliasing".Nanren888 07:10, 1 September 2007 (UTC)
Nothing is frequency reversed by sampling or aliasing. Therefore there is no "folding". In real signals, the negative spectrum is a conjugate-symmetric image of the positive spectrum. This should not be attributed to the sampling process. The sampling process creates "aliases" in that we treat it as notionally infinitely many images at infinitely many band positions. I agree the explanation seems easier, but it is incorrect & misleading. I maintain that it is wrong to introduce an incorrect aspect in this way when correct reasoning is almost as easily accessible. There is no folding, no jumping, only images that look like shifting, & overlaying. Maybe try drawing the two-sided version of the FM figure, label the positive-frequency spectra as "P" & the negative side as "N", & create the images from sampling. I have not checked your maths, but if, as you say, the band position is such that the baseband is reversed, then it will show as an image of the "N" side that ends up in baseband & not reversed. For me the bottom line is that if you have "folding" then you need special rules for working out whether the band position leaves the normal or reversed image at baseband (or anywhere else of interest) & then you need new rules to avoid this for complex (non-conjugate-symmetric spectra) signals. The other way there is only one rule, according to the theories already provided. I'll have a think about whether I can come up with an explanation without relying on the engineering perspective of immediately leaping to Fourier. If not, I'll try to make time for a paragraph on an alternative view. Nanren888 07:10, 1 September 2007 (UTC)

I am bound to repeat some points already made above, but for what it's worth:

Consider a Fourier transform shaped like an isosceles triangle with its peak at 0 Hz and a base width (two-sided) of 12 Hz (i.e., ±6 Hz). Now sample the waveform at F_s = 10 Hz. In the region between 4 Hz and 5 Hz (F_s/2), it looks like the [5,6] region "folded" back into it. But it only looks that way because the isosceles triangle is symmetrical. If we left the right side (positive frequencies) alone and multiplied the whole left side by 99 (which would require a complex-valued waveform in the time domain), the spectrum of the sampled waveform would no longer look like the [5,6] region "folded" back into the [4,5] region. Rather, it would look like the [-6,-5] region got added directly (i.e., not in reverse order) to the [4,5] region. Nothing got "folded". And indeed, that same explanation works for the symmetrical case. It is the "right" explanation for both cases, because real-valued waveforms are just a special case of complex-valued waveforms (as Nanren said). Folding is just an illusion, generally associated with real-valued waveforms and sampling.

But the Wikipedia article does not say that a real-valued sinusoid at frequency  1.6 F_s/2\,  "folds" to frequency  0.4 F_s/2.\,  It just says there is an image (or "alias") at  0.4 F_s/2,\,  which is true, because there is the negative one at  -0.4 F_s/2,\,  and  \cos(2\pi (-0.4) F_s/2\cdot (n/F_s) + \theta) = \cos(2\pi (+0.4) F_s/2 \cdot (n/F_s) - \theta).\,  And then it states the simple fact that the common name for this symmetry is "folding". It's just a name, not physics. No doubt there are people who misuse it and/or make incorrect inferences and statements, but wouldn't it be stating the obvious to say that in the article?

--Bob K 07:36, 1 September 2007 (UTC)
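Bob K's cosine identity, and the image relationship it explains, can be confirmed numerically. In this sketch (illustrative f_s and phase), a sinusoid at 1.6 F_s/2, above the Nyquist frequency, produces exactly the samples of one at 0.4 F_s/2 with the phase negated:

```python
import math

f_s = 2.0    # illustrative sampling rate, so f_s/2 = 1
theta = 0.7  # illustrative phase

above = [math.cos(2 * math.pi * 1.6 * (f_s / 2) * (n / f_s) + theta) for n in range(16)]
alias = [math.cos(2 * math.pi * 0.4 * (f_s / 2) * (n / f_s) - theta) for n in range(16)]

# Identical samples: the image at 0.4*f_s/2 comes from the negative-frequency
# component at -0.4*f_s/2, shifted up by f_s.
max_diff = max(abs(a - b) for a, b in zip(above, alias))
```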

Ah, but whether negative-frequency components "exist" depends entirely on the way that you're analysing the signal, namely one of:
  1. We can decompose the signal onto complex-exponential basis functions. Aliasing causes the components to "wrap", i.e.: \ f \rightarrow ((f + \pi) \bmod 2\pi) - \pi.
  2. We can work with positive frequencies only, i.e. the basis functions are the sinusoids and cosinusoids themselves. Aliasing causes the components to "fold", i.e.: \ f \rightarrow \pi - |\pi - (f \bmod 2\pi)|.
I'm not sure how we can treat one as more valid than the other, and hence say that the "folding" model is purely an illusion, or that the maths doesn't support it. Sure, the "wrapping" model leads more neatly into the negative-frequency analysis of complex signals that we currently mention in the article, but it's equally possible to analyse complex signals without resorting to negative frequencies. Oli Filth 11:12, 1 September 2007 (UTC)
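The two viewpoints can be sketched side by side. The fold map is written here as π − |π − (f mod 2π)|, so that frequencies already in [0, π] are left unchanged; for any real frequency the two views then agree in the sense that fold(f) = |wrap(f)|:

```python
import math

def wrap(f):
    # Complex-exponential view: frequencies wrap into [-pi, pi).
    return ((f + math.pi) % (2 * math.pi)) - math.pi

def fold(f):
    # Positive-frequency view: frequencies fold into [0, pi].
    return math.pi - abs(math.pi - (f % (2 * math.pi)))

# For real signals, the folded frequency is the magnitude of the wrapped one:
test_freqs = [0.3, 2.0, 4.0, 5.9, 7.0]
max_err = max(abs(fold(f) - abs(wrap(f))) for f in test_freqs)
```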

The article says "That effect is known as folding." Many sources describe it that way. If our description is not as good as it should be, then it should be tuned up with respect to one or more reliable sources. If there's a source that says that folding is an incorrect or inadequate view, that should be used and cited as well. Let's get back to what this talk page is for, which is discussing the article, not discussing our own idiosyncratic views.

Dicklyon 16:38, 1 September 2007 (UTC)

Oli, saying that we don't have to resort to negative frequencies is not the same as saying they do not exist. We don't "need"  \cos(2\pi (-0.4) \cdot n/2 + \theta) \,  for anything, because it is indistinguishable from  \cos(2\pi (+0.4)  \cdot n/2 - \theta).\,  However,  \cos(2\pi (-0.4)  \cdot n/2 + \theta)\,  is a well-defined function. It does exist.

--Bob K 16:49, 1 September 2007 (UTC)
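
The indistinguishability claim is a direct consequence of the evenness of the cosine, and in fact holds even between sample instants. A quick numerical check (an added illustration, not part of the original exchange):

```python
import numpy as np

theta = 0.7                                       # arbitrary phase
n = np.arange(100)                                # sample indices
neg = np.cos(2 * np.pi * (-0.4) * n / 2 + theta)  # the "negative-frequency" form
pos = np.cos(2 * np.pi * (+0.4) * n / 2 - theta)  # its positive-frequency twin
assert np.allclose(neg, pos)                      # identical, since cos(-x) = cos(x)
```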

When I talked of "negative frequencies", I was referring to complex-exponential components (i.e. phasors), not (co)sinusoids. In such a case, a negative-frequency component is distinguishable from its positive-frequency partner. Oli Filth 23:36, 1 September 2007 (UTC)
Of course it is.  I.e.:  e^{j\omega t} \ne e^{-j\omega t}\,
Apparently, I'm just not following your line of thought. Suggest you start over with more detail, if you want to be understood.
--Bob K 01:09, 2 September 2007 (UTC)
Ok, I'll try again! When dealing with Fourier decomposition, we can take two approaches. Either, we can break a signal down into complex exponentials, or we can break it down into (co)sinusoids. In the first case, the use of negative-frequency values is mandatory (except for analytic signals); there is no way to represent the signal without them. Using this viewpoint, the appearance of "frequency-reversed" aliases during sampling can be successfully explained by the mirror-image negative-frequency components shifting up (wrapping); there is no need to talk of "folding".
In the second case, negative-frequency components do not "exist"; in the sense that our signal decomposition contains only positive-frequency components. Therefore, the appearance of the "frequency-reversed" aliases cannot be attributed to negative-frequency components shifting, as our signal decomposition does not contain any. Therefore, with this viewpoint, the only way to successfully explain the "frequency-reversal" effect is via some kind of "folding" "mechanism". Oli Filth 11:13, 2 September 2007 (UTC)
Well-typed. Thank you. In the 2nd case, folding can be viewed as an artifact of artificially constraining the space of solutions. It's like using a number system of only positive numbers (like Roman Numerals), and therefore the only square root of 4 is 2, not -2.
"In the best farce to-day we start with some absurd premise as to character or situation, but if the premises be once granted we move logically enough to the ending."
George P. Baker
So my conclusion is that case 2 starts with a false premise and reaches a false conclusion.
--Bob K 13:25, 2 September 2007 (UTC)
It seems we're rapidly descending into a discussion on semantics! I'm not sure how decomposing a signal onto (co)sinusoidal basis functions is "constraining the space of solutions", as I'm not sure what constitutes a "solution" in this sense. As far as I can see, it's still an orthogonal basis, so it's an equally valid decomposition/transform (in fact, it's really just a generalisation of the trigonometric Fourier series), and arguably, a more intuitive one to the lay reader.
Note that I'm not arguing for the removal from the article of any discussion of negative frequencies. What I'm trying to get at with all this is that whether "folding" is an "illusion" or not is entirely dependent on how one chooses to perform the signal analysis (as the signal only "contains" negative frequencies if we choose to talk about exponential frequency components). Given that this article isn't explicitly written in the context of the exponential Fourier transform, stating that "folding is an illusion", without further qualification, is somewhat opaque in my opinion. Oli Filth 14:32, 2 September 2007 (UTC)
I agree. It's not an illusion when it's a very real phenomenon that affects how radios work and things like that. When your signals are real, and reconstruct via filtering to a band, you have folding, no matter what math you use to analyze the signals. Dicklyon 15:48, 2 September 2007 (UTC)
I don't think we disagree on very much, if anything. Choosing a set of basis functions, like choosing a number system, has implications, possibly awkward ones. One can create what I will call the "illusion" that the only square root of 4 is 2, by limiting one's vision to the positive integers. Anyhow, to keep things light, this reminds me of a great joke:
"There are 10 kinds of people; those who understand binary arithmetic, and those who don't."
--Bob K 16:01, 2 September 2007 (UTC)

Can we agree on these points?:

  • Nanren has made a valid point: The term "folding" can mislead some people. It's not perfect, but that's a common flaw of words. That's why mathematics was invented.
  • The term "folding" is the common convention, whether we like it or not.
  • The only issue is whether Wikipedia should editorialize our reservations about the popular convention.

--Bob K 17:06, 1 September 2007 (UTC)

I agree with point 2. On first point, I'd like to see the evidence that someone is confused by it (other than Nanren himself). As to point 3, there's no question that it is inappropriate for us to editorialize; we can report if a reliable secondary source has editorialized about it. Dicklyon 17:12, 1 September 2007 (UTC)

Evidence of the first point is provided by [5], which states 'Some texts use the term "folding", while others mention this only as "aliasing" '. So why would a textbook avoid mentioning such a widely accepted convention? That would be irresponsible, unless the author has a principled objection to the convention. Rather than editorialize, they simply don't use the flawed convention. If that is the general behavior, then I guess this falls into the category of "can't prove a negative".

--Bob K 18:41, 1 September 2007 (UTC)

Thanks for that reference. I'm not sure I'd interpret the omission of an additional concept as evidence that it would be confusing or misleading, but I agree that we could mention that not all texts use the folding concept, referencing this book as source. This argues a bit against the idea that it's "widely accepted"; certainly no way to interpret this as those who omit it being irresponsible, nor evidence for any "principled rejection". Like many concepts, authors choose to use what they like or need in their exposition; don't read too much into it. Dicklyon 19:32, 1 September 2007 (UTC)

An example of the kind of confusion Nanren is talking about can be found in the new book[4] by renowned author Frederic J. Harris, p 34, Fig 2.27. The "remnants" pointed out in the second of 3 graphs are not mirror images of each other. They bled in from different adjacent channels. But in the third graph, they are shown symmetrically positioned around  f_s/2\,  and the lower one is referred to as a "folded remnant".

--Bob K 01:09, 2 September 2007 (UTC)

Interesting... I'll have to try to get access to a copy to see what you mean about his confusion. Dicklyon 01:55, 2 September 2007 (UTC)
Check your e-mail.
--Bob K 04:45, 2 September 2007 (UTC)
Thanks for the fair-use figure scan, Bob. You're right, he sure did get it wrong. Since his spectra are not symmetric about 0, this figure is presumably about the complex case, in which case the folding concept is inapplicable, as you've noted in the article. But you're right that this is evidence that the folding concept does sometimes mislead even smart people into getting it wrong. So I can now agree with your first point. However, it would still be WP:OR for us to report such an analysis based on this figure, so where do we go next? Dicklyon 06:23, 2 September 2007 (UTC)

Many thanks for the discussion guys. I liked all the points. Out of interest. (1) I don't think I'm confused about folding. (Maybe that's the worst kind of confusion). (2) I have not run into fred for a LONG time, but he always used to really strongly insist on his name being lower case. (3) On "So why would a textbook avoid mentioning such a widely accepted convention?", good question. Seems there are 3 options: ignore "folding" (for whatever reason), go with "folding", or acknowledge "folding" & point out the issue, eg that it leads easily to assumptions of completely the wrong mechanisms. I wanted the last one. Can you give me some advice? I like the idea of citing a reference on this topic. The aliasing topic covers a wide area, including many publications, probably none read by all users. Where should the citation be from? Nanren888 06:04, 2 September 2007 (UTC)

Yeah, I was surprised about his capitalized name, too. Maybe he gave up. Anyway, cite whatever you can find that discusses the issue. I usually search via GBS, plus whatever's on my shelves. Dicklyon 06:16, 2 September 2007 (UTC)
Looks like he hasn't given up; the book cover on Amazon clearly shows all lower-case frederic j. harris. Dicklyon 06:27, 2 September 2007 (UTC)
Yeah, I checked too, before writing. On the citation thing: I think we will only find those who use "folding" & those who don't. For the longer term answer, I am quite tempted to write something targeted to this issue, not only to cite & avoid this inability to be "editorial", but also to have a reasonable attempt to effect the same change to the aliasing vocabulary of anyone who has Fourier & complex numbers. Any thoughts on where to submit it? Nanren888 07:05, 2 September 2007 (UTC)


  4. ^ Harris, Frederic J. (2006). Multirate Signal Processing for Communication Systems. Upper Saddle River, NJ: Prentice Hall PTR. ISBN 0-13-146511-2. 

Reconstruction filtering ("postaliasing"), etc.[edit]

Does a poor reconstruction filter actually count as aliasing? No frequency components are being aliased, and it is an invertible process.

In fact, this has already been alluded to above (#Layman's terms?, #New Intro). Oli Filth(talk|contribs) 20:08, 21 April 2009 (UTC)

I think a specific example will help. So suppose the sample-rate is 1 sam/sec, and the signal has a component at f=0.4. Sampling then produces aliases at ..., -0.6, 1.4, 2.4, ...
If a perfect reconstruction filter produces just e^{j 2\pi(0.4)t},\, an imperfect one might produce e^{j 2\pi(0.4)t}+\epsilon\cdot e^{j 2\pi(-0.6)t},\, for some small value of \epsilon.\, This is an example of a low frequency (0.4) producing an alias at a higher frequency (-0.6). But it was actually produced by sampling, not by the filter. And the alias is still removable by a better filter.
--Bob K (talk) 14:19, 23 April 2009 (UTC)
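
Bob K's example is easy to verify directly. In this sketch (an added illustration), every frequency of the form 0.4 + k cycles/sec, with k an integer, fits the same set of samples taken at 1 sample/sec:

```python
import numpy as np

fs = 1.0                        # 1 sample/sec, as in the example above
n = np.arange(25)               # sample times t = n / fs = n seconds
x = np.exp(2j * np.pi * 0.4 * n)

# The aliases ..., -0.6, 1.4, 2.4, ... produce identical samples:
for f_alias in (-0.6, 1.4, 2.4):
    assert np.allclose(x, np.exp(2j * np.pi * f_alias * n))
```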
That being the case, is the stuff about reconstruction filters/"postaliasing" in the article valid? Oli Filth(talk|contribs) 16:27, 23 April 2009 (UTC)
There's a pretty good source cited for that; looks right to me. Any particular concerns? Dicklyon (talk) 16:50, 23 April 2009 (UTC)
Yes, but perhaps it's just a case of how one uses terminology. The article currently says "Aliasing can be caused either by ... the reconstruction stage"; from what I can deduce, this doesn't match Bob's explanation above ("aliasing is produced by sampling"), and flies in the face of how I interpret the concept of "aliasing" (specifically, a non-reversible process caused by sampling below Nyquist). Oli Filth(talk|contribs) 17:07, 23 April 2009 (UTC)
That's why Nils had added the clear distinction between the different, but closely related, common uses of the terminology. You say "No frequency components are being aliased" but that's a matter of how you define it; for post-aliasing, the reconstructed signal includes frequencies that are the aliases of the original frequencies; the reconstruction anti-aliasing filter tries to keep just those that you want, or that correspond to the original input (which are not necessarily consistent). Dicklyon (talk) 17:13, 23 April 2009 (UTC)
Yeah, I guess it's a matter of definition. I've always interpreted "aliasing" as "ambiguity" (i.e. irreversible), of which there is none during reconstruction. In fact, the "aliases" in reconstruction are there whether or not you choose to reconstruct, due to the inherent spectral periodicity implied by a discrete sequence; poor reconstruction just allows you to observe them. So I guess I take issue with the claim that "aliasing is caused by reconstruction", which I think is misleading. Oli Filth(talk|contribs) 21:55, 23 April 2009 (UTC)
My interpretation of the article is that the "aliasing caused by sampling" (which the article tries to call prealiasing) is simply the fact that multiple continuous signals can fit one set of samples. When a particular reconstruction technique picks the wrong signal,[1] the so-called postaliasing occurs. But when the reconstruction technique picks the right signal (such as the strictly bandlimited case and the Nyquist-Shannon Interpolation Formula is used), the prealiasing does not lead to postaliasing.
  1. ^ For instance, if the Nyquist-Shannon formula were applied to the samples of e^{j 2\pi(-0.6)t},\, it would incorrectly produce e^{j 2\pi(0.4)t}.\,
The DTFT could also be considered an example of postaliasing, because instead of producing the Fourier transform of the continuous signal that was sampled, it produces the transform and all its aliases. (Similarly, the DFT, instead of producing samples of the Fourier transform of the continuous signal that was sampled, produces samples of the transform and all its aliases.) But in the strictly bandlimited case, a better reconstruction technique than the DTFT is the product of the DTFT and a rect() function that removes all the aliases.
So prealiasing really means "potential aliasing" or "ambiguity", and postaliasing means "actual aliasing".
--Bob K (talk) 01:20, 24 April 2009 (UTC)
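
The footnote's claim that the interpolation formula picks the "wrong" signal can be checked numerically. This sketch (an added illustration, truncating the infinite sum to a large finite window) applies the Whittaker–Shannon sum to samples of e^{j 2\pi(-0.6)t} and finds that, between the samples, it reproduces the in-band alias e^{j 2\pi(0.4)t} rather than the original:

```python
import numpy as np

n = np.arange(-2000, 2001)              # sample times (fs = 1); series truncated
x = np.exp(2j * np.pi * (-0.6) * n)     # samples of the out-of-band exponential

t = 0.5                                 # evaluate midway between two samples
recon = np.sum(x * np.sinc(t - n))      # truncated Nyquist-Shannon interpolation

# The formula reconstructs the in-band alias at +0.4, not the original -0.6:
assert abs(recon - np.exp(2j * np.pi * 0.4 * t)) < 0.02
assert abs(recon - np.exp(2j * np.pi * (-0.6) * t)) > 1.0
```

(The tolerance 0.02 only accounts for truncating the slowly converging sinc series; with more terms the reconstruction approaches e^{j 2\pi(0.4)t} exactly.)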
I didn't understand the point of your rewrite based on this understanding. And it included more strange terms that appeared to be attributed to the same cited source, but which I could not find in that source. I think it's better as is, so I reverted it. Dicklyon (talk) 20:00, 25 April 2009 (UTC)
IMO, it would have been better to simply remove the new terms, and leave the merge alone. I.e., now the first two paragraphs are redundant again. I added the new terms, because they are more accurate descriptions than the sourced ones, but I'm not surprised that you would object. I am surprised however that you would restore the redundant paragraphs.
--Bob K (talk) 04:00, 26 April 2009 (UTC)
Don't be too surprised; when I'm feeling lazy I just revert and state my objection, hoping you'll do better on the next try. I wasn't really keen on changing the first meaning of aliasing, however, so we might need to haggle some more about that. Dicklyon (talk) 04:08, 26 April 2009 (UTC)
OK. Aliasing doesn't really matter until you reconstruct the wrong signal, so that's the definition people will most easily relate to. The other definition is esoteric.
--Bob K (talk) 05:31, 26 April 2009 (UTC)
I disagree. Sometimes (often) sampled signals are analyzed, but not reconstructed. If aliasing happens at the sampling time, due to the presence of aliases in the input signal, due to a lack of antialiasing filter for example, then the analysis may be confused about the properties of the signal being analyzed. It's the fact that different input signals can give the same samples that is the key problem, whether you want to reconstruct or not. Dicklyon (talk) 05:38, 26 April 2009 (UTC)
I suspect our different viewpoints hinge on whether or not one considers spectral "analysis" to be a form of reconstruction. I do, because you are trying to reconstruct the original Fourier transform, but you can't (except under special circumstances assumed by the sampling theorem). All you can construct is the DTFT or a sampled version of it (the DFT).
--Bob K (talk) 15:34, 26 April 2009 (UTC)
A digital comms receiver doesn't "reconstruct" a signal, and aliasing at the A-D converter certainly matters! Oli Filth(talk|contribs) 17:38, 26 April 2009 (UTC)
Prealiasing is unavoidable in A/D conversion, no matter how fast you sample. Prealiasing does not have anything to do with the Nyquist rate. It only has to do with the obvious fact that multiple continuous "signals" can fit any set of discrete samples. That is so obvious, it's hardly worthy of mention. Its obviousness is the reason that so many of us are shocked (and delighted) to learn that "perfect" reconstruction is ever possible (if you consider infinite time in both directions to be "possible").
--Bob K (talk) 19:22, 26 April 2009 (UTC)
But Prealiasing is avoided by using an anti-aliasing filter to assure that the space of input signals does not include any different signals that are aliases of each other; at least in theory; right? And you can do this by using a band limit and sample rate that satisfy the Nyquist constraint, right? Dicklyon (talk) 20:23, 26 April 2009 (UTC)
The first definition of aliasing in the article is "aliasing refers to an effect that causes different continuous signals to become indistinguishable (or aliases of one another) when sampled." In my example above, the prealiases at ..., -0.6, 1.4, 2.4, ... exist whether you filtered the signal before sampling or not. If you do a DTFT, you will see them. If those are not prealiases, then the article needs more work on that point.
--Bob K (talk) 22:09, 26 April 2009 (UTC)
Those are indeed the aliases. But whether you have prealiasing or not depends on whether those frequencies are within the input space of your system. It might be a good idea to find a better way to express this, as you suggest. Dicklyon 23:17, 26 April 2009 (UTC)