Talk:Quantization (signal processing)

WikiProject Physics (Rated Start-class, Low-importance)
WikiProject Professional sound production (Rated Start-class, Mid-importance)

Several comments on page

I am not an expert with Wiki markup, so I do not plan to edit the page directly. However, I have some brief comments/opinions that may be of interest to the next person who decides to edit this page:

1. I think the basic description of "quantization" could use a little bit of tweaking. The current definition is: "quantization is the process of approximating a continuous range of values (or a very large set of possible discrete values) by a relatively-small set of discrete symbols or integer values." I believe that the definition could be made more precise: "quantization is the non-reversible process of approximating a value to one of a countable set of values." Then, specify that the original value can be from a continuous and uncountable domain, and can be multi-dimensional (i.e. vector vs. scalar quantization). The problem with the current definition is that the set that is mapped to by the quantizer does not necessarily have to be "small". In fact, the set being mapped to could have infinite cardinality. The only restriction that is placed on a quantizer is that the range of values being mapped must be countable, i.e., mappable to the set of integers in some way.

2. I disagree with the floor in the scalar quantization function. It is not correct to say that scalar quantizers perform a floor.

3. Some examples will be useful for non-technical readers. I would recommend the classical "round to the nearest integer" example, as well as an example with non-uniform scalar quantization. The non-uniform quantizer doesn't have to be useful in practice, but it serves to open the minds of readers who might think a quantizer must always "round" in some regular fashion.

4. Demonstrate that quantization is non-reversible by stating a simple example: For the "rounding to the nearest integer quantizer", demonstrate that the quantizer would round 2.6, 2.8, 2.95, 3.3 all to the value of 3. But given (only) the quantized value of 3, there is no way of recovering what the original value was. This is an important issue for lossy compression.

5. There is an error within the image that has the caption of "3-bit resolution with eight levels.". You will notice that the upper-most two levels are both currently labelled with '110'. The correct label for the upper-most level should be '111'. — Preceding unsigned comment added by KorgBoy (talkcontribs) 21:28, 10 July 2014 (UTC)
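Points 3 and 4 above can be sketched in a few lines of Python; note that the reconstruction levels of the non-uniform quantizer below are invented purely for illustration:

```python
import math

def round_to_nearest_int(x):
    """Uniform quantizer: round to the nearest integer,
    breaking ties away from zero (the schoolbook rule)."""
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

# Point 4: quantization is many-to-one, hence non-reversible.
# All of these inputs collapse to the same output value, 3.
assert [round_to_nearest_int(x) for x in (2.6, 2.8, 2.95, 3.3)] == [3, 3, 3, 3]

def nonuniform_quantize(x, levels=(0.0, 0.1, 1.0, 10.0)):
    """Point 3: a deliberately impractical non-uniform quantizer
    that snaps x to the nearest of an arbitrary set of levels."""
    return min(levels, key=lambda v: abs(x - v))

print(nonuniform_quantize(0.4))  # 0.1 -- the steps need not be regular
```

Given only the output 3, none of the four inputs can be recovered, which is the lossy-compression point made in item 4.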

—Preceding unsigned comment added by 70.187.205.90 (talkcontribs) 02:37, 30 January 2006

Definition of floor function

To stress that the transition from continuous to discrete data is achieved by the floor function, it might be useful to require to be continuous. Additionally, I think

  • is the floor function, yielding the integer

is confusing, it may be better to use or instead of .

--134.109.80.239 14:50, 11 October 2006 (UTC)

I liked your suggestion, and just put it into the article. -SudoMonas 17:22, 13 October 2006 (UTC)

Incorrect statement about quantization in nature

This page incorrectly stated that at a fundamental level, all quantities in nature are quantized. This is not true. For example, the position of a particle or an atom is not quantized, and while the energy of an electron orbiting an atomic nucleus is quantized, an electron's energy in free space is not quantized. I have changed the word "all" to "some" in the text to correct the false statement, but a better revision could be made.

71.242.70.246 18:03, 12 May 2007 (UTC)

Agree. I find that the whole section is unrelated to quantisation in signal processing, and it hasn't been edited in years. I've decided to delete the whole section. C xong (talk) 04:06, 31 March 2010 (UTC)

pi and e

" For example we can design a quantizer such that it represents a signal with a single bit (just two levels) such that, one level is "pi=3,14..." (say encoded with a 1) and the other level is "e=2.7183..." ( say encoded with a 0), as we can see, the quantized values of the signal take on infinite precision, irrational numbers. But there are only two levels. "

How would you build, test and prove that?

How would you measure the "pi" and "e" levels?


P. Petrov —Preceding unsigned comment added by 78.90.230.235 (talk) 18:59, 20 March 2010 (UTC)

The example is poorly written, but its premise is correct. Infinite precision is possible only in theory, so it cannot be tested in practice. The example could be better worded. C xong (talk) 04:09, 31 March 2010 (UTC)

OK I admit that the example is poorly stated :) The reason for such an example is the conditioning I have seen in the prior editors, who hold an opinion biased towards "a quantizer should have integer (like 1,2,3) or at least rational, fraction-wise (like 0.25, 0.50, 0.75) output values." This illusory bias is a natural result of using computers for digital signal processing and inputting analog signals with soundcards for practical applications of quantizers within the ADCs of such devices. It is true that a computer needs fractional numbers that can be exactly represented within finite N bits of binary resolution (either in integer format or in floating-point formats). BUT a quantizer is something else. Its main function is to map a range of uncountable/countable things into a set of countable ones, with a much smaller number in the case of countable-to-countable mapping. Whether those things have numerical values or not is a secondary issue, and even whether those numerical values are integer, real or rational is completely irrelevant from a quantizer's point of view. —Preceding unsigned comment added by 88.226.19.210 (talk) 14:55, 2 April 2011 (UTC)

Page is Mature Enough?

I did my best to bring the premature page of Quantization (signal processing) to an acceptable state. Now it has all the necessary definitions and mathematical explanations. It still lacks a lot, though. For example:
  • good graphs for quantizer I/O maps (I cannot add them, since I am not a member)
  • graphs for companding functions
  • a few numerical examples
  • adaptive quantization details
  • further decoration of the topics
  • application examples

I might add more in the future; NEVERTHELESS, this page is acceptable now. We may get rid of the banner on top. —Preceding unsigned comment added by 88.226.92.114 (talk) 19:49, 6 April 2011 (UTC)

Maybe there is a new banner, but right now it says the article needs additional citations, and I think that is true. Constant314 (talk) 22:25, 6 April 2011 (UTC)

This article has substantial problems. I am not even convinced that the edits over the last month or so have been improvements.

  • The article says it is a summary of what is in some book (Introduction to Data Compression, K. Sayood, M. Kaufmann). That does not seem proper for Wikipedia and I don't think that is actually true, based on the edit history (although I don't have a copy of that book to be able to say for sure).
  • It has a substantial number of grammatical and formatting problems and spelling errors.
  • It should be about quantization in general, but it now seems to be exclusively about scalar quantization.
  • It isn't quite correct in various places.
  • It no longer even contains a definition of what quantization is.

SudoMonas (talk) 17:03, 8 April 2011 (UTC)

So you mean that, according to your standards, the previous stage was better; then I will revert my edits.

=> I tried, but it is too tiring to revert; you should do it, back to 15 February 2011, the date before I began my edits. —Preceding unsigned comment added by 88.224.91.218 (talk) 21:33, 8 April 2011 (UTC)

Looking back, I don't really think that the shortcomings that I see in the article are your fault, and you made your edits in good faith, so I will not revert them. I think the article wasn't very good back in February either. I guess I should stop complaining and just try to help contribute to make the article better. —SudoMonas (talk) 01:17, 9 April 2011 (UTC)

I like what you have done with the introduction. Constant314 (talk) 22:57, 9 April 2011 (UTC)
While you are at it, you may want to eliminate the use of first person plural in favor of third person. Example: "After defining these two performance metrics for the quantizer, we can express a typical Rate–Distortion formulation for a quantizer design problem in one of two ways:" could be rewritten as "After defining these two performance metrics for the quantizer, a typical Rate–Distortion formulation for a quantizer design problem can be expressed in one of two ways:" Constant314 (talk) 13:43, 12 April 2011 (UTC)


The new state of the page

The following represents my personal point of view.

OK, now after several enhancements, corrections, modifications and additions, the page seems no better? :)) Why so?

1- The whole page seems not to relate to the main topic (Quantization - "Signal Processing"?). It seems mainly about quantization in "mathematics", "communication systems" and "source coding" (data compression); however, the sole purpose of quantization for signal processing is simply the representation of analog signals by digital ones. This point is hardly discussed. That is a practical point of view (ADC/DAC, data acquisition, instrumentation), and I think this point must be stressed. There are many considerations: input signal conditioning, clipping, distortions, AGC, dynamic range modifications, loading factor calculations, independent noise assumptions, SQNRdB calculations, input types and their effects on the resulting signal fidelity...

2- Modifications don't work: the previous stage was based on my very personal style. I like personal writing :). It doesn't fit wiki, but your (SudoMonas) modifications are now too constrained and limited by those previous things, and that creates frequent style mismatches which make it difficult to read. Let's consider writing this page from "scratch" :)), so that it at least becomes consistent in terminology and style.

3- It seems boring without actual examples, applications and figures.

4- And it is quite long now.

Now, a few suggestions.

1- All that Rate-Distortion based mathematical material should either be omitted or moved to a proper place. Analysis and design of a quantizer are better treated separately from its definitions, types, usages and properties.

2- Shorter is better!: in various places, too-lengthy explanations pervade (some of them belonging to me). Even the first few sentences are unnecessarily (almost redundantly, like this one) long. What is wrong with saying => "Quantization is the process of mapping a large set of input values to a much smaller set"? Concise, compact; and if any ambiguity arises (definitely it does), it can always be expanded and clarified in what follows, instead of inside a single sentence.

3- Quantization in Signal Processing, Mathematics, Communications and Source Coding has quite different purposes/types of usage. Therefore they are better treated separately.

in Signal Processing => ADC/DAC characterizations, binary data representation formats, rounding, rounding in matlab/C, rounding in IEEE floating point formats, input signal conditioning, the independent quantization noise assumption and its effects on outputs, spectral noise shaping via noise feedback applications, input loading factors, quantizer resolution with respect to bit size, relations to sampling rate. It is very natural to consider quantization together with sampling here.

in Communications => telephone lines, PCM, DPCM, ADPCM, Delta Modulation, non-uniform Max-Lloyd and adaptive quantizers, A-law and μ-law companders, the ones employed in codecs like the ITU-T G.723, G.726, G.722 standards.

in Source Coding => rate-distortion based encoder-decoder design, vector quantization, psychoacoustic/psychovisual facts for shaping the design; the quantizers used in JPEG, MPEG audio, H.263/4 would provide some nice examples. —Preceding unsigned comment added by 88.226.198.117 (talk) 23:52, 13 April 2011 (UTC)

I think it is somewhat better. My perception is that it jumps into specialized math too quickly. My thought is that roughly the first half ought to be descriptive and qualitative, with simple examples and only a few simple equations, and should only be about uniform step-size quantization. Then the second half could have all that math: first anything to do with the uniform quantizer, then the others. Constant314 (talk) 18:13, 14 April 2011 (UTC)
Upon further reflection, I think this article should deal only with uniform quantization and the other types moved to their own pages. Constant314 (talk) 18:24, 14 April 2011 (UTC)
I just noticed these comments – some further edits have been done since those comments were made. I just included the suggestion regarding the simplification of the first sentence. As you have probably seen, I have just started at the beginning and have been trying to improve what I saw from paragraph-to-paragraph as I moved forward. It's true that this is an incremental approach. I haven't yet gotten to the later sections or really attempted any significant restructuring or added substantial new topics. I had planned to get to some of that, but hadn't yet had time. The rate-distortion and Lloyd-Max material was already there – I have only refined them. I certainly think that the article has been getting substantially more correct and that there has been some improvement in the logical flow, consistency, notation, and referencing. In my opinion, quantization for source coding and communication are within the scope of signal processing. Of course, I have been the one doing the recent edits, so I may not be perfectly objective about them. –SudoMonas (talk) 19:12, 14 April 2011 (UTC)
I think you are making improvements. I don't know how this article got to where it was. It looked like two guys who knew a lot about the subject were in a contest to see who could add the most stuff. Regarding "quantization for source coding and communication are within the scope of signal processing", I agree, but that is no reason why they could not have their own pages with a link in this page. Constant314 (talk) 21:46, 14 April 2011 (UTC)

1- Well, first of all, there are certainly improvements: at the very beginning, once upon a time, quantization was described almost like rounding to integer. Now it is definitely better.

2- The fundamental problem results from the fact that while doing my edits, I thought it would be a good idea to start from the most general, rate-distortion based case and move on to the specific cases as special examples (a rather logical, axiomatic approach). Now I think that is not good. It seems better to go, as Constant314 points out, from the simpler uniform quantizer to the more general cases. For me it is definitely better in its current state, from general theory to specific examples, but I guess most people visiting this page have no idea about either entropy or rate-distortion theory, and for those people (the majority) it is difficult to read in this fashion.

3- There are no different quantizers for signal processing, communication or source coding. However, the application target, and hence the constraints, may be radically different. For example, dithering has no meaning in source coding, while it is a useful tool for image/audio post-processing. For most DSP applications, due to practical CPU architectures, FLC is used; that is the natural machine arithmetic and machine word size. It would be difficult to use entropy techniques there. As all these are different application constraints on the same general problem, I assume treating them separately would be better. By the way, my edits were geared towards source coding and scalar quantization in particular. SudoMonas seems to have a vector quantization (VQ) basis. That "classification of input" argument, instead of simply calling them decision intervals, has very little meaning and significance for a scalar quantizer, although it is understandable for pattern recognition or vector quantization. I strongly suggest avoiding a mixture of VQ and SQ. It would be much better to treat VQ in a separate, brand new and free page.

4- Since quantization is a vast subject, there is no last word on it. Anybody who knows about it would like to add an extra paragraph of his own: expanding some vague, overly compressed definitions, giving a more unambiguous description, adding a new point of view or some application examples... And that would make this page too long. I guess only the necessary and sufficient explanations should be included.

5- Finally, I am not in a contest, as suggested by Constant314. I am not putting in anything new. Possibly I won't either. I wish good luck to the remaining editors.

—Preceding unsigned comment added by 88.224.26.202 (talk) 12:20, 15 April 2011 (UTC) 
Re your #5: Sorry, I didn't mean that you were in a contest. I think the article was in that condition before you started working on it.
Re your #3: Your approach of general to specific would appeal to mathematicians, which few readers are. Constant314 (talk) 13:25, 15 April 2011 (UTC)

Since these further comments, I have done various things to try to simplify the presentation. I have tried to restrict the introduction section to basic ideas and applications without getting into detailed equations. I have also moved more of the simpler uniform quantization discussion up before the discussion of rate-distortion optimization. (I agree that the axiomatic approach was a bit too tough for most readers.) I have substantially condensed and simplified much of the material after the rate-distortion and Lloyd-Max sections and removed some of the unreferenced material that seemed confusingly written, overly mathematical, and in some cases not especially noteworthy. There is already a separate article on VQ, and it is linked near the beginning of the article. I am becoming reasonably satisfied with the article, although I do still plan some further refinements. —SudoMonas (talk) 01:23, 20 April 2011 (UTC)

I'd like to see the Quantization Noise subsection reinstated. People do sometimes analyze quantization error as noise; sometimes that is OK, and sometimes it yields wrong answers. Constant314 (talk) 17:07, 21 April 2011 (UTC)
Excellent suggestion – although I think that the material that was previously in the article on that subject was not such a good presentation of the subject. If someone else doesn't do it, I'll add some discussion of that topic soon. —SudoMonas (talk) 21:58, 21 April 2011 (UTC)
You are doing fine. I would suggest that you use PDF instead of pdf and that you write it out fully at least the first time in every section. Constant314 (talk) 17:38, 22 April 2011 (UTC)
Thanks. I just inserted a section about the additive noise model. Regarding pdf, I put some changes in the article to improve that aspect, although not exactly as suggested. According to the PDF (disambiguation)#In science and probability density function pages (and my personal experience), the usual abbreviation uses lowercase letters. To me (and I think to most people), PDF refers to the file format, and that assumption is reflected in the Wikilink redirect on the PDF page. In the article modification, I defined the abbreviation in parentheses in the first place where it is used in the article and put Wikilinks in the first use in each other section. In some places, defining the term in parentheses might mix with math formulas that immediately follow the term. —SudoMonas (talk) 20:22, 22 April 2011 (UTC)
LOL, I have just the opposite reaction: I think pdf is a file type and PDF is an acronym. Constant314 (talk) 16:01, 23 April 2011 (UTC)
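The point above, that modeling quantization error as additive noise is sometimes fine and sometimes misleading, can be illustrated numerically; the signal and step sizes below are arbitrary. With a fine step the error is nearly uncorrelated with the signal, but with a grossly coarse step (larger than the whole signal range) the "noise" is just the negated signal itself:

```python
import math

def quantize(x, step):
    """Uniform mid-tread quantizer with the given step size."""
    return step * round(x / step)

def corr(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

signal = [math.sin(2 * math.pi * 3 * i / 1000) for i in range(1000)]
fine_err = [quantize(s, 0.001) - s for s in signal]   # step << signal range
coarse_err = [quantize(s, 3.0) - s for s in signal]   # step > signal range

print(abs(corr(signal, fine_err)))    # near 0: error looks noise-like
print(abs(corr(signal, coarse_err)))  # ~1.0: error is -signal, not noise at all
```

In the coarse case the independent-additive-noise assumption gives wrong answers, since the error is a deterministic function of the input.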

Mid rise, Mid tread, mu-law, A-law

I cannot find a reference right now, but my recollection is that mu-law was mid-rise and A-law was mid-tread, which means the slightest noise causes the mu-law device to toggle between two states while the A-law device does not. Thus, a mu-law circuit transmits noise where an A-law circuit would not. This got to be a marketing issue over who had the quietest network. Manufacturers started adding a half-a-bit bias to the mu-law encoders to get a circuit so quiet that "you could hear a pin drop". Anyway, you may want to work mid-rise and mid-tread into the section on mu-law and A-law, or maybe work A-law and mu-law into the mid-rise, mid-tread section. Constant314 (talk) 16:14, 23 April 2011 (UTC)
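The toggling effect described above can be sketched as follows (step size arbitrary; this illustrates the mid-rise/mid-tread distinction itself, not the actual mu-law/A-law companding curves):

```python
import math

def mid_tread(x, step=1.0):
    """Mid-tread quantizer: zero is a reconstruction level,
    so inputs smaller than half a step all map to 0."""
    return step * math.floor(x / step + 0.5)

def mid_rise(x, step=1.0):
    """Mid-rise quantizer: zero is a decision threshold, so the
    output toggles between -step/2 and +step/2 around zero."""
    return step * (math.floor(x / step) + 0.5)

faint_noise = [1e-4, -1e-4, 2e-4, -3e-4]    # an idle channel's noise floor
print([mid_tread(n) for n in faint_noise])  # [0.0, 0.0, 0.0, 0.0] -> silence
print([mid_rise(n) for n in faint_noise])   # [0.5, -0.5, 0.5, -0.5] -> chatter
```

The mid-tread quantizer swallows the idle noise, while the mid-rise quantizer turns it into a full-step square wave at the output.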

Clipping

BarrelProof asserts that Clipping (signal processing) needs to be mentioned as a source of quantization. Hopefully he'll explain why here. ~KvnG 19:11, 3 December 2013 (UTC)

To be more precise, I assert that clipping should be mentioned as a source of quantization error, not as a source of quantization itself. The sentence in question concerns the sources of error in analog-to-digital conversion (in the lead section of the article). In practice, there can be several sources of error in practical analog-to-digital converters, including such sources as analog circuitry nonlinearity, analog noise, etc., but we can neglect most of those in an idealized model. The term "analog-to-digital conversion" generally refers to the application of uniform quantization with a finite number of levels (e.g., using a 10 bit or 12 bit a/d converter, thus having 1024 or 4096 distinct representable values). In such an operation, there are basically two sources of error – granular distortion and clipping distortion (where clipping distortion is also known as "overload distortion"). Both kinds of distortion are discussed in sections in the article, and I don't understand why neglecting one of them in the lead section would be desirable. See, for example, Quantization (signal processing)#Granular distortion and overload distortion. Clipping is something that definitely does occur in practice. If clipping distortion was not a concern, one could just amplify the gain at the input and thus drive the granular distortion to zero and there would be no distortion. Clipping/overload can be a major source of the error introduced by a quantizer. —BarrelProof (talk) 20:14, 3 December 2013 (UTC)
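A minimal sketch of the two error sources described above, using a made-up 3-bit (eight-level) mid-rise converter; all numeric values are illustrative:

```python
import math

def three_bit_adc(x, step=0.25):
    """Uniform mid-rise quantizer with eight levels (3 bits).
    Inputs beyond the outermost decision thresholds saturate,
    which is the overload (clipping) region."""
    i = math.floor(x / step)
    i = max(-4, min(3, i))   # clamp the cell index to the 8 available cells
    return step * (i + 0.5)

granular = 0.6   # inside the covered range
overload = 5.0   # far outside: clipped to the top level

print(three_bit_adc(granular), three_bit_adc(granular) - granular)  # 0.625, small error
print(three_bit_adc(overload), three_bit_adc(overload) - overload)  # 0.875, large error
```

Both differences are quantization error in the sense used above (q(x) − x); the second is dominated by overload and grows without bound as the input moves further outside the covered range.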
The statement we're discussing is, "The difference between the actual analog value and quantized digital value is called quantization error or quantization distortion. This error is either due to rounding, truncation or clipping." The section you link to talks about overload distortion being caused by clipping. I can go along with that. Your proposed edit makes the claim that clipping causes quantization error or quantization distortion. I can't go along with that. ~KvnG 21:11, 3 December 2013 (UTC)
Actually, that wasn't the exact wording, but I suppose that doesn't matter for purposes of this discussion. My definition of "quantization error" is that it is any error introduced by a quantizer – i.e., any error introduced by conversion of a continuous-domain input signal (or an uncountable input domain or a countable input domain with a larger set of countable values) to a countable output representation. Do you disagree with that definition? —BarrelProof (talk) 21:26, 3 December 2013 (UTC)
Sorry if I misquoted your proposal. I did include the link to the diffs for those who want to go to the horse's mouth.
I believe quantization error, distortion or noise refers only to the error between steps. Here are some refs which corroborate. None of these discuss an overload element and I didn't find any that did (I looked): [1], [2], [3], [4], [5], [6], [7], [8]
Do you have a citation for a definition which includes overload? ~KvnG 21:53, 3 December 2013 (UTC)
Here is a classic one that is already cited in the article: The paper by Joel Max, "Quantizing for Minimum Distortion" (1960). It says "The difference between input and output signals, assuming errorless transmission of the digits, is the quantization error. ... one has to use a quantizer which sorts the input into a finite number of ranges, N." He then computes the mean-square quantization error by performing an integration of the pdf over the full range of the input signal from minus infinity to infinity (just above equation 1), while keeping the number of reconstruction values N as a finite constant. Since N is constant and finite, the (infinite-extent) integration range includes the error introduced both by granularity and overload. Does that suffice?
To me, it seems rather self-evident that "quantization error" or "quantization distortion" should be interpreted as referring to (all of) the error/distortion introduced by quantization – which should include all sources of such error (both granularity and overload). While some authors may provide simplified presentations that neglect to discuss overload, and while there may not be any overload distortion in some applications (e.g., if the signal has a known finite input range and the quantizer gain is set to cover that entire range), when there is overload in the quantization operation, the error introduced by the overload is part of the quantization error/distortion. If you want to refer to only the granular element of the error, then the appropriate term is "granularity error", but the "quantization error" properly/generally should include all error induced by the quantization operation.
The sources that you cited seem to generally not even consider the topic of clipping/overload distortion. They seem to mostly be less scholarly, simplified discussions of the topic. Here's an alternative challenge: Can you find any sources that actually include a discussion of clipping/overload in any significant detail and do not include it within the scope of their definition of "quantization error" or "quantization distortion"?
BarrelProof (talk) 22:17, 3 December 2013 (UTC)
I took a quick look at the sources at Clipping (signal processing) and Clipping (audio) and didn't find what you're looking for. I don't have access to the paper you cite above. Hopefully another editor will join the conversation and help get us unstuck. ~KvnG 23:19, 3 December 2013 (UTC)
The detailed scholarly survey paper "Quantization", by Gray and Neuhoff (1998), is the most extensively cited paper in the article (cited in eight <ref> tags). Its first paragraph defines "quantization error e = q(x) − x", where q(x) (which is defined in Equation #1) is the quantization function that maps input values to output representations. Figure #1 shows the input signal range of x covering from minus infinity to infinity. In the second paragraph they discuss the "granular region" of the quantizer and the "overload or saturation region" (italics in the source) and they say that this outer region is "where the quantizer error is unbounded". There is lots of further analysis of overload in the paper, but that much should be sufficient to make it very clear that their definition includes the distortion in the overload region as part of the quantization distortion (which is minimized in various ways by techniques described in the paper). When deriving the usual approximation for the mean-square quantization error associated with a uniform scalar quantizer with a step size of Δ (at the top of page 2344), they say that this approximation is for "when overload distortion can be ignored". —BarrelProof (talk) 00:29, 4 December 2013 (UTC)
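The Δ²/12 approximation for the mean-square error (valid when overload can be ignored) is easy to check numerically; the step size and grid density below are arbitrary:

```python
import math

def uniform_quantize(x, step):
    """Round x to the nearest multiple of `step`. The level set is
    unbounded, so there is only granular error, no overload."""
    return step * math.floor(x / step + 0.5)

step = 0.1
n = 100000
# Inputs spread evenly over many quantizer cells:
xs = [-1.0 + (i + 0.5) * (2.0 / n) for i in range(n)]
mse = sum((uniform_quantize(x, step) - x) ** 2 for x in xs) / n

print(mse, step ** 2 / 12)  # both approximately 8.33e-04
```

With no overload the error is confined to ±Δ/2 and is roughly uniformly distributed, which is exactly where the Δ²/12 figure comes from.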
I also just looked at another classic paper that's referenced in the article – "Quantization" by Gersho (1977). It is very similar to the other two in that regard. To save myself some time, I won't bother to quote exact text to prove that in detail, although it would be pretty easy. As one example, I refer to its Figure 4, which is a diagram of quantization error as a function of signal value. At the left and right extremes of the input signal range, it shows what is clearly clipping error effects. That paper also contains extensive discussion of granular error and overload error (and discussion of total quantization noise as being a combination of the two effects). —BarrelProof (talk) 01:47, 4 December 2013 (UTC)

Figure illustrating sampling

I find the top figure of the article showing quantization quite confusing. First of all, it seems to show the entire chain of analog-to-digital and digital-to-analog conversion. The graph is interesting but stands in conflict with the typical illustrations in textbooks, like, e.g., the plot titled "Original and Quantized Signal". It is furthermore not clear what sampling scheme for analog-to-digital conversion and what interpolation scheme for digital-to-analog conversion have been used. I would suggest removing the figure or moving it further down the page with an appropriate description. Sascha.spors (talk) 15:15, 16 January 2015 (UTC)

I am going to guess that you are more comfortable with the stair-step representations in the figures further down. The problem with those is that they do not take realistic signal reconstruction into account and so don't give an accurate representation of the error induced. One thing that is making things difficult here is that neither the caption nor the legend indicates that the black dots are the quantized signal. I have updated the caption to try and help with this. ~KvnG 14:45, 19 January 2015 (UTC)
I completely agree with Sascha on this. Focusing on the first figure, it seems to be primarily a depiction of periodic sampling, not quantization. Upon very close and careful inspection, there is quantization evident in the amplitudes illustrated in that figure, but that is not something anyone would ordinarily notice without staring at the figure for a very long time. If we want to illustrate (scalar) quantization, one axis should show the (continuous-domain) input value to a quantizer and the other axis should show the corresponding quantized output value – i.e., we should have a figure that looks roughly like a staircase with a constant rise-to-run ratio – like the first figure in the linked article by Widrow. This does not refer to figures of the sort referred to above as "stair-step representations". In all five of the figures in the current article, the horizontal axis seems to be showing time or frequency, which are basically irrelevant to explaining the concept of quantization. Quantization is something that is done to all sorts of numerical values. An illustration of two-dimensional VQ would also be nice to include in the article. —BarrelProof (talk) 18:51, 19 January 2015 (UTC)
The title is Quantization (signal processing), so I don't see anything wrong with illustrations of "signals" (i.e. amplitude vs time). The new caption is very good. And of course the full-sized picture is better than the thumbnail pic.
--Bob K (talk) 13:19, 21 January 2015 (UTC)
"Signals" are not, in general, restricted to amplitude versus time. An obvious case is a photographic image. Image processing is certainly signal processing, but the set of digital color samples that represents a photograph has no time domain. Information in a transformed domain is also a "signal". See, for example, the article Moura, J.M.F. (2009). "What is signal processing?, President's Message". IEEE Signal Processing Magazine. 26 (6). doi:10.1109/MSP.2009.934636.  , which explicitly says that the assumption that a signal needs to vary with time (or space) is only an assumption that applied "ages ago". —BarrelProof (talk) 20:41, 22 January 2015 (UTC)
So changing "time" to "x" would solve your problem? Is that what we're really talking about?
--Bob K (talk) 19:47, 23 January 2015 (UTC)
No. What I think we're talking about is the desirability of adding a figure illustrating the input-output function for a scalar quantizer, roughly like the five figures specifically identified as examples below. —BarrelProof (talk) 20:09, 23 January 2015 (UTC)
"...stands in conflict with the typical illustrations..." What do you mean by this exactly? You think a different type of graph is more appropriate or there is something incorrect about the figure? Radiodef (talk) 20:05, 22 January 2015 (UTC)
I believe the "typical illustration" this refers to is one roughly like the first figure in the 1961 Widrow article that is cited with a PDF link in the article. Such figures are found in many publications about quantization – I only mention that one because it is so easily accessible. Figures 1 and 2 in the cited 1998 article "Quantization" by Gray and Neuhoff are similar. Also Figures 1 and 2 in the cited 1977 article "Quantization" by Gersho. The accompanying quantization error (figure 4) in the Gersho article is also nice. —BarrelProof (talk) 20:41, 22 January 2015 (UTC)
I think I understand the concern now. An input vs. output transfer function diagram as you suggest would be interesting to try. If readers are able to get their heads around it, it is the most direct way to represent quantization. Plots like this are being used successfully in Dynamic range compression. ~KvnG 16:12, 27 January 2015 (UTC)
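The staircase input-output relationship being discussed can be sketched in a few lines (a minimal Python sketch; the function name and step size are my own illustrative choices, not from any cited source):

```python
import numpy as np

def midtread_quantize(x, step=1.0):
    """Uniform mid-tread scalar quantizer: snaps x to the nearest multiple
    of `step`.  Plotting the output against the input gives a staircase
    with a constant rise-to-run ratio.  (Note: np.round breaks ties toward
    even, which only matters at exact half-step inputs.)"""
    return step * np.round(np.asarray(x, dtype=float) / step)

# Sweep a continuous input range; the output takes only discrete levels.
x = np.linspace(-3.0, 3.0, 601)
y = midtread_quantize(x, step=1.0)
```

Plotting y against x (e.g., with matplotlib) gives a staircase-style figure of the kind found as the first figure of the Widrow article mentioned above, with the horizontal axis as quantizer input rather than time.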

Rounding example for half-step values[edit]

In the basic rounding example describing simple (mid-tread) rounding as an example of quantization, someone changed it to use the tie-breaking rule of always rounding upwards (towards positive infinity) for half-step input values. My impression is that this is not the usual definition of rounding that most people are familiar with. I reverted the change, saying "In the usual definition of rounding, the value −0.5 should be rounded to −1, not 0." My revert was then reverted by someone saying "The formula was correct. +0.5 and -0.5 should both round in the same direction (positive for example), otherwise there is a statistical bias away from zero." While it is true that rounding away from zero creates a statistical bias away from zero, I believe it is the most appropriate example to use here, for several reasons:

  • It is the form of rounding most commonly taught and used in practice (e.g., by schoolchildren and accountants). The general public is not familiar with other rounding rules.
  • It is what is typically built into software (such as Microsoft Excel and other general-purpose software).
  • As noted in the Rounding article, "This method treats positive and negative values symmetrically, and therefore is free of overall bias if the original numbers are positive or negative with equal probability." Typical pdfs such as the Gaussian (a.k.a. Normal) and Laplacian pdfs have that property.
  • Also as noted in the Rounding article, "It is often used for currency conversions and price roundings (when the amount is first converted into the smallest significant subdivision of the currency, such as cents of a euro) as it is easy to explain by just considering the first fractional digit, independently of supplementary precision digits or sign of the amount (for strict equivalence between the paying and recipient of the amount)."
  • It seems generally desirable for a mid-tread quantizer to have symmetric behavior around zero.

Having an overall upward bias toward infinity seems no better than having an outward bias away from zero – at least an outward bias will average to zero for symmetric input. Both types of rounding have some disadvantages, but there is nothing "incorrect" about rounding away from zero, and that's the type of rounding most people are familiar with. —BarrelProof (talk) 20:05, 4 December 2015 (UTC)
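For concreteness, the two tie-breaking rules under discussion differ only at exact half-step inputs; a minimal Python sketch (function names are mine, purely illustrative):

```python
import math

def round_half_away_from_zero(x):
    """The familiar 'schoolbook' rule: ties go outward from zero,
    so -0.5 -> -1 and +0.5 -> +1.  Symmetric about zero."""
    return math.copysign(math.floor(abs(x) + 0.5), x)

def round_half_up(x):
    """Ties always go toward positive infinity,
    so -0.5 -> 0 and +0.5 -> +1.  Not symmetric about zero."""
    return math.floor(x + 0.5)

# The two rules agree for every input except exact half-steps:
# round_half_away_from_zero(-0.5) -> -1.0, but round_half_up(-0.5) -> 0
```

Since the rules coincide everywhere except at multiples of one half, the disagreement is really only about which bias (upward toward infinity, or outward away from zero) the example in the article should exhibit.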

You make good points. This article is about quantization rather than rounding. Always rounding positive creates a positive bias which is often unimportant or easily removed (by capacitive coupling in electronics). Biasing away from zero creates a non-linearity near zero which is more difficult to deal with. In the quantization of analog signals, zero is not a special number and so you want to avoid biasing away from it. Constant314 (talk) 20:27, 4 December 2015 (UTC)
Upon further reflection, it might be beneficial to include both formulas and an explanation of the difference. Constant314 (talk) 20:37, 4 December 2015 (UTC)
It might, but if we're trying to provide a familiar example, the previously described form of rounding is the most common and familiar, so I think it is the most important one to include. —BarrelProof (talk) 21:33, 6 December 2015 (UTC)
The article is about quantization of signals (whether it's A/D conversion or bit depth reduction). A/D conversion has enough slop that you could never tell exactly where the "tie-breaking" point is. Some DSPs have "convergent rounding", in which they round to the nearest even value if the input is precisely midway between quantization levels. Sometimes we just lop off the bits on the right, which is the floor function. In no case anywhere does any bit depth reduction round negative values in the opposite direction from positive values. This language with sgn() and abs value is confusing and superfluous and does not belong in the article at all. 173.48.62.104 (talk) 18:23, 7 December 2015 (UTC)
I mostly agree that for signal processing, rounding is toward positive. I'm not sure that it never happens otherwise, but the formula showing always rounding positive is the most appropriate for this article. Constant314 (talk) 18:56, 7 December 2015 (UTC)
This may depend somewhat on whether you consider things like data compression (e.g., image, video, and audio compression) to be signal processing. I do. JPEG and JPEG 2000 encoders, for example, typically use symmetric rounding around 0. Some of what is in the article (e.g., Rate–distortion quantizer design and Lloyd–Max quantization) is about the principles of quantization for compression purposes. Many of the cited sources are academic papers that discuss usage for compression applications. —BarrelProof (talk) 21:49, 7 December 2015 (UTC)
I have no opposition to having both formulas. There is plenty of room. Constant314 (talk) 23:30, 7 December 2015 (UTC)
OK, as you have probably noticed, I just expanded the dead-zone discussion to cover the symmetric case, and included consideration of arbitrary dead-zone widths. —BarrelProof (talk) 05:36, 8 December 2015 (UTC)
Interesting. Is there a more important use than noise gate or squelch? Constant314 (talk) 12:55, 8 December 2015 (UTC)
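To make the dead-zone discussion concrete, here is a minimal sketch of a symmetric dead-zone quantizer, assuming a step size `step` and a dead zone of width `2*step` centered on zero (one common configuration in compression codecs); the names and the midpoint reconstruction rule are illustrative choices, not from the article:

```python
import math

def deadzone_index(x, step=1.0):
    """Classification stage: any input with |x| < step maps to index 0,
    so there is a 'dead zone' of width 2*step centered on zero."""
    return int(math.copysign(math.floor(abs(x) / step), x))

def deadzone_reconstruct(k, step=1.0):
    """Reconstruction stage: index 0 maps back to exactly 0.0; other
    indices map to the midpoint of their interval (one common choice)."""
    if k == 0:
        return 0.0
    return math.copysign((abs(k) + 0.5) * step, k)
```

This is the quantizer shape that makes a noise gate or squelch behavior fall out naturally: small values are forced to exactly zero, while larger values quantize uniformly.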
Just for the record, "always rounding positive" has been my general experience. Heuristically, round[s(t)+1]=round[s(t)]+1 seems more useful than mag[round[s(t)]]=round[mag[s(t)]], since addition is linear, and abs value is not (FWIW). But I agree that if both methods are found in practice, neither should be excluded. And if we can produce a list of the types of applications where each is likely to be used, and reasons why, that would be ideal.--Bob K (talk) 13:25, 8 December 2015 (UTC)
I am in complete agreement that any method found in practice (and it would be nice to have that cited and verified eventually) should be included here. So if those dead-zone quantizers have a real application somewhere, they should be included. Perhaps also the convergent rounding where round(n+1/2) goes to whichever of n or n+1 is even. But within scope and topical limits. Should noise-shaped quantizers be included? Even the simple "fraction saving" where bits lopped off of a sample are zero-extended and added into the next sample before truncation? So far this has only memoryless quantizers. Maybe it should have only memoryless quantizers. 173.48.62.104 (talk) 14:05, 8 December 2015 (UTC)
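The "fraction saving" scheme mentioned above amounts to first-order error feedback around a truncating (floor) quantizer; a minimal sketch, assuming integer samples and right-shift truncation (the function name is mine):

```python
def truncate_with_fraction_saving(samples, shift):
    """Bit-depth reduction by right-shift truncation with first-order
    error feedback: the low bits discarded from each sample are saved
    and added back into the next sample before its truncation."""
    out = []
    error = 0                        # saved fraction, in pre-shift units
    for s in samples:
        acc = s + error              # add back the previously discarded bits
        q = acc >> shift             # floor truncation (lop off low bits)
        error = acc - (q << shift)   # the fraction just discarded
        out.append(q)
    return out
```

Unlike the memoryless quantizers discussed so far, this one has state (the saved error), so the truncation error is pushed into successive samples instead of accumulating as a DC bias.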

External links modified[edit]

Hello fellow Wikipedians,

I have just modified one external link on Quantization (signal processing). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).

Question? Archived sources still need to be checked

Cheers.—InternetArchiveBot (Report bug) 12:37, 21 July 2016 (UTC)