Talk:Context-adaptive binary arithmetic coding


Strange phrasing in this article...

The section entitled "The arithmetic decoding engine" makes sense, but it comes later in the article, after the confusing material.

Starting from the beginning... saying "It encodes binary symbols" sounds like nonsense when we're talking about a digital file format, where everything is already binary. That's the immediate impression in context, and the algorithm section doesn't do much to clarify it.
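For what it's worth, my reading is that the "binary symbols" are the bins produced by binarization: non-binary syntax elements are first mapped to strings of bins, and each bin is what the arithmetic coder actually codes. A minimal sketch in C, assuming plain unary binarization (one of the schemes H.264 really does use); emit_bin here is just a stand-in for handing a bin to the coder:

    #include <stdio.h>

    /* Unary binarization: the value n becomes n '1' bins followed by a
       terminating '0' bin. Each bin is then coded one at a time by the
       binary arithmetic coder. */
    static void binarize_unary(unsigned n, void (*emit_bin)(int)) {
        while (n--)
            emit_bin(1);
        emit_bin(0);
    }

    static void print_bin(int b) { printf("%d", b); }

    int main(void) {
        binarize_unary(4, print_bin);  /* prints 11110 */
        printf("\n");
        return 0;
    }

So "encoding binary symbols" isn't about the file being binary; it's that the coder itself only ever sees one bin at a time.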

The term "binarization" makes more sense in the context of the IEEE paper, I suppose, although I can't find that as a common usage and it isn't used correctly in this article. Quantization would have made perfect sense here. The article says The selected context model supplies two probability estimates: the probability that the bin contains “1” and the probability that the bin contains “0”. . No, not really. The context model supplies a method of determining which 1/x dictionary options a given probability is encoding for in an encoded stream, or which probability to encode given a dictionary entry transformed into a numerical form that can be encoded more optimally with this method. That's still not a great way of saying it, but somewhat more accurate. That paper and others(1)(2) have clearer explanations, one of which is that by quantizing the original data into groups that only vary in less significant bits general probability bins for the most significant portions can be arranged in an easier to decode stream that's able to cover a min/max range of data. PAQ1 focuses on a similar predictive approach.

It's less of an issue and would require far more research, but I'd also suggest that the mention of being "multiplication free" isn't relevant anymore. That claim was apparently essential to most of the patents on arithmetic coding, which were still in full force at the time: the claims either had to be backed up, or a suitable variant had to be provided to avoid legal issues, depending on whether the people using arithmetic coding had licensed an existing patent. The patents IBM kept getting for this were all "reduced instruction" variants of their original patent on the compression itself, and they remained valid long after compilers would have been eliminating those multiplications as part of normal optimization... It's interesting from a historical point of view, but claiming it as a major feature of anything wasn't novel in the late 80s, let alone in 2003. Avoiding multiplication was more or less a requirement for doing anything video-related at a reasonable speed, for example, and I'm sure someone else can date it back much farther than that. (For concreteness, the table-lookup trick in question is sketched below.) A Shortfall Of Gravitas (talk) 10:40, 5 September 2018 (UTC)
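The "multiplication free" part refers to replacing the exact range subdivision r_lps = range × p_lps with a table lookup indexed by a quantized range, which is roughly what the H.264 M-coder does. A sketch in C; the table values below are made up for illustration, not the spec's 64×4 rangeTabLPS, and I'm quoting the shift/mask from memory:

    #include <stdio.h>

    /* Exact subdivision needs a multiply per bin:
         r_lps = (unsigned)(range * p_lps);
       The table-based variant quantizes the range (kept in [256, 511]
       by renormalization) down to a 2-bit index and looks the LPS
       subinterval up in a precomputed table. */

    /* Made-up single-probability-state row, for illustration only. */
    static const unsigned range_tab_lps[4] = {59, 73, 87, 101};

    static unsigned lps_range(unsigned range) {
        unsigned q = (range >> 6) & 3;  /* quantized range index */
        return range_tab_lps[q];
    }

    int main(void) {
        for (unsigned range = 256; range < 512; range += 64)
            printf("range=%u -> r_lps=%u\n", range, lps_range(range));
        return 0;
    }

The whole trick is just trading a multiply for a small table, which is why the claim reads as a period detail rather than a feature.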