
Universal code (data compression)

From Wikipedia, the free encyclopedia

In data compression, a universal code for integers is a prefix-free code that maps the positive integers onto self-delimiting binary codewords, with the additional property that whatever the true probability distribution on integers, the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned. A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the code that, in addition to being bounded, approaches 1 as entropy approaches infinity.
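Stated symbolically (a sketch of the standard formulation; the symbols C, ℓ_C, K, and H(P) are introduced here for illustration and do not appear in the article): a code C with codeword-length function ℓ_C is universal if, for every distribution P on the positive integers with finite entropy H(P),

    \mathbb{E}_P[\ell_C] \;\le\; K \cdot \mathbb{E}_P[\ell_P^{\mathrm{opt}}]

for a constant K independent of P, where ℓ_P^opt is the optimal code for P (its expected length lies within one bit of H(P)). C is asymptotically optimal if, in addition,

    \frac{\mathbb{E}_P[\ell_C]}{\mathbb{E}_P[\ell_P^{\mathrm{opt}}]} \;\to\; 1 \quad \text{as } H(P) \to \infty .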

In general, most prefix-free codes for integers assign longer codewords to larger integers. Such a code can be used to efficiently communicate a message drawn from a set of possible messages, by simply ordering the set of messages by decreasing probability and then sending the index of the intended message.
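As an illustrative sketch (Python; not part of the article; the message set and its ranking are hypothetical, and Elias gamma coding, one of the universal codes listed below, encodes the index):

    def elias_gamma_encode(n: int) -> str:
        """Elias gamma: floor(log2 n) zeros, then n in binary."""
        if n < 1:
            raise ValueError("defined for positive integers only")
        binary = bin(n)[2:]                      # n without leading zeros
        return "0" * (len(binary) - 1) + binary  # self-delimiting codeword

    def elias_gamma_decode(bits: str) -> int:
        """Decode one Elias gamma codeword from the front of a bit string."""
        zeros = 0
        while bits[zeros] == "0":                # unary count of the length
            zeros += 1
        return int(bits[zeros:2 * zeros + 1], 2)

    # Order the messages by decreasing probability, then send the 1-based rank.
    messages = ["the", "of", "and", "zyzzyva"]   # hypothetical, most probable first
    rank = messages.index("and") + 1             # "and" has rank 3
    codeword = elias_gamma_encode(rank)          # -> "011"
    assert elias_gamma_decode(codeword) == rank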

These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal:

- Elias gamma coding *
- Elias delta coding * ‡
- Elias omega coding * ‡
- Exp-Golomb coding *
- Fibonacci coding
- Levenshtein coding ‡
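For instance, a minimal sketch of Fibonacci coding from the list above (Python; assuming the standard Zeckendorf-based construction):

    def fibonacci_encode(n: int) -> str:
        """Fibonacci code: Zeckendorf digits, least significant first,
        plus a final 1, so every codeword ends in the delimiter '11'."""
        fibs = [1, 2]                            # F(2), F(3), ...
        while fibs[-1] <= n:
            fibs.append(fibs[-1] + fibs[-2])
        fibs.pop()                               # largest entry now <= n
        bits = ["0"] * len(fibs)
        for i in range(len(fibs) - 1, -1, -1):   # greedy Zeckendorf digits
            if fibs[i] <= n:
                bits[i] = "1"
                n -= fibs[i]
        return "".join(bits) + "1"

    assert [fibonacci_encode(n) for n in (1, 2, 3, 4)] == ["11", "011", "0011", "1011"]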

These are some non-universal codes:

- unary coding
- Rice coding
- Golomb coding

Relationship to practical compression

Huffman coding, range encoding, and arithmetic encoding (when they can be used) give compression at least as good as, and often better than, any universal code.

However, universal codes are useful when Huffman coding cannot be used — for example, when one does not know the exact probability of each message, but only knows the rankings of their probabilities.

Universal codes are also useful when Huffman codes are inconvenient. For example, when the transmitter but not the receiver knows the probabilities of the messages, Huffman coding requires an overhead of transmitting those probabilities to the receiver; a universal code incurs no such overhead.

Each universal code, like every other self-delimiting (prefix-free) binary code, has its own "implied probability distribution". If a set of messages happens to have a probability distribution similar enough to that implied by some universal code, then the ideal Huffman code or arithmetic code for that set of messages will be equivalent to that universal code. Since universal codes are simpler and faster to encode and decode than Huffman codes (which are, in turn, simpler and faster than range encoding and arithmetic encoding), the universal code would be preferable in cases where the codes are equivalent. [2]
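That implied distribution is simply 2 raised to minus the codeword length; a minimal Python sketch (Elias gamma's length formula is used here because its implied law is easy to see):

    from math import fsum

    def implied_probability(length_in_bits: int) -> float:
        """The probability a prefix-free code implicitly assigns to a
        codeword: 2^-(length); by the Kraft inequality these sum to <= 1."""
        return 2.0 ** -length_in_bits

    def gamma_len(n: int) -> int:
        return 2 * (n.bit_length() - 1) + 1      # Elias gamma codeword length

    print(implied_probability(gamma_len(1)))     # 0.5
    print(implied_probability(gamma_len(2)))     # 0.125  (roughly 1/n^2 overall)
    print(fsum(implied_probability(gamma_len(n)) for n in range(1, 2**20)))
    # -> just under 1.0: Elias gamma meets the Kraft bound with equality in the limit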

For Rice codes and other Golomb codes, the implicit probability distribution is a geometric distribution (an exponential distribution on integers); for a Rice code with parameter k, the mean is roughly 2^k / ln 2. For universal codes, the implicit distribution is approximately a power law such as 1/n^2 (more precisely, a Zipf distribution). For the Fibonacci code, the implicit distribution is approximately 1/n^q, with

q = 1 / log2(φ) ≈ 1.44,

where φ is the golden ratio. For the ternary comma code (i.e., encoding in base 3, represented with 2 bits per symbol), the implicit distribution is a power law with q = 2 · log3(2) ≈ 1.26.
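These exponents can be checked numerically; a Python sketch (the length function recomputes Fibonacci codeword lengths, matching the encoder sketched earlier):

    from math import log2, sqrt

    PHI = (1 + sqrt(5)) / 2

    def fib_code_length(n: int) -> int:
        """Bits in the Fibonacci codeword for n: Zeckendorf digits + final 1."""
        a, b, digits = 1, 2, 1                   # F(2) = 1, F(3) = 2
        while b <= n:
            a, b, digits = b, a + b, digits + 1
        return digits + 1

    # Implied exponent q(n) = -log2 p(n) / log2 n = length / log2 n;
    # it converges (slowly) toward 1 / log2(PHI), about 1.4404.
    for n in (10, 10**3, 10**6, 10**12):
        print(n, fib_code_length(n) / log2(n))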

The figures below compare Elias Gamma, Elias Delta, Fibonacci, and various Rice codings by the number of bits each uses to encode integer values. A "direct binary" baseline, i.e., plain binary encoding for the case where the length is already known to the decoder, is also included.

Figure: Fibonacci, Elias Gamma, and Elias Delta vs. binary coding.
Figure: Rice coding with k = 2, 3, 4, 5, 8, and 16 vs. binary coding.
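The comparison behind the figures is easy to reproduce; a Python sketch using standard length formulas (the Rice length assumes the plain unary-quotient construction, and "direct binary" is just the bit length):

    def gamma_len(n: int) -> int:
        return 2 * (n.bit_length() - 1) + 1      # Elias gamma

    def delta_len(n: int) -> int:
        lg = n.bit_length() - 1                  # floor(log2 n)
        return lg + gamma_len(lg + 1)            # gamma-coded length, then offset bits

    def rice_len(n: int, k: int) -> int:
        return (n >> k) + 1 + k                  # unary quotient + k remainder bits

    print(f"{'n':>10} {'binary':>6} {'gamma':>5} {'delta':>5} {'rice k=4':>8}")
    for n in (1, 10, 100, 10**4, 10**6):
        print(f"{n:>10} {n.bit_length():>6} {gamma_len(n):>5} "
              f"{delta_len(n):>5} {rice_len(n, 4):>8}")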

References