Universal code (data compression)
Revision as of 09:53, 22 June 2006
In data compression, a universal code for integers is a prefix-free code that maps the positive integers onto self-delimiting binary codewords, with the additional property that, whatever the true probability distribution on integers, the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned. A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the code that, in addition to being bounded, approaches 1 as entropy approaches infinity.
In general, most prefix-free codes for integers assign longer codewords to larger integers. Such a code can be used to efficiently communicate a message drawn from a set of possible messages, by simply ordering the set of messages by decreasing probability and then sending the index of the intended message.
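As an illustration of this ranking scheme, the sketch below (with made-up messages and probabilities; the names and values are hypothetical) sends the rank of a message using unary coding, the simplest prefix-free code that assigns shorter codewords to smaller integers:

```python
# Hypothetical illustration: communicate one of several messages by
# sending its rank under a probability ordering shared by both sides.
# Unary coding (n-1 zeros followed by a one) is prefix-free and gives
# shorter codewords to smaller ranks, i.e. to more probable messages.

def unary_encode(n: int) -> str:
    """Encode a positive integer n as n-1 zeros followed by a one."""
    return "0" * (n - 1) + "1"

# Both sides agree on the same ordering: most probable message first.
probs = {"yes": 0.5, "no": 0.3, "maybe": 0.2}          # assumed example data
ranking = sorted(probs, key=probs.get, reverse=True)   # ['yes', 'no', 'maybe']

def send(message: str) -> str:
    return unary_encode(ranking.index(message) + 1)

print(send("yes"))    # '1'
print(send("maybe"))  # '001'
```

Any of the universal codes listed below could be substituted for the unary code here; the choice only changes how quickly codeword length grows with rank.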
These are some universal codes for integers; an asterisk (*) indicates a code that can be trivially restated in lexicographical order, while a double dagger (‡) indicates a code that is asymptotically optimal:
- Elias gamma coding *
- Elias delta coding * ‡
- Elias omega coding * ‡
- Exp-Golomb coding *, which has Elias gamma coding as a special case
- Fibonacci coding ‡
- Levenshtein coding * ‡, the original universal coding technique [1]
- Byte coding, also known as comma coding, where a special bit pattern (with at least two bits) is used to mark the end of the code — for example, if an integer is encoded as a sequence of nibbles representing digits in base 15 instead of the more natural base 16, then the highest nibble value (i.e., a sequence of four ones in binary) can be used to indicate the end of the integer.
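As a concrete example of one of the codes listed above, here is a minimal sketch of Elias gamma coding, in which a positive integer n is written as ⌊log₂ n⌋ zeros followed by the binary representation of n:

```python
# A minimal sketch of Elias gamma coding (not a production implementation).

def gamma_encode(n: int) -> str:
    """Elias gamma: floor(log2 n) zeros, then n in binary."""
    if n < 1:
        raise ValueError("Elias gamma is defined for positive integers")
    binary = bin(n)[2:]                  # binary representation of n
    return "0" * (len(binary) - 1) + binary

def gamma_decode(bits: str) -> int:
    """Decode a single gamma codeword from the front of a bit string."""
    zeros = 0
    while bits[zeros] == "0":            # count the leading zeros
        zeros += 1
    return int(bits[zeros:2 * zeros + 1], 2)

codeword = gamma_encode(13)              # '0001101'
assert gamma_decode(codeword) == 13
```

Because the codewords are self-delimiting, a concatenation of gamma codewords can be decoded greedily from left to right with no separators.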
These are non-universal ones:
- Golomb coding *, which has Rice coding and unary coding as special cases
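For comparison, the following sketch shows Rice coding, i.e. Golomb coding with a power-of-two divisor M = 2^k, applied here to nonnegative integers; the quotient-as-zeros-terminated-by-a-one convention is one common choice, and taking k = 0 reduces the code to unary coding:

```python
# A sketch of Rice coding (Golomb coding with divisor M = 2**k) for
# nonnegative integers. The quotient n >> k is sent in unary (zeros
# terminated by a one), followed by the remainder in exactly k bits.

def rice_encode(n: int, k: int) -> str:
    """Quotient in unary, remainder in k fixed bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    remainder = format(r, f"0{k}b") if k else ""
    return "0" * q + "1" + remainder

print(rice_encode(9, 2))   # quotient 2, remainder 01 -> '00101'
print(rice_encode(3, 0))   # k = 0 is unary coding   -> '0001'
```

The fixed-length remainder is what makes these codes non-universal: codeword length grows linearly in n rather than logarithmically, so no single parameter k works within a constant factor for every distribution.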
Relationship to practical compression
Huffman coding, range encoding, and arithmetic encoding (when they can be used) give at least as good, and often better compression than any universal code.
However, universal codes are useful when Huffman coding cannot be used — for example, when one does not know the exact probability of each message, but only knows the rankings of their probabilities.
Universal codes are also useful when Huffman codes are inconvenient. For example, when the transmitter but not the receiver knows the probabilities of the messages, Huffman coding requires an overhead of transmitting those probabilities to the receiver. Using a universal code does not have that overhead.
Each universal code, like each other self-delimiting (prefix-free) binary code, has its own "implied probability distribution". If a set of messages happens to have a probability distribution similar enough to that implied by some universal code, then the ideal Huffman code or arithmetic code for that set of messages will be equivalent to that universal code. Since universal codes are simpler and faster to encode and decode than Huffman codes (which is, in turn, simpler and faster than range encoding and arithmetic encoding), the universal code would be preferable in cases where the codes are equivalent. [2]
For Rice codes and other Golomb codes, the implicit probability distribution is a geometric distribution (an exponential distribution on the integers) whose mean is determined by the Golomb parameter. For universal codes, the implicit distribution is approximately a power law such as p(n) ∝ 1/n² (the distribution implied by Elias gamma coding, whose codeword for n is about 2 log₂ n + 1 bits long). For the Fibonacci code, the implicit distribution is approximately p(n) ∝ n^(−q), with q = 1/log₂ φ ≈ 1.44, where φ is the golden ratio. For the ternary comma code (i.e., encoding in base 3, represented with 2 bits per symbol), the implicit distribution is a power law with q = 2/log₂ 3 ≈ 1.26.
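The implied distribution of a prefix-free code can be computed directly by assigning each integer the probability 2^(−codeword length). The sketch below does this for Elias gamma coding, whose codeword for n is 2⌊log₂ n⌋ + 1 bits long, and checks that the result tracks a power law close to 1/n²:

```python
# Sketch: the "implied probability distribution" of a prefix-free code
# assigns probability 2**(-codeword_length) to each integer. For Elias
# gamma coding this is roughly proportional to 1/n**2.

def gamma_length(n: int) -> int:
    """Codeword length of Elias gamma: 2*floor(log2 n) + 1 bits."""
    return 2 * n.bit_length() - 1

for n in (1, 10, 100, 1000):
    implied = 2.0 ** -gamma_length(n)
    # implied * n**2 stays within a factor of 2 of 1, confirming p(n) ~ 1/n**2
    print(n, implied, implied * n * n)
```

A message source whose true probabilities fall off like 1/n² would therefore be coded essentially optimally by the gamma code, with no probability table to transmit.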
The figures below compare Elias gamma, Elias delta, Fibonacci, and various Rice codings by the number of bits used to encode integer values. A baseline, "direct binary", for binary encoding where the size is already known to the receiver, is also included.
References
- David J. C. MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003. ISBN 0521642981.
External links
- On-line textbook: Information Theory, Inference, and Learning Algorithms, by David MacKay, has a chapter on codes for integers, including an accessible introduction to Elias codes.