Kraft's inequality

From Wikipedia, the free encyclopedia

In coding theory, Kraft's inequality, named after Leon Kraft, gives both a necessary and sufficient condition for the existence of a prefix code[1] for a given set of codeword lengths. Its applications to prefix codes and trees often find use in computer science and information theory.

More specifically, Kraft's inequality limits the lengths of codewords in a prefix code: if one takes an exponential of the length of each valid codeword, the resulting set of values must look like a probability mass function, that is, it must have total measure less than or equal to one. Kraft's inequality can be thought of in terms of a constrained budget to be spent on codewords, with shorter codewords being more expensive.

  • If Kraft's inequality holds with strict inequality, the code has some redundancy.
  • If Kraft's inequality holds with equality, the code in question is a complete code.
  • If Kraft's inequality does not hold, the code is not uniquely decodable.
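The three cases above can be checked numerically by computing the Kraft sum for a set of codeword lengths. A minimal sketch in Python (exact arithmetic with `fractions` avoids floating-point error; the function name is illustrative):

```python
from fractions import Fraction

def kraft_sum(lengths, r=2):
    """Compute the Kraft sum: sum of r**(-l) over the codeword lengths, exactly."""
    return sum(Fraction(1, r ** l) for l in lengths)

# Lengths {1, 2, 3, 3} over a binary alphabet: 1/2 + 1/4 + 1/8 + 1/8 = 1,
# so a prefix code with these lengths is complete (e.g. 0, 10, 110, 111).
print(kraft_sum([1, 2, 3, 3]))  # 1      -> complete code
print(kraft_sum([2, 2, 3]))     # 5/8    -> strict inequality: some redundancy
print(kraft_sum([1, 1, 2]))     # 5/4    -> exceeds 1: not uniquely decodable
```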

Kraft's inequality was published by Kraft (1949). However, Kraft's paper discusses only prefix codes, and attributes the analysis leading to the inequality to Raymond Redheffer. The inequality is sometimes also called the Kraft–McMillan theorem after the independent discovery of the result by McMillan (1956); McMillan proves the result for the general case of uniquely decodable codes, and attributes the version for prefix codes to a spoken observation in 1955 by Joseph Leo Doob.

Formal statement

Let each source symbol from the alphabet

$S = \{s_1, s_2, \ldots, s_n\}$

be encoded into a uniquely decodable code over an alphabet of size $r$ with codeword lengths

$\ell_1, \ell_2, \ldots, \ell_n.$

Then

$\sum_{i=1}^{n} r^{-\ell_i} \leq 1.$

Conversely, for a given set of natural numbers $\ell_1, \ell_2, \ldots, \ell_n$ satisfying the above inequality, there exists a uniquely decodable code over an alphabet of size $r$ with those codeword lengths.

A commonly occurring special case of a uniquely decodable code is a prefix code. Kraft's inequality therefore also holds for any prefix code.

Example: binary trees

9, 14, 19, 67 and 76 are leaf nodes at depths of 3, 3, 3, 3 and 2, respectively.

Any binary tree can be viewed as defining a prefix code for the leaves of the tree. Kraft's inequality states that

$\sum_{\ell \in \mathrm{leaves}} 2^{-\operatorname{depth}(\ell)} \leq 1.$

Here the sum is taken over the leaves of the tree, i.e. the nodes without any children. The depth is the distance to the root node. In the tree to the right, this sum is

$\frac{1}{4} + 4 \cdot \frac{1}{8} = \frac{3}{4} \leq 1.$

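The sum for this tree can be verified directly from the leaf depths given in the figure caption:

```python
from fractions import Fraction

# Leaf depths from the figure: nodes 9, 14, 19, 67 at depth 3, node 76 at depth 2.
depths = [3, 3, 3, 3, 2]
total = sum(Fraction(1, 2 ** d) for d in depths)
print(total)  # 3/4, which satisfies Kraft's inequality (<= 1)
```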
Proof for prefix codes

Example for binary tree. Red nodes represent a prefix tree. The method for calculating the number of descendant leaf nodes in the full tree is shown.

Suppose that $\ell_1 \leq \ell_2 \leq \cdots \leq \ell_n$. Let $A$ be the full $r$-ary tree of depth $\ell_n$. Every word of length $\ell \leq \ell_n$ over an $r$-ary alphabet corresponds to a node in this tree at depth $\ell$. The $i$th word in the prefix code corresponds to a node $v_i$; let $A_i$ be the set of all leaf nodes (i.e. of nodes at depth $\ell_n$) in the subtree of $A$ rooted at $v_i$. That subtree being of height $\ell_n - \ell_i$, we have

$|A_i| = r^{\ell_n - \ell_i}.$

Since the code is a prefix code, those subtrees cannot share any leaves, which means that

$|A_1 \cup A_2 \cup \cdots \cup A_n| = \sum_{i=1}^{n} |A_i| = \sum_{i=1}^{n} r^{\ell_n - \ell_i}.$

Thus, given that the total number of nodes at depth $\ell_n$ is $r^{\ell_n}$, we have

$\sum_{i=1}^{n} r^{\ell_n - \ell_i} \leq r^{\ell_n},$

from which the result follows by dividing through by $r^{\ell_n}$.

Conversely, given any ordered sequence of $n$ natural numbers,

$\ell_1 \leq \ell_2 \leq \cdots \leq \ell_n,$

satisfying the Kraft inequality, one can construct a prefix code with codeword lengths equal to each $\ell_i$ by choosing a word of length $\ell_i$ arbitrarily, then ruling out all longer words that have it as a prefix. There again, we shall interpret this in terms of leaf nodes of an $r$-ary tree of depth $\ell_n$. First choose any node from the full tree at depth $\ell_1$; it corresponds to the first word of our new code. Since we are building a prefix code, all the descendants of this node (i.e., all words that have this first word as a prefix) become unsuitable for inclusion in the code. There are $r^{\ell_n - \ell_1}$ leaf descendants to be removed from consideration. The next iteration removes $r^{\ell_n - \ell_2}$ leaf nodes, and so on. After $m$ iterations, we have removed a total of

$\sum_{i=1}^{m} r^{\ell_n - \ell_i}$

nodes. The question is whether we need to remove more leaf nodes than we actually have available ($r^{\ell_n}$ in all) in the process of building the code. Since the Kraft inequality holds, we have indeed

$\sum_{i=1}^{m} r^{\ell_n - \ell_i} \leq r^{\ell_n}$

for every $m \leq n$, and thus a prefix code can be built. Note that as the choice of nodes at each step is largely arbitrary, many different suitable prefix codes can be built, in general.
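This greedy construction can be sketched as follows. The brute-force scan over all words of each length is only meant to mirror the argument, not to be efficient; the function name is illustrative:

```python
from itertools import product

def build_prefix_code(lengths, r=2):
    """Greedily assign a codeword for each length (sorted nondecreasing),
    skipping any word that extends an already-chosen codeword."""
    code = []
    for l in sorted(lengths):
        # scan words of length l in lexicographic order
        for w in product(range(r), repeat=l):
            if not any(w[:len(c)] == c for c in code):
                code.append(w)
                break
        else:
            # every candidate word extended an existing codeword
            raise ValueError("lengths violate Kraft's inequality")
    return ["".join(map(str, c)) for c in code]

print(build_prefix_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```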

Proof of the general case

Consider the generating function in inverse of $x$ for the code $S$:

$F(x) = \sum_{i=1}^{n} x^{-\ell_i} = \sum_{\ell = \min}^{\max} p_\ell \, x^{-\ell},$

in which $p_\ell$, the coefficient in front of $x^{-\ell}$, is the number of distinct codewords of length $\ell$. Here $\min$ is the length of the shortest codeword in $S$, and $\max$ is the length of the longest codeword in $S$.

Consider all $m$-powers $S^m$, in the form of words $s_{i_1} s_{i_2} \cdots s_{i_m}$, where $i_1, i_2, \ldots, i_m$ are indices between 1 and $n$. Note that, since $S$ was assumed to be uniquely decodable, $s_{i_1} s_{i_2} \cdots s_{i_m} = s_{j_1} s_{j_2} \cdots s_{j_m}$ implies $i_1 = j_1, i_2 = j_2, \ldots, i_m = j_m$. Because of this property, one can compute the generating function $G(x)$ for $S^m$ from the generating function $F(x)$ as

$G(x) = \left( F(x) \right)^m = \sum_{\ell = m \min}^{m \max} q_\ell \, x^{-\ell}.$

Here, similarly as before, $q_\ell$, the coefficient in front of $x^{-\ell}$ in $G(x)$, is the number of words of length $\ell$ in $S^m$. Clearly, $q_\ell$ cannot exceed $r^\ell$. Hence for any positive $x$,

$\left( F(x) \right)^m \leq \sum_{\ell = m \min}^{m \max} r^\ell x^{-\ell}.$

Substituting the value $x = r$ we have

$\left( F(r) \right)^m \leq m(\max - \min) + 1$

for any positive integer $m$. The left side of the inequality grows exponentially in $m$ and the right side only linearly. The only possibility for the inequality to be valid for all $m$ is that $F(r) \leq 1$. Looking back on the definition of $F$, we finally get the inequality

$\sum_{i=1}^{n} r^{-\ell_i} \leq 1.$
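The key counting step, that $q_\ell$ cannot exceed $r^\ell$ and that unique decodability makes all $n^m$ index sequences produce distinct words, can be illustrated numerically for a small (hypothetical) prefix code:

```python
from collections import Counter
from itertools import product

S = ["0", "10", "110", "111"]  # a uniquely decodable (prefix) code over r = 2
m = 3
# All concatenations of m codewords; unique decodability means no two
# index sequences yield the same word, so the length counts are the q_l.
words = {"".join(t) for t in product(S, repeat=m)}
q = Counter(len(w) for w in words)
assert len(words) == len(S) ** m           # all 4**3 = 64 words distinct
assert all(q[l] <= 2 ** l for l in q)      # q_l never exceeds r**l
print(dict(sorted(q.items())))
```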

Alternative construction for the converse

Given a sequence of $n$ natural numbers,

$\ell_1 \leq \ell_2 \leq \cdots \leq \ell_n,$

satisfying the Kraft inequality, we can construct a prefix code as follows. Define the $i$th codeword, $C_i$, to be the first $\ell_i$ digits after the radix point (e.g. decimal point) in the base $r$ representation of

$\sum_{j=1}^{i-1} r^{-\ell_j}.$

Note that by Kraft's inequality, this sum is never more than 1. Hence the codewords capture the entire value of the sum. Therefore, for $j > i$, the first $\ell_i$ digits of $C_j$ form a larger number than $C_i$, so the code is prefix free.
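This digit-extraction construction can be sketched in Python (exact base-$r$ expansion via `fractions`; lengths are assumed sorted and Kraft-satisfying, and the function name is illustrative):

```python
from fractions import Fraction

def kraft_code(lengths, r=2):
    """Codeword i = first l_i base-r digits after the radix point of
    sum_{j < i} r**(-l_j)."""
    code, acc = [], Fraction(0)
    for l in lengths:
        digits, frac = [], acc
        for _ in range(l):        # expand acc to l base-r digits
            frac *= r
            d = int(frac)         # next digit after the radix point
            digits.append(str(d))
            frac -= d
        code.append("".join(digits))
        acc += Fraction(1, r ** l)
    return code

print(kraft_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```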

See also Canonical Huffman code.


  1. ^ Cover, Thomas M.; Thomas, Joy A. (2006), Elements of Information Theory (PDF) (2nd ed.), John Wiley & Sons, Inc, pp. 108–109, doi:10.1002/047174882X.ch5, ISBN 0-471-24195-4 

