# Luby transform code


In computer science, Luby transform codes (LT codes) are the first class of practical fountain codes that are near-optimal erasure correcting codes. They were invented by Michael Luby in 1998 and published in 2002.[1] Like some other fountain codes, LT codes depend on sparse bipartite graphs to trade reception overhead for encoding and decoding speed. The distinguishing characteristic of LT codes is that they employ a particularly simple algorithm based on the exclusive-or operation ($\oplus$) to encode and decode the message.[2]

LT codes are rateless because the encoding algorithm can in principle produce an infinite number of message packets (i.e., the percentage of packets that must be received to decode the message can be arbitrarily small). They are erasure correcting codes because they can be used to transmit digital data reliably on an erasure channel.

The next generation beyond LT codes are raptor codes (see, for example, IETF RFC 5053 or IETF RFC 6330), which have linear-time encoding and decoding. Raptor codes use two encoding stages, the second of which is an LT encoding.

## Why use an LT code?

The traditional scheme for transferring data across an erasure channel depends on continuous two-way communication.

• The sender encodes and sends a packet of information.
• The receiver attempts to decode the received packet. If it can be decoded, the receiver sends an acknowledgment back to the transmitter. Otherwise, the receiver asks the transmitter to send the packet again.
• This two-way process continues until all the packets in the message have been transferred successfully.

Certain networks, such as ones used for cellular wireless broadcasting, do not have a feedback channel. Applications on these networks still require reliability. Fountain codes in general, and LT codes in particular, get around this problem by adopting an essentially one-way communication protocol.

• The sender encodes and sends packet after packet of information.
• The receiver evaluates each packet as it is received. If there is an error, the erroneous packet is discarded. Otherwise the packet is saved as a piece of the message.
• Eventually the receiver has enough valid packets to reconstruct the entire message. When the entire message has been received successfully the receiver signals that transmission is complete.

## LT encoding

The encoding process begins by dividing the uncoded message into n blocks of roughly equal length. Encoded packets are then produced with the help of a pseudorandom number generator.

• The degree d, 1 ≤ d ≤ n, of the next packet is chosen at random.
• Exactly d blocks from the message are randomly chosen.
• If Mi is the ith block of the message, the data portion of the next packet is computed as
$M_{i_1} \oplus M_{i_2} \oplus \cdots \oplus M_{i_d}\,$
where {i1, i2, …, id} are the randomly chosen indices for the d blocks included in this packet.
• A prefix is appended to the encoded packet, defining how many blocks n are in the message, how many blocks d have been exclusive-ored into the data portion of this packet, and the list of indices {i1, i2, …, id}.
• Finally, some form of error-detecting code (perhaps as simple as a cyclic redundancy check) is applied to the packet, and the packet is transmitted.

This process continues until the receiver signals that the message has been received and successfully decoded.
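The encoding steps above can be sketched in Python. This is a minimal illustration, assuming equal-length blocks; the function name, the dictionary "prefix" format, and the uniform degree choice are illustrative only (a practical encoder draws d from a soliton degree distribution, discussed under "Optimization of LT codes"):

```python
import random

def lt_encode_packet(blocks, rng):
    """Produce one LT-encoded packet from a list of equal-length blocks.

    The uniform degree choice here is a placeholder; a real encoder
    samples d from a (robust) soliton distribution.
    """
    n = len(blocks)
    d = rng.randint(1, n)              # degree of this packet
    indices = rng.sample(range(n), d)  # d distinct block indices
    data = bytes(blocks[indices[0]])
    for i in indices[1:]:
        # Exclusive-or each chosen block into the data portion.
        data = bytes(a ^ b for a, b in zip(data, blocks[i]))
    # The returned dict plays the role of the packet prefix plus payload.
    return {"n": n, "d": d, "indices": indices, "data": data}

# Split a 16-byte message into n = 4 blocks and emit one packet.
message = b"ABCDEFGHIJKLMNOP"
blocks = [message[i:i + 4] for i in range(0, len(message), 4)]
packet = lt_encode_packet(blocks, random.Random(1))
```

An error-detecting code (such as a CRC) would then be computed over the serialized packet before transmission.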

## LT decoding

The decoding process uses the "exclusive or" operation to retrieve the encoded message.

• If the current packet isn't clean, or if it replicates a packet that has already been processed, the current packet is discarded.
• If the current cleanly received packet is of degree d > 1, it is first processed against all the fully decoded blocks in the message queueing area (as described more fully in the next step), then stored in a buffer area if its reduced degree is greater than 1.
• When a new, clean packet of degree d = 1 (block Mi) is received (or the degree of the current packet is reduced to 1 by the preceding step), it is moved to the message queueing area, and then matched against all the packets of degree d > 1 residing in the buffer. It is exclusive-ored into the data portion of any buffered packet that was encoded using Mi, the degree of that matching packet is decremented, and the list of indices for that packet is adjusted to reflect the application of Mi.
• When this process reduces a packet of degree d = 2 in the buffer to degree 1, the newly recovered block is in its turn moved to the message queueing area, and then processed against the packets remaining in the buffer.
• When all n blocks of the message have been moved to the message queueing area, the receiver signals the transmitter that the message has been successfully decoded.
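This "peeling" procedure can be sketched as follows. The code below is a simplified batch version, assuming all packets are already received and clean; the function name and the (indices, data) packet representation are illustrative:

```python
def lt_decode(n, packets):
    """Peeling decoder sketch. `packets` is a list of (indices, data)
    pairs, where `indices` is the index set from the packet prefix and
    `data` is the payload. Returns the n message blocks, or None if
    decoding stalls for lack of a degree-1 packet.
    """
    decoded = [None] * n                    # the message queueing area
    buffer = [(set(idx), bytearray(data)) for idx, data in packets]
    progress = True
    while progress:
        progress = False
        # Release degree-1 packets: their payload is a decoded block.
        for idx, data in buffer:
            if len(idx) == 1:
                (i,) = idx
                if decoded[i] is None:
                    decoded[i] = bytes(data)
                    progress = True
        # Exclusive-or every decoded block out of the buffered packets,
        # decrementing their degrees.
        for idx, data in buffer:
            for i in list(idx):
                if len(idx) > 1 and decoded[i] is not None:
                    for j in range(len(data)):
                        data[j] ^= decoded[i][j]
                    idx.discard(i)
    return decoded if all(b is not None for b in decoded) else None

# Usage: three packets of degrees 1, 2, 2 recover a 3-block message.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

blocks = [b"AB", b"CD", b"EF"]
packets = [({0}, blocks[0]),
           ({0, 1}, xor(blocks[0], blocks[1])),
           ({1, 2}, xor(blocks[1], blocks[2]))]
result = lt_decode(3, packets)
```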

This decoding procedure works because A $\oplus$ A = 0 for any bit string A. After d − 1 distinct blocks have been exclusive-ored into a packet of degree d, the original unencoded content of the unmatched block is all that remains. In symbols we have

\begin{align}
& {} \qquad (M_{i_1} \oplus \dots \oplus M_{i_d}) \oplus (M_{i_1} \oplus \dots \oplus M_{i_{k-1}} \oplus M_{i_{k+1}} \oplus \dots \oplus M_{i_d}) \\
&= M_{i_1} \oplus M_{i_1} \oplus \dots \oplus M_{i_{k-1}} \oplus M_{i_{k-1}} \oplus M_{i_k} \oplus M_{i_{k+1}} \oplus M_{i_{k+1}} \oplus \dots \oplus M_{i_d} \oplus M_{i_d} \\
&= 0 \oplus \dots \oplus 0 \oplus M_{i_k} \oplus 0 \oplus \dots \oplus 0 \\
&= M_{i_k}
\end{align}

## Variations

Several variations of the encoding and decoding processes described above are possible. For instance, instead of prefixing each packet with a list of the actual message block indices {i1, i2, …, id}, the encoder might simply send a short "key" which serves as the seed for the pseudorandom number generator (PRNG) or index table used to construct the list of indices. Since a receiver equipped with the same PRNG or index table can reliably recreate the "random" list of indices from this seed, the decoding process can be completed successfully. Alternatively, by combining a simple LT code of low average degree with a robust error-correcting code, a raptor code can be constructed that will outperform an optimized LT code in practice.[3]
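The seed-based variation can be sketched in a few lines, assuming both ends share the same PRNG. The function name and the choice of Python's `random.Random` are illustrative; any deterministic PRNG agreed on by sender and receiver works:

```python
import random

def indices_from_key(key, n):
    """Recreate the degree d and block indices from a short integer key.

    Sender and receiver both run this function, so only `key` needs to
    travel in the packet prefix instead of the full index list.
    The uniform degree choice is a placeholder, as in the encoder sketch.
    """
    rng = random.Random(key)        # seed the shared PRNG with the key
    d = rng.randint(1, n)           # degree, drawn identically on both ends
    return rng.sample(range(n), d)  # the d distinct block indices

# Sender and receiver derive identical index lists from the same key.
sender_side = indices_from_key(7, 10)
receiver_side = indices_from_key(7, 10)
```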

## Optimization of LT codes

There is only one parameter that can be used to optimize a straight LT code: the degree distribution (described as a pseudorandom number generator for the degree d in the LT encoding section above). In practice the other "random" numbers (the list of indices {i1, i2, …, id}) are invariably taken from a uniform distribution on [0, n), where n is the number of blocks into which the message has been divided.[4]

Luby himself[1] discussed the "ideal soliton distribution" defined by

\begin{align} \mathrm{P}\{d=1\}& = \frac{1}{n}\\[2pt] \mathrm{P}\{d=k\}& = \frac{1}{k(k-1)} \qquad (k=2,3,\dots,n). \, \end{align}
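Because the telescoping sum 1/(k(k−1)) = 1/(k−1) − 1/k gives this distribution a closed-form CDF, it can be sampled by inverse transform. The sketch below uses the standard ceil(1/u) trick; the function name is illustrative:

```python
import math
import random

def ideal_soliton_sample(n, rng):
    """Draw a degree d from the ideal soliton distribution on {1, ..., n}:
    P(d=1) = 1/n and P(d=k) = 1/(k(k-1)) for k = 2, ..., n.

    For u uniform on [0, 1), ceil(1/u) equals k exactly when u lies in
    [1/k, 1/(k-1)), an interval of length 1/(k(k-1)); the leftover tail
    u < 1/n (probability 1/n) is assigned to d = 1.
    """
    u = rng.random()
    if u < 1.0 / n:
        return 1
    return math.ceil(1.0 / u)

rng = random.Random(0)
samples = [ideal_soliton_sample(10, rng) for _ in range(1000)]
```

Note that degree 2 is the most likely outcome, with probability 1/2 regardless of n.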

This degree distribution theoretically minimizes the expected number of redundant code words that will be sent before the decoding process can be completed. However, the ideal soliton distribution does not work well in practice, because any fluctuation around the expected behavior makes it likely that at some step of the decoding process no packet of (reduced) degree 1 will be available, and decoding will fail. Furthermore, some of the original blocks will not be exclusive-ored into any of the transmitted packets. Therefore, in practice, a modified distribution, the "robust soliton distribution", is substituted for the ideal distribution. The effect of the modification is, generally, to produce more packets of very small degree (around 1) and fewer packets of degree greater than 1, except for a spike of packets at a fairly large degree, chosen to ensure that all original blocks will be included in some packet.[4]