# Mathematics of cyclic redundancy checks


The cyclic redundancy check (CRC) is based on division in the ring of polynomials over the finite field GF(2) (the integers modulo 2), that is, the set of polynomials where each coefficient is either zero or one, and arithmetic operations wrap around (due to the nature of binary arithmetic).

Any string of bits can be interpreted as the coefficients of a message polynomial of this sort, and to find the CRC, we multiply the message polynomial by $x^n$ and then find the remainder when dividing by the degree-$n$ generator polynomial. The coefficients of the remainder polynomial are the bits of the CRC.

The larger (and more complex) a CRC is, the more its collision behavior resembles that of a cryptographic hash: one MPEG CRC-32 may easily collide with another, but CRC-40 collisions are less likely.[1] However, as they are linear functions, CRC operations are trivially easy to distinguish from random oracles.

## Math

In general form:

$M(x) \cdot x^n = Q(x) \cdot G(x) + R (x)$

Here $M(x)$ is the original message polynomial and $G(x)$ is the degree-$n$ generator polynomial. The bits of $M(x) \cdot x^n$ are the original message with $n$ zeroes added at the end. The CRC 'checksum' is formed by the coefficients of the remainder polynomial $R(x)$ whose degree is strictly less than $n$. The quotient polynomial $Q(x)$ is of no interest.

In communication, the sender attaches the $n$ bits of R after the original message bits of M, which can be shown to be equivalent to sending out $M(x) \cdot x^n - R(x)$ (the codeword). The receiver, knowing $G(x)$ and therefore $n$, separates M from R and repeats the calculation, verifying that the received and computed R are equal. If they are, the receiver assumes the received message bits are correct.

In practice CRC calculations most closely resemble long division in binary, except that the subtractions involved do not borrow from more significant digits, and thus become exclusive or operations.
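The long-division view can be sketched directly in a few lines of Python. This is an illustrative bit-string model (the function name `crc_remainder` is ours, not a standard API), not an efficient implementation:

```python
def crc_remainder(message_bits, generator_bits):
    """Compute R(x): the remainder of M(x) * x^n divided by G(x) over GF(2).

    Both arguments are strings of '0'/'1' characters, most significant
    bit first. Subtraction without borrows is just XOR.
    """
    n = len(generator_bits) - 1            # degree of G(x)
    # Appending n zero bits multiplies M(x) by x^n.
    dividend = [int(b) for b in message_bits] + [0] * n
    divisor = [int(b) for b in generator_bits]
    for i in range(len(message_bits)):
        if dividend[i]:                    # leading coefficient 1: subtract (XOR) G(x)
            for j in range(len(divisor)):
                dividend[i + j] ^= divisor[j]
    return ''.join(str(b) for b in dividend[-n:])

# The worked example from this article: message 111, generator x + 1 ('11').
print(crc_remainder('111', '11'))              # -> '1'
# A longer example with the degree-3 generator x^3 + x + 1 ('1011').
print(crc_remainder('11010011101100', '1011')) # -> '100'
```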

A CRC is a checksum in a strict mathematical sense, as it can be expressed as the weighted modulo-2 sum of per-bit syndromes, but that word is generally reserved more specifically for sums computed using larger moduli, such as 10, 256, or 65535.

CRCs can also be used as part of error-correcting codes, which allow not only the detection of transmission errors, but the reconstruction of the correct message. These codes are based on closely related mathematical principles.

## Polynomial arithmetic modulo 2

Since the coefficients are constrained to a single bit, any arithmetic operation on CRC polynomials must map the coefficients of the result to either zero or one. For example, in addition:

$(x^3 + x) + (x + 1) = x^3 + 2x + 1 \equiv x^3 + 1 \pmod 2$

Note that $2x$ becomes zero in the above equation because addition of coefficients is performed modulo 2:

$2x = x + x = x\times(1 + 1) \equiv x\times0 = 0 \pmod 2$

Multiplication is similar:

$(x^2 + x)(x + 1) = x^3 + 2x^2 + x \equiv x^3 + x \pmod 2$
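These carry-less rules map directly onto machine operations: addition of GF(2) polynomials is XOR, and multiplication is a shift-and-XOR loop. A small sketch (the helper names are ours), encoding bit $i$ of an integer as the $x^i$ coefficient:

```python
def gf2_add(a, b):
    """Add two GF(2) polynomials: coefficient-wise addition mod 2 is XOR."""
    return a ^ b

def gf2_mul(a, b):
    """Carry-less multiplication of two GF(2) polynomials."""
    result = 0
    while b:
        if b & 1:          # this term of b is present: add shifted copy of a
            result ^= a
        a <<= 1            # multiply a by x
        b >>= 1
    return result

# (x^3 + x) + (x + 1) = x^3 + 1  ->  0b1010 ^ 0b0011 = 0b1001
print(bin(gf2_add(0b1010, 0b0011)))  # -> 0b1001
# (x^2 + x)(x + 1) = x^3 + x     ->  carry-less 0b110 * 0b11 = 0b1010
print(bin(gf2_mul(0b110, 0b11)))     # -> 0b1010
```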

We can also divide polynomials mod 2 and find the quotient and remainder. For example, suppose we're dividing $x^3 + x^2 + x$ by $x + 1$. We would find that

$\frac{x^3 + x^2 + x}{x+1} = (x^2 + 1) - \frac{1}{x+1}$

In other words,

$(x^3 + x^2 + x) = (x^2 + 1)(x + 1) - 1 \equiv (x^2 + 1)(x + 1) + 1 \pmod 2$

The division yields a quotient of $x^2 + 1$ with a remainder of $-1$, which, since it is odd, has a last bit of 1.

In the above equations, $x^2+x+1$ represents the original message bits 111, $x+1$ is the generator polynomial, and the remainder $1$ (equivalently, $x^0$) is the CRC. The degree of the generator polynomial is 1, so we first multiplied the message by $x^1$ to get $x^3 + x^2 + x$.

## Variations

There are several standard variations on CRCs, any or all of which may be used with any CRC polynomial. Implementation variations such as endianness and CRC presentation only affect the mapping of bit strings to the coefficients of $M(x)$ and $R(x)$, and do not impact the properties of the algorithm.

• To check the CRC, instead of calculating the CRC on the message and comparing it to the CRC, a CRC calculation may be run on the entire codeword. If the result is zero, the check passes. This works because the codeword is $M(x) \cdot x^n - R(x) = Q(x) \cdot G(x)$, which is always divisible by $G(x)$.
This simplifies many implementations by avoiding the need to treat the last few bytes of the message specially when checking CRCs.
• The shift register may be initialized with ones instead of zeroes. This is equivalent to inverting the first $n$ bits of the message before feeding them into the algorithm. The CRC equation becomes $M(x) \cdot x^n + \sum_{i=m}^{m+n-1} x^i = Q(x) \cdot G(x) + R (x)$, where $m > \deg(M(x))$ is the length of the message in bits. The change this imposes on $R(x)$ is a function of the generator polynomial and the message length, $\left ( \sum_{i=m}^{m+n-1} x^i \right ) \bmod G(x)$.
This method is used because an unmodified CRC does not distinguish between two messages which differ only in the number of leading zeroes, since leading zeroes do not affect the value of $M(x)$. When this inversion is done, the CRC does distinguish between such messages.
• The CRC may be inverted before being appended to the message stream. While an unmodified CRC distinguishes between messages $M(x)$ with different numbers of trailing zeroes, it does not detect trailing zeroes appended after the CRC remainder itself. This is because all valid codewords are multiples of $G(x)$, so $x$ times that codeword is also a multiple. (In fact, this is precisely why the first variant described above works.)

In practice, the last two variations are invariably used together. They change the transmitted CRC, so must be implemented at both the transmitter and the receiver. While presetting the shift register to ones is straightforward to do at both ends, inverting affects receivers implementing the first variation, because the CRC of a full codeword that already includes a CRC is no longer zero. Instead, it is a fixed non-zero pattern, the CRC of the inversion pattern of $n$ ones.

The CRC thus may be checked either by the obvious method of computing the CRC on the message, inverting it, and comparing with the CRC in the message stream, or by calculating the CRC on the entire codeword and comparing it with an expected fixed value $C(x)$, called the check polynomial, residue or magic number. This may be computed as $C(x) = \left ( \sum_{i=n}^{2n-1} x^i \right ) \bmod G(x)$, or equivalently by computing the unmodified CRC of a message consisting of $n$ ones, $M(x) = \sum_{i=0}^{n-1} x^i$.

These inversions are extremely common but not universally performed, even in the case of the CRC-32 or CRC-16-CCITT polynomials.
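The last two variations can be sketched together in the bit-string model. The example below (function name and the toy degree-3 generator $x^3 + x + 1$ are ours, chosen for illustration) presets the register to ones and inverts the final remainder, then shows that re-running the same algorithm over a full codeword yields a fixed, message-independent residue rather than zero:

```python
def crc_variant(message_bits, generator_bits):
    """CRC with both common inversions: preset-to-ones (equivalent to
    inverting the first n message bits) and inversion of the final
    remainder. Bit strings, msbit first; assumes len(message) >= n.
    An illustrative sketch, not a standard CRC."""
    n = len(generator_bits) - 1
    bits = [int(b) for b in message_bits] + [0] * n
    for i in range(n):                    # preset the shift register to ones
        bits[i] ^= 1
    g = [int(b) for b in generator_bits]
    for i in range(len(message_bits)):
        if bits[i]:
            for j in range(n + 1):
                bits[i + j] ^= g[j]
    # Invert the remainder before appending it to the message.
    return ''.join(str(b ^ 1) for b in bits[-n:])

gen = '1011'                              # x^3 + x + 1, degree 3
for msg in ('1101', '1001'):
    codeword = msg + crc_variant(msg, gen)
    # The check residue is the same nonzero pattern for every message.
    print(msg, '->', crc_variant(codeword, gen))  # prints '101' both times
```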

## Reversed representations and reciprocal polynomials

### Polynomial representations

All the well-known CRC generator polynomials of degree $n$ have two common hexadecimal representations. In both cases, the coefficient of $x^n$ is omitted and understood to be 1.

• The msbit-first representation is a hexadecimal number with $n$ bits, the least significant bit of which is always 1. The most significant bit represents the coefficient of $x^{n-1}$ and the least significant bit represents the coefficient of $x^0$.
• The lsbit-first representation is a hexadecimal number with $n$ bits, the most significant bit of which is always 1. The most significant bit represents the coefficient of $x^0$ and the least significant bit represents the coefficient of $x^{n-1}$.

The msbit-first form is often referred to in the literature as the normal representation, while the lsbit-first is called the reversed representation. It is essential to use the correct form when implementing a CRC. If the coefficient of $x^{n-1}$ happens to be zero, the forms can be distinguished at a glance by seeing which end has the bit set.

To further confuse the matter, the paper by P. Koopman and T. Chakravarty [2][3] converts CRC generator polynomials to hexadecimal numbers in yet another way: msbit-first, but including the $x^n$ coefficient and omitting the $x^0$ coefficient. This "Koopman" representation has the advantage that the degree can be determined from the hexadecimal form and the coefficients are easy to read off in left-to-right order. However, it is not used anywhere else and is not recommended due to the risk of confusion.
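The three representations can be compared concretely for CRC-16-CCITT, $x^{16} + x^{12} + x^5 + 1$; the sketch below (the helper name is ours) derives all three hexadecimal forms from the polynomial's nonzero exponents:

```python
def poly_representations(exponents):
    """Return (normal, reversed, Koopman) hex forms of a CRC generator
    polynomial given its nonzero exponents, msbit-first conventions
    as described in the text."""
    n = max(exponents)                     # degree of the polynomial
    value = sum(1 << e for e in exponents) # full polynomial, all n+1 bits
    normal = value & ((1 << n) - 1)        # drop x^n; lsb is x^0
    reversed_form = int(format(normal, f'0{n}b')[::-1], 2)  # bit-reverse
    koopman = value >> 1                   # keep x^n, drop x^0
    return normal, reversed_form, koopman

n_, r_, k_ = poly_representations([16, 12, 5, 0])  # CRC-16-CCITT
print(hex(n_), hex(r_), hex(k_))   # -> 0x1021 0x8408 0x8810
```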

### Reciprocal polynomials

A reciprocal polynomial is created by assigning the $x^i$ through $x^0$ coefficients of one polynomial to the $x^0$ through $x^i$ coefficients of a new polynomial. That is, the reciprocal of the degree-$n$ polynomial $G(x)$ is $x^n G(x^{-1})$. Example: the reciprocal of $x^{16} + x^{15} + x^2 + 1$ is $x^{16} + x^{14} + x + 1$. The most interesting property of reciprocal polynomials, when used in CRCs, is that they have exactly the same error-detecting strength as the polynomials they are reciprocals of. The reciprocal of a polynomial generates the same codewords, only bit reversed: if all but the first $n$ bits of a codeword under the original polynomial are taken, reversed and used as a new message, the CRC of that message under the reciprocal polynomial equals the reverse of the first $n$ bits of the original codeword. But the reciprocal polynomial is not the same as the original polynomial, and the CRCs generated using it are not the same (even modulo bit reversal) as those generated by the original polynomial.
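The bit-reversal relationship can be checked mechanically. The sketch below (helper name ours, using the toy generator $x^3 + x + 1$) verifies that a valid codeword, read backwards, is a multiple of the reciprocal polynomial; note that as a bit string the reciprocal is simply the generator's string reversed:

```python
def gf2_mod(bits, gen):
    """Remainder of one GF(2) polynomial divided by another.
    Both are '0'/'1' strings, most significant bit first."""
    b = [int(c) for c in bits]
    g = [int(c) for c in gen]
    n = len(gen) - 1
    for i in range(len(b) - n):
        if b[i]:
            for j in range(n + 1):
                b[i + j] ^= g[j]
    return ''.join(map(str, b[-n:]))

g = '1011'                                   # x^3 + x + 1
codeword = '1101' + gf2_mod('1101' + '000', g)  # message + CRC
print(gf2_mod(codeword, g))                  # divisible by G: -> '000'
# The reversed codeword is divisible by the reciprocal x^3 + x^2 + 1:
print(gf2_mod(codeword[::-1], g[::-1]))      # -> '000'
```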

## Error detection strength

The error-detection ability of a CRC depends on the degree of its key polynomial and on the specific key polynomial used. The "error polynomial" $E(x)$ is the symmetric difference of the received message codeword and the correct message codeword. An error will go undetected by a CRC algorithm if and only if the error polynomial is divisible by the CRC polynomial.

• Because a CRC is based on division, no polynomial can detect errors consisting of a string of zeroes prepended to the data, or of missing leading zeroes. However, see Variations.
• All single bit errors will be detected by any polynomial with at least two terms with non-zero coefficients. The error polynomial is $x^k$, and $x^k$ is divisible only by polynomials $x^i$ where $i \le k$.
• All two bit errors separated by a distance less than the order of the primitive polynomial which is a factor of the generator polynomial will be detected. The error polynomial in the two bit case is $E(x) = x^i + x^k = x^k \cdot (x^{i-k} + 1), \; i > k$. As noted above, the $x^k$ term will not be divisible by the CRC polynomial, which leaves the $x^{i-k} + 1$ term. By definition, the smallest value of ${i-k}$ such that a polynomial divides $x^{i-k} + 1$ is the polynomial's order or exponent. The polynomials with the largest order are called primitive polynomials, and for polynomials of degree $n$ with binary coefficients, have order $2^n - 1$.
• All errors in an odd number of bits will be detected by a polynomial which is a multiple of $x+1$. This is equivalent to the polynomial having an even number of terms with non-zero coefficients. This capacity assumes that the generator polynomial is the product of $x+1$ and a primitive polynomial of degree $n-1$, since all primitive polynomials except $x+1$ have an odd number of non-zero coefficients.
• All burst errors of length $n$ will be detected by any polynomial of degree $n$ or greater which has a non-zero $x^0$ term.

(As an aside, there is never a reason to use a polynomial with a zero $x^0$ term. Recall that a CRC is the remainder of the message polynomial multiplied by $x^n$, divided by the CRC polynomial. A polynomial with a zero $x^0$ term always has $x$ as a factor. So if $K(x)$ is the original CRC polynomial and $K(x) = x \cdot K'(x)$, then

$M(x) \cdot x^{n-1} = Q(x) \cdot K'(x) + R(x)$
$M(x) \cdot x^n = Q(x) \cdot x \cdot K'(x) + x \cdot R(x)$
$M(x) \cdot x^n = Q(x) \cdot K(x) + x \cdot R(x)$

That is, the CRC of any message with the $K(x)$ polynomial is the same as that of the same message with the $K'(x)$ polynomial with a zero appended. It is just a waste of a bit.)

The combination of these factors means that good CRC polynomials are often primitive polynomials (which have the best 2-bit error detection) or primitive polynomials of degree $n-1$, multiplied by $x+1$ (which detects all odd numbers of bit errors, and has half the two-bit error detection ability of a primitive polynomial of degree $n$).[2]
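The divisibility criterion lends itself to brute-force verification. The sketch below (helper name ours) uses the degree-4 generator $(x+1)(x^3+x+1) = x^4+x^3+x^2+1$ to confirm that every odd-weight error up to 12 bits long is caught, while a 2-bit error spaced at the primitive factor's order (7) slips through, as the analysis above predicts:

```python
def detects(error, gen, degree):
    """True if a CRC with this generator detects the given error pattern,
    i.e. the error polynomial is NOT divisible by the generator over GF(2).
    Integers encode polynomials: bit i is the x^i coefficient."""
    while error.bit_length() > degree:
        # Cancel the leading term by XORing an aligned copy of the generator.
        error ^= gen << (error.bit_length() - degree - 1)
    return error != 0

G, n = 0b11101, 4   # (x + 1)(x^3 + x + 1) = x^4 + x^3 + x^2 + 1

# Every error with an odd number of bits set, up to 12 bits long, is caught
# because x + 1 divides G but never divides an odd-weight polynomial:
odd_caught = all(detects(e, G, n)
                 for e in range(1, 1 << 12) if bin(e).count('1') % 2 == 1)
print(odd_caught)                  # -> True

# x^3 + x + 1 has order 7, so the 2-bit error x^7 + 1 goes undetected:
print(detects(0b10000001, G, n))   # -> False
```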

### Bitfilters

The analysis technique using bitfilters[2] makes it possible to determine the properties of a given generator polynomial very efficiently. The results are the following:

1. All burst errors (except one) of length no longer than the generator polynomial can be detected by any generator polynomial of the form $1+\cdots+x^n$, that is, one with nonzero $x^0$ and $x^n$ terms. This includes 1-bit errors (bursts of length 1). The maximum length is $n+1$, where $n$ is the degree of the generator polynomial (which itself has a length of $n+1$). The exception is a burst whose bit pattern is identical to that of the generator polynomial.
2. All errors in an odd number of bits are detected by generator polynomials with an even number of terms.
3. 2-bit errors separated by a distance that is a multiple of the period of the longest even-parity bitfilter of a generator polynomial are not detected; all others are detected. For degrees up to 32 there is an optimal generator polynomial of that degree with an even number of terms; in this case the period mentioned above is $2^{n-1}-1$. For $n=16$ this means that blocks of 32767 bits contain no undetected 2-bit errors. With an odd number of terms in the generator polynomial the period can be $2^n-1$; however, such generator polynomials do not detect all errors in an odd number of bits, so they should be avoided. A list of the corresponding generators with an even number of terms can be found in the link mentioned at the beginning of this section.
4. All single-bit errors within the bitfilter period mentioned above (for generator polynomials with an even number of terms) can be identified uniquely by their residual, so the CRC method can be used to correct single-bit errors as well (within those limits, e.g. 32767 bits with optimal generator polynomials of degree 16). Since all errors in an odd number of bits leave an odd residual and all errors in an even number leave an even residual, 1-bit errors and 2-bit errors can be distinguished. However, like other SECDED techniques, CRCs cannot always distinguish between 1-bit errors and 3-bit errors. When 3 or more bit errors occur in a block, CRC bit-error correction will itself be erroneous and produce further errors.
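Single-bit correction via residuals can be illustrated with a toy generator: for the primitive polynomial $x^3+x+1$ every single-bit error position within the period $2^3-1=7$ leaves a distinct residual, so the residual pinpoints the flipped bit (helper name, codeword, and block size below are illustrative choices of ours):

```python
def gf2_rem(value, gen, degree):
    """Remainder of 'value' modulo 'gen' over GF(2); integers encode
    polynomials with bit i as the x^i coefficient."""
    while value.bit_length() > degree:
        value ^= gen << (value.bit_length() - degree - 1)
    return value

gen, degree = 0b1011, 3     # x^3 + x + 1, primitive, period 2^3 - 1 = 7
block = 7                   # protected block length equals the period

# Each single-bit error position maps to a unique residual (syndrome).
syndromes = {gf2_rem(1 << pos, gen, degree): pos for pos in range(block)}
print(len(syndromes))       # -> 7: all seven syndromes are distinct

# Correct a 1-bit error in a received 7-bit codeword:
received = 0b1101001 ^ (1 << 4)          # valid codeword with bit 4 flipped
syndrome = gf2_rem(received, gen, degree)
corrected = received ^ (1 << syndromes[syndrome])
print(bin(corrected))       # -> 0b1101001, the original codeword
```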