
The contents of this page are licensed under the Creative Commons Attribution-Sharealike 3.0 License.


As part of our assignment we have to make a Wikipedia entry for the same topic. Hence I will be copying/donating the same text to Wikipedia too. I am writing this message here to assure you that I own this page and I only will be doing the corresponding Wikipedia entry under the user name : script3r. Also I assure you that this message will not be removed from this page for future references. Thanks.

Motivation


Many codes have been designed to correct random errors. Sometimes, however, channels may introduce errors which are localized in a short interval. Such errors occur in a burst (called burst errors because they occur in many consecutive bits). Examples of burst errors can be found extensively in storage media. These errors may be due to physical damage such as a scratch on a disc or a stroke of lightning in the case of a wireless channel. They are not independent; they tend to be spatially concentrated. If one bit has an error, it is likely that the adjacent bits are also corrupted. The methods used to correct random errors are inefficient at correcting burst errors. This motivates burst error correcting codes.

Definitions


Burst Error

A burst error is a contiguous sequence of symbols, received over a data transmission channel, such that the first and last symbols are in error and there exists no contiguous sub-sequence of m (referred to as the guard band of the error burst) correctly received symbols within the error burst. In other words, a burst error is a string of corrupt data, measured as the length between (and including) the first and last error signals.


For example, e = (00000010001) is a burst of length 5.



A burst error of length t can be described in terms of a polynomial as $e(x) = x^{i}b(x) \pmod{x^{n} - 1}$ , where $b(x)$ is a polynomial of degree (t - 1) which describes the error pattern, and i indicates where the burst begins.

For example, in e = (00000010001), we have $e(x) = x^6(1 + x^4)$

Similarly, when e = (01010110000), $e(x) = x(1 + x^2 + x^4 + x^5)$
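
The polynomial description above is easy to compute; the following short Python sketch (illustrative only, assuming 0-indexed positions) extracts $i$ and the coefficients of $b(x)$ from an error vector and reproduces both examples.

def burst_polynomial(e):
    # i is the position of the first nonzero entry; the returned list holds the
    # coefficients of b(x) (lowest degree first), i.e. the burst shifted down to x^0.
    i = next(k for k, bit in enumerate(e) if bit)
    last = max(k for k, bit in enumerate(e) if bit)
    return i, e[i:last + 1]

print(burst_polynomial([0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1]))  # (6, [1, 0, 0, 0, 1]) -> x^6 (1 + x^4)
print(burst_polynomial([0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0]))  # (1, [1, 0, 1, 0, 1, 1]) -> x (1 + x^2 + x^4 + x^5)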

A burst of length $l$ [8]

Say a codeword $C$ is transmitted, and it is received as $Y = C + E$. Then, the error vector $E$ is called a burst of length $l$ if the number of nonzero components of $E$ is confined to $l$ consecutive components. For example, $E = (0\textbf{1000011}0)$ is a burst of length $l = 7$.

Although this definition is sufficient to describe what a burst error is, the majority of the tools developed for burst error correction rely on cyclic codes. This motivates our next definition.

A cyclic burst of length $l$ [8]

An error vector $E$ is called a cyclic burst error of length $l$ if its nonzero components are confined to $l$ cyclically consecutive components. For example, the previously considered error vector $E = (010000110)$, is a cyclic burst of length $l = 5$, since we consider the error starting at position $6$ and ending at position $1$. Notice the indices are $0$-based, that is, the first element is at position $0$.

For the remainder of this article, we will use the term burst to refer to a cyclic burst, unless noted otherwise.

Burst description [8]

It is often useful to have a compact definition of a burst error that encompasses not only its length, but also the pattern and location of the error. We define a burst description to be a tuple $(P,L)$ where $P$ is the pattern of the error (that is, the string of symbols beginning with the first nonzero entry in the error pattern and ending with the last nonzero symbol), and $L$ is the location, in the codeword, where the burst can be found.

For example, the burst description of the error pattern $E = (010000110)$ is $D = (1000011,1)$. Notice that such a description is not unique; for instance, $D' = (11001,6)$ describes the same burst error. In general, if the number of nonzero components in $E$ is $w$, then $E$ will have $w$ different burst descriptions (each starting at a different nonzero entry of $E$).
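
As an illustration of this definition (the code is a sketch, not taken from the references), the following Python snippet enumerates all burst descriptions of an error vector using cyclic, 0-based indexing; applied to $E = (010000110)$ it reproduces the descriptions discussed above, and the minimum pattern length gives the cyclic burst length of $5$.

def burst_descriptions(e):
    # One description starts at each nonzero entry; its pattern runs cyclically
    # from that entry to the last nonzero entry met before wrapping back around.
    n = len(e)
    starts = [i for i, s in enumerate(e) if s != 0]
    descriptions = []
    for start in starts:
        span = max((j - start) % n for j in starts)       # offset of the last nonzero entry
        pattern = [e[(start + k) % n] for k in range(span + 1)]
        descriptions.append((pattern, start))
    return descriptions

E = [0, 1, 0, 0, 0, 0, 1, 1, 0]
for pattern, location in burst_descriptions(E):
    print(''.join(map(str, pattern)), location)           # 1000011 1, 11001 6, 100100001 7
print(min(len(p) for p, _ in burst_descriptions(E)))      # cyclic burst length: 5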

We now present a theorem that remedies some of the issues that arise by the ambiguity of burst descriptions.

Theorem Let $E$ be an error vector of length $n$ with two burst descriptions $(P_1,L_1)$ and $(P_2,L_2)$. If $\mathrm{length}(P_1) + \mathrm{length}(P_2) \le n + 1$ (where $\mathrm{length}(y)$ is the number of symbols in the error pattern $y$), then the two descriptions are identical (that is, their components are equivalent). [2]

Proof Let $w$ be the weight (the number of nonzero entries) of $E$. Then $E$ has exactly $w$ burst descriptions. For $w = 0$ or $w = 1$, there is nothing to prove, so we consider the cases where $w \ge 2$. Assume that the descriptions are not identical. Each nonzero entry of $E$ appears in the pattern, so the components of $E$ not included in the pattern form a cyclic run of 0's, beginning just after the last nonzero entry of the pattern and ending just before its first nonzero entry. We call the set of indices corresponding to this run the zero run of the description. Let us consider the zero runs for the error pattern $E = (010000110)$.

Error pattern | Location | Zero run
1000011       | 1        | (8, 0)
11001         | 6        | (2, 3, 4, 5)
100100001     | 7        | none

We immediately observe that each burst description has a zero run associated with it, and, most importantly, these zero runs are pairwise disjoint. Since $E$ has exactly $n - w$ zero entries, the disjoint zero runs of the two descriptions together account for $(n - \mathrm{length}(P_1)) + (n - \mathrm{length}(P_2)) \le n - w$ zeros. But since $\mathrm{length}(P_1) + \mathrm{length}(P_2) \le n+1$, the left-hand side is at least $n - 1$, which contradicts $w \ge 2$. Thus, the burst error descriptions are identical.

A corollary of the above theorem is that we cannot have two distinct burst descriptions of length $\le (n+1)/2$ for the same error vector.


Cyclic Codes for Burst Error Correction


Cyclic codes are defined as follows: think of the $q$ symbols as elements of $\mathbb{F}_q$, and of words of length $n$ as polynomials over $\mathbb{F}_q$ of degree $\le n-1$, where the individual symbols of a word correspond to the coefficients of the polynomial. To define a cyclic code, we pick a fixed polynomial dividing $x^n - 1$, called the generator polynomial. The codewords of this cyclic code are all the polynomials of degree $\le n-1$ that are divisible by this generator polynomial.

Codewords are polynomials of degree $\leq n-1$. Suppose that the generator polynomial $g(x)$ has degree $r$. Polynomials of degree $\leq n-1$ that are divisible by $g(x)$ result from multiplying $g(x)$ by polynomials of degree $\leq n-1-r$. We have $q^{n-r}$ such polynomials. Each one of them corresponds to a codeword. Therefore, $k=n-r$ for cyclic codes.

Cyclic codes can detect all bursts of length up to $l=n-k=r$. We will see later that the burst error detection ability of any $(n, k)$ code is upper bounded by $l \leq n-k$. Because cyclic codes meet that bound, they are considered optimal for burst error detection. This claim is proved by the following theorem:

Theorem Every cyclic code with generator polynomial of degree $r$ can detect all bursts of length $\leq r$.

Proof To prove this, we need to show that if we add a burst of length $\leq r$ to a codeword (i.e. to a polynomial that is divisible by $g(x)$), then the result is not another codeword (i.e. the corresponding polynomial is not divisible by $g(x)$). It suffices to show that no burst of length $\leq r$ is divisible by $g(x)$. Such a burst has the form $x^i b(x)$, where $b(x)$ has degree $< r$. Since $g(x)$ divides $x^n - 1$, its constant term is nonzero, so $g(x)$ is relatively prime to $x^i$. Hence, if $g(x)$ divided $x^i b(x)$, it would have to divide $b(x)$, which is impossible because $b(x)$ has degree smaller than $r$. Therefore, no burst of length $\leq r$ is divisible by $g(x)$.

The above proof suggests a simple algorithm for burst error detection in cyclic codes: given a received word (i.e. a polynomial of degree $\leq n-1$), compute its remainder when divided by $g(x)$. If the remainder is zero (i.e. if the word is divisible by $g(x)$), then it is a valid codeword. Otherwise, report an error. Note that subtracting this remainder from the received word always yields a polynomial divisible by $g(x)$ (i.e. a valid codeword), but in general this recovers the transmitted codeword only when the burst is confined to the $r$ lowest-order positions, in which case the remainder equals the error pattern itself.
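
The following minimal GF(2) sketch illustrates the detection procedure just described. The generator $g(x) = 1 + x + x^3$ (which generates a $[7,4]$ cyclic Hamming code) and the test words are illustrative choices, not taken from the references.

def gf2_mod(word, g):
    # Polynomial remainder over GF(2); polynomials are bit lists, lowest degree first.
    r = list(word)
    for i in range(len(r) - 1, len(g) - 2, -1):    # cancel leading terms down to deg(g)
        if r[i]:
            for j, gj in enumerate(g):
                r[i - (len(g) - 1) + j] ^= gj
    return r[:len(g) - 1]                           # remainder has degree < deg(g)

g = [1, 1, 0, 1]                      # g(x) = 1 + x + x^3
codeword = [1, 1, 1, 0, 0, 1, 0]      # g(x) * (1 + x^2) = 1 + x + x^2 + x^5
print(gf2_mod(codeword, g))           # [0, 0, 0]: divisible by g(x), no error detected
received = codeword[:]
received[2] ^= 1
received[3] ^= 1                      # a burst of length 2
print(gf2_mod(received, g))           # nonzero remainder: the burst is detected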

By the upper bound on burst error detection ($l \leq n-k = r$), we know that a cyclic code cannot detect all bursts of length $l > r$. But luckily, it turns out that cyclic codes can indeed detect most bursts of length $l > r$. The reason is that detection fails only when the burst is divisible by $g(x)$. Over a binary alphabet, there are $2^{l-2}$ bursts of length exactly $l$ at any fixed location (the pattern begins and ends with 1). Out of those, only $2^{l-2-r}$ are divisible by $g(x)$. Therefore, the detection failure probability is very small ($2^{-r}$), assuming a uniform distribution over all bursts of length $l$.

We now consider a fundamental theorem about cyclic codes that will aid in designing efficient burst-error correcting codes, by categorizing bursts into different cosets.

Theorem A linear code C is an l-burst-error-correcting code iff all the burst errors of length $l$ or less lie in distinct cosets of $C$.

Proof Suppose two distinct burst errors $e_1$ and $e_2$ of length $l$ or less lie in the same coset of the code $C$. Then their difference $c = e_1 - e_2$ is a codeword. Hence, if we receive the word $e_1$, we could decode it either to $0$ or to $c$, so the error is not uniquely correctable. Conversely, if all burst errors of length $l$ or less lie in distinct cosets, then each such burst is determined by its syndrome, and the error can then be corrected through its syndrome. Thus, a linear code $C$ is an $l$-burst-error-correcting code if and only if all the burst errors of length $l$ or less lie in distinct cosets of $C$.

Theorem Let C be an [n, k]-linear l-burst-error-correcting code. Then no nonzero burst of length $2l$ or less can be a codeword.

Proof Suppose there exists a codeword $c$ which is a burst of length $2l$ or less. Then $c$ has the pattern $(0, 1, u, v, 1, 0)$, where $u$ and $v$ are two words of length $\le l - 1$. Hence, the words $w = (0, 1, u, 0, 0, 0)$ and $c - w = (0, 0, 0, v, 1, 0)$ are two bursts of length $\le l$. Since they differ by the codeword $c$, they belong to the same coset. This contradicts the theorem stated above. Thus, no nonzero burst of length $2l$ or less can be a codeword.

Burst Error Correction Bounds


Upper bounds on Burst Error Detection and Correction

By an upper bound, we mean a limit on the error detection ability that no code can exceed. Suppose that we want to design an $(n, k)$ code that can detect all burst errors of length $\leq l$. A natural question to ask is: given $n$ and $k$, what is the largest such $l$ that any $(n, k)$ code can achieve? In other words, what is the upper bound on the length $l$ of bursts that we can detect using any $(n, k)$ code? The following theorem provides an answer to this question.

Theorem The burst error detection ability of any $(n, k)$ code is $l \leq n-k$.

Proof To prove this, we start by making the following observation: A code can detect all bursts of length $\leq l$ if and only if no two codewords differ by a burst of length $\leq l$. Suppose that we have two code words $\mathbf{c}_1$ and $\mathbf{c}_2$ that differ by a burst $\mathbf{b}$ of length $\leq l$. Upon receiving $\mathbf{c}_1$, we can not tell whether the transmitted word is indeed $\mathbf{c}_1$ with no transmission errors, or whether it is $\mathbf{c}_2$ with a burst error $\mathbf{b}$ that occurred during transmission. Now, suppose that every two codewords differ by more than a burst of length $l$. Even if the transmitted codeword $\mathbf{c}_1$ is hit by a burst $\mathbf{b}$ of length $l$, it is not going to change into another valid codeword. Upon receiving it, we can tell that this is $\mathbf{c}_1$ with a burst $\mathbf{b}$. By the above observation, we know that no two codewords can share the first $n-l$ symbols. The reason is that even if they differ in all the other $l$ symbols, they are still going to be different by a burst of length $l$. Therefore, the number of codewords $q^k$ satisfies $q^k \leq q^{n-l}$. By taking the logarithm to the base $q$ and rearranging, we can see that $l \leq n-k$.

Now, we repeat the same question but for error correction: given $n$ and $k$, what is the upper bound on the length $l$ of bursts that we can correct using any $(n, k)$ code? The following theorem provides a preliminary answer to this question. However, later on we will see that the Rieger bound provides a stronger answer.

Theorem The burst error correction ability of any $(n, k)$ code satisfies $l \leq n-k-\mathrm{log}_q (n-l)+2$

Proof We start with the following observation: A code can correct all bursts of length $\leq l$ if and only if no two codewords differ by the sum of two bursts of length $\leq l$. Suppose that two codewords $\mathbf{c}_1$ and $\mathbf{c}_2$ differ by two bursts $\mathbf{b}_1$ and $\mathbf{b}_2$ of length $\leq l$ each. Upon receiving $\mathbf{c}_1$ hit by a burst $\mathbf{b}_1$, we could interpret that as if it was $\mathbf{c}_2$ hit by a burst $-\mathbf{b}_2$. We can not tell whether the transmitted word is $\mathbf{c}_1$ or $\mathbf{c}_2$. Now, suppose that every two codewords differ by more than two bursts of length $l$. Even if the transmitted codeword $\mathbf{c}_1$ is hit by a burst of length $l$, it is not going to look like another codeword that has been hit by another burst. For each codeword $\mathbf{c}$, let $\textnormal{B}(\mathbf{c})$ denote the set of all words that differ from $\mathbf{c}$ by a burst of length $\leq l$. Notice that $\textnormal{B}(\mathbf{c})$ includes $\mathbf{c}$ itself. By the above observation, we know that for two different codewords $\mathbf{c}_i$ and $\mathbf{c}_j$, $\textnormal{B}(\mathbf{c}_i)$ and $\textnormal{B}(\mathbf{c}_j)$ are disjoint. We have $q^k$ codewords. Therefore, we can say that $q^k |\textnormal{B}(\mathbf{c})| \leq q^n$. Moreover, we have $(n-l)q^{l-2} \leq |\textnormal{B}(\mathbf{c})|$. By plugging the latter inequality into the former, then taking the base $q$ logarithm and rearranging, we get the above theorem. This theorem is weaker than the Rieger bound, which we will discuss later.

Rieger Bound

Theorem If $l$ is the burst error correcting ability of an $[n, k]$ linear block code, then $2l \le n - k$.

Proof Any linear code that can correct all bursts of length $l$ or less cannot have a nonzero burst of length $2l$ or less as a codeword (by the theorem above). Now consider the $q^{2l}$ vectors that are zero outside their first $2l$ components. No two distinct such vectors can lie in the same coset: their difference would be a codeword that is nonzero only in its first $2l$ components, i.e. a burst of length $\le 2l$, which is impossible. Therefore, these $q^{2l}$ vectors lie in distinct cosets. Since the code has exactly $q^{n-k}$ cosets, we must have $q^{2l} \le q^{n-k}$, i.e. $2l \le n - k$. This proves the Rieger bound. A linear burst-error-correcting code achieving the above Rieger bound is called an optimal burst-error-correcting code.

Implications of Rieger Bound

The implications of this bound concern burst error correcting efficiency as well as the interleaving schemes that work for burst error correction. We define the notion of burst error correcting efficiency below:

Burst error correcting efficiency: The burst error correcting efficiency of an $(n, k)$ linear block code with burst error correcting capability $l$ is defined as $z = \frac{2l}{n-k}$. After applying the Rieger bound, we get $z \le 1$, with equality for optimal codes. Efficient burst error correcting codes can be found by a greedy strategy that is very similar to Gilbert's greedy code construction. To construct such a code, we start with an empty code and keep adding new codewords while making sure that no added codeword differs from an existing codeword by a sum of two bursts of length $\leq l$. We stop when we cannot add any more codewords.

Scheme for burst error correction: Consider a $q$-ary code having $m$ codewords of $n$ letters each, capable of correcting bursts of length up to $l$. For such a code, the Rieger bound shows that if two bursts of length $l$ lie in distinct cosets, then the redundancy is at least $2l$. We conclude that to obtain a code meeting the Rieger bound by interleaving, the constituent codes must be MDS (Maximum Distance Separable). This is single-dimensional interleaving. If we want to design a two-dimensional code by interleaving MDS single-error-correcting codes, then the condition for the code to achieve the Rieger bound is that the interleaving scheme be optimal.

Further bounds on Burst Error Correction

There is more than one upper bound on the achievable code rate of linear block codes for multiple phased-burst correction (MPBC). One such bound is constrained to a maximum correctable cyclic burst length within every subblock, or equivalently a constraint on the minimum error free length or gap within every phased-burst. This bound, when reduced to the special case of a bound for single burst correction, is the Abramson bound (a corollary of the Hamming bound for burst-error correction) when the cyclic burst length is less than half the block length.[4]

Theorem For $1 \le l \le (n+1)/2$, over a binary alphabet, there are $n2^{l-1}+1$ vectors of length $n$ which are bursts of length $\le l$.[8]

Proof Since the burst length is $\le (n+1)/2$, each such burst has a unique burst description. The burst can begin at any of the $n$ positions. Each pattern begins with the symbol 1 and has length at most $l$; we can think of the patterns with a fixed starting position as the set of all strings of length $l$ that begin with 1 (trailing zeros simply shorten the burst), giving $2^{l-1}$ possibilities. Thus, there are a total of $n2^{l-1}$ nonzero bursts of length $\le l$. Including the all-zero vector, we have $n2^{l-1}+1$ vectors representing bursts of length $\le l$.
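
This count is small enough to verify by brute force; the following Python sketch (a toy check with made-up parameters $n = 7$, $l = 3$) enumerates all binary vectors of length $n$ and confirms the total $n2^{l-1}+1 = 29$.

from itertools import product

def is_burst(v, l):
    # True iff v is the zero vector or all of its nonzero entries fit inside
    # some window of l cyclically consecutive positions.
    n = len(v)
    if not any(v):
        return True
    return any(all(v[(s + k) % n] == 0 for k in range(l, n)) for s in range(n))

n, l = 7, 3
count = sum(is_burst(v, l) for v in product([0, 1], repeat=n))
print(count, n * 2 ** (l - 1) + 1)     # 29 29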

Theorem If $1 \le l \le (n+1)/2$, then a binary $l$-burst error correcting code has at most $2^n/(n2^{l-1}+1)$ codewords.

Proof Since $l \le (n+1)/2$, there are $n2^{l-1}+1$ bursts of length $\le l$. If the code has $M$ codewords, then there are $M(n2^{l-1}+1)$ words obtained by adding a burst of length $\le l$ (possibly the zero burst) to a codeword. All of these words must be distinct, for otherwise two codewords would differ by a sum of two bursts of length $\le l$ and the code could not correct all such bursts. Therefore, $M(n2^{l-1}+1) \le 2^n$, which implies $M \le 2^n/(n2^{l-1}+1)$, as was to be shown.

Theorem (Abramson's Bounds) If a binary linear $[n,k]$ $l$-burst error correcting code satisfies $1 \le l \le (n+1)/2$, then its block length must satisfy:

$n \le 2^{r-l+1} - 1$,

where $r = n-k$ is the code redundancy. An alternative formulation is

$r \ge \lceil \log_2(n+1) \rceil + (l-1)$.

Proof For a linear $[n,k]$ code, there are $2^k$ codewords. By our previous result, we know that $2^k \le \frac{2^n}{n2^{l-1}+1}$. Isolating $n$, we get $n \le 2^{r-l+1}-2^{-(l-1)}$. Since $n$ must be an integer, we have $n \le 2^{r-l+1}-1$. Rearranging this final result gives the bound on $r$.
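
As a quick numerical illustration of the second form of the bound (the helper function below is a hypothetical name, and the parameters anticipate the Fire code example later in this article):

import math

def abramson_min_redundancy(n, l):
    # Minimum redundancy r = n - k demanded by the Abramson bound for a binary
    # l-burst-error-correcting code of length n, valid for 1 <= l <= (n+1)/2.
    return math.ceil(math.log2(n + 1)) + (l - 1)

print(abramson_min_redundancy(279, 5))   # 13; the Fire code example below uses r = deg g(x) = 14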

Fire Codes [3] [4] [5]


While cyclic codes in general are powerful tools for detecting burst errors, we now consider a family of binary cyclic codes named Fire codes, which possess good single burst error correction capabilities. By a single burst, say of length $l$, we mean that all the errors that a received codeword possesses lie within a fixed span of $l$ digits.

Let $p(x)$ be an irreducible polynomial of degree $m$ over $\mathbb{F}_2$, and let $p$ be the period of $p(x)$. The period of $p(x)$, and indeed of any polynomial, is defined to be least positive integer $r$ such that $p(x) \mid (x^r - 1)$. Let $l$ be a positive integer satisfying $l \le m$ and $2l-1$ not divisible by $p$, where $p$ is the period of $p(x)$. An $l$-burst-error correcting Fire Code $G$ is defined by the following generator polynomial: $g(x) = (x^{2l-1}+1)p(x)$.

Theorem $p(x)$ and $(x^{2l-1}+1)$ are relatively prime

Proof Assume they are not. Then let $d(x) = \textnormal{GCD}(p(x),(x^{2l-1}+1))$. Since $p(x)$ is irreducible, $\textnormal{deg}(d(x))$ is either $0$ or $\textnormal{deg}(p(x)) = m$. If $\textnormal{deg}(d(x)) = m$, then $p(x) = c\;d(x)$ for some constant $c$, and so $p(x)$ divides $(x^{2l-1}+1)$. But, by the following theorem, $p(x)$ divides $x^{2l-1}+1 = x^{2l-1}-1$ (over $\mathbb{F}_2$) only if its period $p$ divides $2l-1$, which contradicts our requirement that $2l-1$ not be divisible by $p$. Thus, $\textnormal{deg}(d(x))$ is indeed $0$, making $p(x)$ and $(x^{2l-1}+1)$ relatively prime.

Theorem If $p(x)$ is a polynomial of period $p$, then $p(x)$ divides $x^k-1$ if and only if $p \mid k$.

Proof If $p \mid k$, then $x^k-1 = (x^p-1)(1 + x^p + x^{2p} + \ldots + x^{k-p})$. Thus, $p(x)$ divides $x^k-1$. Conversely, let $p(x)$ divide $x^k-1$. Then $k \ge p$, and we show that $p$ divides $k$ by induction on $k$. The base case $k=p$ is immediate, so assume $k > p$. Since $p(x)$ has period $p$, it divides both $x^p -1 = (x-1)(1 + x + \ldots + x^{p-1})$ and $x^k - 1 = (x-1)(1 + x + \ldots + x^{k-1})$.

But $p(x)$ is irreducible (and we may assume $p(x) \ne x - 1$, for which the claim is trivial), therefore it must divide both $(1 + x + \ldots + x^{p-1})$ and $(1 + x + \ldots + x^{k-1})$; thus, it also divides the difference of these two polynomials, $x^p(1 + x + \ldots + x^{k-p-1})$. Since $p(x)$ is relatively prime to $x$, it follows that $p(x)$ divides $(1 + x + \ldots + x^{k-p-1})$. Finally, it also divides $x^{k-p}-1 = (x-1)(1 + x + \ldots + x^{k-p-1})$. By the induction hypothesis, $p \mid k-p$, and therefore $p \mid k$.

A corollary to this theorem is that, since $x^p - 1$ has period $p$, it divides $x^k-1$ if and only if $p \mid k$.

Theorem The Fire Code is $l$-burst error correcting [2,3]

If we can show that all bursts of length $l$ or less occur in different cosets, we can use them as coset leaders that form correctable error patterns. The reason is simple: each coset has a unique syndrome associated with it, and if all bursts of length $l$ or less occur in different cosets, then each has a unique syndrome, facilitating error correction.

Proof Let $x^ia(x)$ and $x^jb(x)$ be polynomials of degrees $l_1-1$ and $l_2-1$, representing bursts of length $l_1$ and $l_2$ respectively, with $l_1 \le l$ and $l_2 \le l$. The integers $i$ and $j$ represent the starting positions of the bursts, and are less than the block length of the code. For the sake of contradiction, assume that $x^{i}a(x)$ and $x^{j}b(x)$ are in the same coset. Then, $v(x) = x^ia(x) + x^jb(x)$ is a valid codeword (since both terms are in the same coset). Without loss of generality, pick $i \le j$. By the division theorem, dividing $j-i$ by $2l-1$ yields $j-i = g(2l-1)+r$, for integers $g$ and $r$ with $0 \le r < 2l-1$. We rewrite the polynomial $v(x)$ as follows:

$v(x) = x^ia(x) + x^{i + g(2l-1) + r}b(x)$

        $= x^ia(x) + x^{i + g(2l-1) + r}b(x) + 2x^{i+r}b(x)$
        $= x^i(a(x) + x^rb(x)) + x^{i+r}b(x)(x^{g(2l-1)}+1)$

Notice that in the second manipulation we introduced the term $2x^{i+r}b(x)$. We are allowed to do so, since Fire codes operate on $\mathbb{F}_2$. By our assumption, $v(x)$ is a valid codeword, and thus must be a multiple of $g(x)$. As mentioned earlier, since the factors of $g(x)$ are relatively prime, $v(x)$ has to be divisible by $x^{2l-1}+1$. Looking closely at the last expression derived for $v(x)$, we notice that $x^{g(2l-1)}+1$ is divisible by $x^{2l-1}+1$ (by the corollary of our previous theorem). Therefore, $a(x) + x^rb(x)$ is either divisible by $x^{2l-1}+1$ or is $0$. Applying the division theorem again, we get

$a(x) + x^rb(x) = d(x)(x^{2l-1}+1)$

for some polynomial $d(x)$. Let $\delta = \deg(d(x))$; then $\deg(d(x)(x^{2l-1}+1)) = \delta + 2l - 1$. Notice that $\deg(a(x)) = l_1 - 1$, which is clearly less than $2l-1$. This means that the degree of $a(x) + x^rb(x)$ is determined by the term $x^rb(x)$, and equals $r + l_2 - 1$. Equating the degrees of both sides gives $r + l_2 - 1 = 2l - 1 + \delta$.

Since $l_1 \le l$ and $l_2 \le l$, this gives $r = 2l - l_2 + \delta \ge l + \delta \ge l_1 + \delta$, which implies $r > l_1 - 1$ and $r > \delta$. Notice that if we expand $a(x) + x^rb(x)$ we get

$1 + a_1x + a_2x^2 + \ldots + x^{l_1-1} + x^r(1 + b_1x + b_2x^2 + \ldots + x^{l_2-1})$.

In particular, notice that the monomial $x^r$ appears in the above expansion. But since $\delta < r < 2l - 1$, the expression $d(x)(x^{2l-1}+1)$ contains no monomial of degree $r$; therefore $d(x) = 0$ and subsequently $a(x) + x^rb(x) = 0$. This requires $r = 0$ and $a(x) = b(x)$. We can then revise our division of $j-i$ by $2l-1$ to reflect $r=0$, that is, $j-i = g(2l-1)$. Substituting back into $v(x)$ gives us

$v(x) = x^ib(x)(x^{j-i}+1)$.

Since the degree of $b(x)$ is $l_2-1 < l \le m$, we have $\deg(b(x)) < \deg(p(x)) = m$. But $p(x)$ is irreducible; therefore $b(x)$ and $p(x)$ must be relatively prime. Since $v(x)$ is a codeword, it is divisible by $p(x)$; as $p(x)$ is relatively prime to both $x^i$ and $b(x)$, it must divide $x^{j-i}+1$. Therefore, $j-i$ must be a multiple of $p$. But it must also be a multiple of $2l-1$, which implies it must be a multiple of $n = \mathrm{lcm}(2l-1,p)$, which is precisely the block length of the code. Since $0 \le j-i < n$, we conclude that $j - i = 0$, i.e. $i = j$ and $a(x) = b(x)$, so the two bursts are identical. Hence two distinct bursts of length $\le l$ cannot lie in the same coset; each has a unique syndrome and is therefore correctable.

Example: 5-burst error correcting Fire Code

With the theory presented in the above section, let us consider the construction of a $5$-burst error correcting Fire code. Remember that to construct a Fire code, we need an irreducible polynomial $p(x)$ and an integer $l$ representing the burst error correction capability of our code, and we need to satisfy the property that $2l-1$ is not divisible by the period of $p(x)$. With these requirements in mind, consider the irreducible polynomial $p(x) = 1 + x^2 + x^5$, and let $l = 5$. Since $p(x)$ is a primitive polynomial, its period is $2^5 - 1 = 31$. We confirm that $2l - 1 = 9$ is not divisible by $31$. Thus, $g(x) = (x^9+1)(1 + x^2 + x^5) = 1 + x^2 + x^5 + x^9 + x^{11} + x^{14}$ is a Fire code generator. We can calculate the block length of the code by evaluating the least common multiple of $p$ and $2l-1$. In other words, $n = \mathrm{lcm}(9,31) = 279$. Thus, the Fire code above is a cyclic code capable of correcting any burst of length $5$ or less.
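
The arithmetic above can be double-checked with a few lines of Python (an illustrative sketch; the helper names are made up): multiplying $x^9+1$ by $p(x)$ over GF(2) recovers the generator, and the block length follows from the least common multiple.

from math import gcd

def poly_mul_gf2(a, b):
    # Multiply two GF(2) polynomials given as bit lists, lowest degree first.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

l = 5
p = [1, 0, 1, 0, 0, 1]                        # p(x) = 1 + x^2 + x^5, period 31
factor = [1] + [0] * (2 * l - 2) + [1]        # x^(2l-1) + 1 = x^9 + 1
g = poly_mul_gf2(factor, p)
print([i for i, c in enumerate(g) if c])      # [0, 2, 5, 9, 11, 14] -> 1 + x^2 + x^5 + x^9 + x^11 + x^14
n = (2 * l - 1) * 31 // gcd(2 * l - 1, 31)    # lcm(9, 31)
print(n)                                      # 279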

Binary Reed Solomon Codes [1]


Certain families of codes, such as Reed-Solomon codes, operate on alphabet sizes larger than binary. This property gives such codes powerful burst error correction capabilities. Consider a code operating on $\mathbb{F}_{2^m}$. Each symbol of the alphabet can be represented by $m$ bits. If $C$ is an $[n,k]$ Reed-Solomon code over $\mathbb{F}_{2^m}$, we can think of $C$ as an $[mn,mk]_2$ code over $\mathbb{F}_{2}$.

The reason such codes are powerful for burst error correction is that each symbol is represented by $m$ bits, and in general, it is irrelevant how many of those $m$ bits are erroneous; whether a single bit, or all of the $m$ bits contain errors, from a decoding perspective it is still a single symbol error. In other words, since burst errors tend to occur in clusters, there is a strong possibility of several binary errors contributing to a single symbol error.

Notice that a burst of length $m+1$ can affect at most $2$ symbols, and a burst of length $2m+1$ can affect at most $3$ symbols. In general, a burst of length $tm+1$ can affect at most $t + 1$ symbols; this implies that a $t$-symbol-error correcting code can correct a burst of length at most $(t-1)m+1$.

In general, a $t$-error correcting Reed-Solomon code over $\mathbb{F}_{2^m}$ can correct any combination of $\left\lfloor \frac{t}{1+\lfloor (l+m-2)/m \rfloor} \right\rfloor$ or fewer bursts of length $l$, in addition to being able to correct $t$ random worst-case errors.

An example of a Binary RS Code

Let $G$ be a $[255,223,33]$ RS code over $\mathbb{F}_{2^8}$. This code was employed by NASA in their Cassini-Huygens spacecraft [12]. It is capable of correcting $\lfloor 33/2 \rfloor = 16$ symbol errors. We now construct a binary RS code $G'$ from $G$. Each symbol of $\mathbb{F}_{2^8}$ is written using $8$ bits. Therefore, the binary RS code has parameters $[2040,1784,33]_2$. It is capable of correcting any single burst of length $l = 121$.
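
A short sketch (illustrative only; the helper function simply evaluates the formula quoted earlier) reproduces these parameters:

n, k, d, m = 255, 223, 33, 8           # the RS code over GF(2^8) used in the example
t = (d - 1) // 2                       # 16 correctable symbol errors
print(m * n, m * k)                    # 2040 1784: parameters of the binary expansion
print((t - 1) * m + 1)                 # 121: longest single correctable burst, in bits

def correctable_bursts(t, m, l):
    # Number of bursts of length l correctable according to the formula above.
    return t // (1 + (l + m - 2) // m)

print(correctable_bursts(t, m, 121))   # 1 burst of length 121
print(correctable_bursts(t, m, 8))     # up to 8 bursts of length 8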

Applications


We now consider a classic application of burst error corrections, the compact disc.

Compact Disc

Without error correcting codes, digital audio would not be technically feasible.[1] The Reed Solomon codes (RS codes) can correct a corrupted symbol with a single bit error just as easily as it can a symbol with all its bits in error. This makes the RS codes particularly suitable for correcting burst errors. [5] By far, the most common application of RS codes is to compact discs. In addition to basic error correction provided by RS codes, protection against burst errors due to scratches on the disc is provided by a cross interleaver [4].

The current compact disc digital audio system was developed by N. V. Philips of the Netherlands and the Sony Corporation of Japan (agreement signed in 1979).

A compact disc comprises a 120 mm aluminized disc coated with clear plastic, with a spiral track, approximately 5 km in length, which is optically scanned by a laser of wavelength ~0.8 μm at a constant speed of ~1.25 m/s. To achieve this constant speed, the rotation of the disc is varied from ~8 rev/s while scanning the inner portion of the track to ~3.5 rev/s at the outer portion. Pits and lands are the depressions (0.12 μm deep) and flat segments constituting the binary data along the track (0.6 μm wide). [7]

The CD process can be abstracted as a sequence of three sub-processes: channel encoding of the source signals; the mechanical sub-processes of preparing a master disc, producing user discs, and sensing the signals embedded on user discs while playing (the channel); and decoding the signals sensed from user discs.

The process is subject to both burst errors and random errors.[1] Burst errors include those due to disc material (defects of aluminum reflecting film, poor reflective index of transparent disc material), disc production (faults during disc forming and disc cutting etc.), disc handling (scratches – generally thin, radial and orthogonal to direction of recording) and variations in play-back mechanism. Random errors include those due to jitter of reconstructed signal wave and interference in signal. CIRC (Cross-Interleaved Reed-Solomon code) is the basis for error detection and correction in the CD process. It corrects error bursts up to 3,500 bits in sequence (2.4 mm in length as seen on CD surface) and compensates for error bursts up to 12,000 bits (8.5 mm) that may be caused by minor scratches.

Encoding: Sound waves are sampled and converted to digital form by an A/D converter. The sound wave is sampled for amplitude at 44.1 kHz, i.e. 44,100 pairs of samples per second, one each for the left and right channels of the stereo sound. The amplitude at an instant is assigned a binary string of length 16. Thus, each sample produces two binary vectors from $\mathbb{F}_2^{16}$, or four $\mathbb{F}_2^{8}$ bytes of data. Every second of recorded sound results in 44,100 × 32 = 1,411,200 bits (176,400 bytes) of data.[7] The 1.41 Mbit/s sampled data stream passes through the error correction system, eventually becoming a stream of 1.88 Mbit/s.

Input for the encoder consists of frames of 24 8-bit symbols each (12 16-bit samples from the A/D converter, 6 each from the left and right channels). A frame can be represented by $L_1 R_1 L_2 R_2 \ldots L_6 R_6$, where $L_i$ and $R_i$ are the left and right samples from the $i^{th}$ sampling instant of the frame.

Initially, the bytes are permuted to form new frames represented by $L_1 L_3 L_5 R_1 R_3 R_5 \tilde{L}_2 \tilde{L}_4 \tilde{L}_6 \tilde{R}_2 \tilde{R}_4 \tilde{R}_6$, where $\tilde{L}_i, \tilde{R}_i$ represent the $i^{th}$ left and right samples from the frame after two intervening frames.

Next, these 24 message symbols are encoded using a $C_2$ (28,24,5) Reed-Solomon code, a shortened RS code over GF(256). This code is two-error-correcting, being of minimum distance 5. The encoding adds 4 bytes of redundancy, $P_1 P_2$, forming a new frame $L_1 L_3 L_5 R_1 R_3 R_5 P_1 P_2 \tilde{L}_2 \tilde{L}_4 \tilde{L}_6 \tilde{R}_2 \tilde{R}_4 \tilde{R}_6$. The resulting 28-symbol codeword is passed through a (28,4) cross interleaver, producing 28 interleaved symbols. These are then encoded with a $C_1$ (32,28,5) RS code, resulting in codewords of 32 output symbols. Odd-numbered symbols of a codeword are then regrouped with even-numbered symbols of the next codeword to break up any short bursts that may still be present after the above 4-frame delay interleaving. Thus, for every 24 input symbols there are 32 output symbols, giving $R = 24/32 = 3/4$. Finally, one byte of control and display information is added.[7] Each of the 33 bytes is then converted to 17 bits through EFM (eight-to-fourteen modulation) and the addition of 3 merge bits. Therefore, the frame of six samples results in 33 bytes × 17 bits (561 bits), to which are added 24 synchronization bits and 3 merging bits, yielding a total of 588 bits.

Decoding: The CD player (CIRC decoder) receives the 32-symbol output data stream, which first passes through the decoder $D_1$. It is up to individual designers of CD systems to decide on decoding methods and to optimize their product's performance. Being of minimum distance 5, the $D_1$ and $D_2$ decoders can each correct a combination of $e$ errors and $f$ erasures such that $2e+f<5$. [1] In most decoding solutions, $D_1$ is designed to correct a single error; in the case of more than one error, this decoder outputs 28 erasures. The deinterleaver at the succeeding stage distributes these erasures across 28 $D_2$ codewords. Again, in most solutions, $D_2$ is set to deal with erasures only (a simpler and less expensive solution). If more than 4 erasures are encountered, 24 erasures are output by $D_2$. Thereafter, an error concealment system attempts to interpolate uncorrectable symbols from neighboring symbols; failing that, the sounds corresponding to such erroneous symbols are muted.

Performance of CIRC [1]: CIRC conceals long burst errors by simple linear interpolation. About 2.5 mm of track length (~4,000 bits) is the maximum completely correctable burst length, and about 7.7 mm of track length (~12,300 bits) is the maximum burst length that can be interpolated. The sample interpolation rate is about one every 10 hours at a bit error rate (BER) of $10^{-4}$ and 1,000 samples per minute at BER = $10^{-3}$. Undetectable error samples (clicks) occur less than once every 750 hours at BER = $10^{-3}$ and are negligible at BER = $10^{-4}$.

Interleaved Codes


Interleaving is used to convert codes designed for random error correction (for example convolutional codes) into codes suitable for burst error correction. The basic idea behind the use of interleaved codes is to jumble the order in which symbols are sent over the channel. Bursts of received errors, which are closely located, are thereby spread out, and we can then apply the analysis for a random-error channel. Thus, the main function performed by the interleaver at the transmitter is to alter the input symbol sequence. At the receiver, the deinterleaver alters the received sequence to recover the original ordering used at the transmitter.

Burst Error Correcting Capacity of Interleaver


Theorem If the burst error correcting ability of some code is $l$, then the burst error correcting ability of its $\lambda$-way interleave is $\lambda l$.

Proof Suppose that we have an $(n, k)$ code that can correct all bursts of length $\leq l$. Interleaving can provide us with a $(\lambda n, \lambda k)$ code that can correct all bursts of length $\leq \lambda l$, for any given $\lambda$. If we want to encode a message of an arbitrary length using interleaving, first we divide it into blocks of length $\lambda k$. We write the $\lambda k$ entries of each block into a $\lambda \times k$ matrix using row-major order. Then, we encode each row using the $(n, k)$ code, obtaining a $\lambda \times n$ matrix. This matrix is read out and transmitted in column-major order. The trick is that if a burst of length $h$ occurs in the transmitted word, then each row will contain approximately $\frac{h}{\lambda}$ consecutive errors (more precisely, each row will contain a burst of length at least $\lfloor\frac{h}{\lambda}\rfloor$ and at most $\lceil\frac{h}{\lambda}\rceil$). If $h \leq \lambda l$, then $\lceil\frac{h}{\lambda}\rceil \leq l$ and the $(n, k)$ code can correct each row. Therefore, the interleaved $(\lambda n, \lambda k)$ code can correct the burst of length $h$. Conversely, if $h > \lambda l$, then at least one row will contain a burst of more than $l$ consecutive errors, and the $(n, k)$ code might fail to correct it. Therefore, the error correcting ability of the interleaved $(\lambda n, \lambda k)$ code is exactly $\lambda l$. The burst error correcting efficiency of the interleaved code remains the same as that of the original $(n, k)$ code. This is true because the correctable burst length and the redundancy scale by the same factor: $\frac{2\lambda l}{\lambda n - \lambda k} = \frac{2l}{n-k}$.
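
The row-major write, column-major read construction used in the proof is easy to demonstrate; the following Python sketch (a toy illustration with made-up parameters $\lambda = 3$, $n = 7$) shows a burst of length 6 in the channel being split into bursts of length at most $\lceil 6/3 \rceil = 2$ in each row.

def interleave(symbols, lam, n):
    # Write lam rows of length n in row-major order, then read out column-major.
    assert len(symbols) == lam * n
    rows = [symbols[r * n:(r + 1) * n] for r in range(lam)]
    return [rows[r][c] for c in range(n) for r in range(lam)]

def deinterleave(symbols, lam, n):
    # Inverse operation: write column-major, read row-major.
    rows = [[None] * n for _ in range(lam)]
    it = iter(symbols)
    for c in range(n):
        for r in range(lam):
            rows[r][c] = next(it)
    return [x for row in rows for x in row]

lam, n = 3, 7                      # made-up toy parameters: 3-way interleave, row length 7
data = list(range(lam * n))        # stands in for three already-encoded rows
tx = interleave(data, lam, n)
rx = tx[:5] + ['X'] * 6 + tx[11:]  # a channel burst of length 6 hits positions 5..10
rows_after = [deinterleave(rx, lam, n)[r * n:(r + 1) * n] for r in range(lam)]
for r, row in enumerate(rows_after):
    print(r, row)                  # each row contains a burst of length at most ceil(6/3) = 2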

Block Interleaver


(Figure: a 4 × 3 block interleaver.)

The interleaver shown above is called a block interleaver: the input symbols are written sequentially into the rows and the output symbols are obtained by reading the columns sequentially. Thus, the symbols are arranged in an $M \times N$ array, where $N$ is generally the length of the codeword.

Capacity of block interleaver: For an $M \times N$ block interleaver and a burst of length $l$, the upper limit on the number of errors in any row is $\lceil l/M \rceil$, since the output is read column-wise and the number of rows is $M$. By the theorem on the burst error correcting capacity of interleavers stated above, for an error correction capability of up to $t$ errors per row, the maximum burst length allowed is $Mt$; for a burst of length $Mt+1$, the decoder may fail.

Efficiency of block interleaver: It is found by taking the ratio of the burst length at which the decoder may fail to the interleaver memory. Thus, we can formulate it as $\frac{Mt+1}{MN}$.

Drawbacks of block interleaver: Since the columns are read out sequentially, the receiver can interpret a single row only after it has received the complete message. The receiver also requires a considerable amount of memory, since it has to store the complete message. Thus, these factors give rise to two drawbacks: latency and storage (a fairly large amount of memory). These drawbacks can be mitigated by using the convolutional interleaver described below.


Convolutional interleaver OR Cross interleaver


A cross interleaver is a kind of multiplexer-demultiplexer system. In this system, delay lines of progressively increasing length are used. A delay line is basically an electronic circuit used to delay a signal by a certain time duration. Let $n$ be the number of delay lines and $d$ be the number of symbols of delay introduced by each successive delay line. Then the separation between consecutive symbols of the same input codeword is $nd$ symbols. Let the length of the codeword be $\le n$, so that each symbol of an input codeword is placed on a distinct delay line. Now suppose a burst error of length $l$ occurs. Since the separation between consecutive symbols of a codeword is $nd$, the number of errors that any single codeword in the deinterleaved output may contain is limited accordingly. By the theorem on the burst error correcting capacity of interleavers stated above, for an error correction capability of up to $t$, the maximum burst length allowed is $(nd+1)(t-1)$; for a burst of length $(nd+1)(t-1)+1$, the decoder may fail.

(Figures: a cross interleaver and the corresponding deinterleaver.)

Efficiency of cross interleaver: It is found by taking the ratio of the burst length at which the decoder may fail to the interleaver memory. In this case, the memory of the interleaver is $(0 + 1 + 2 + \ldots + (n-1))d = \frac{n(n-1)d}{2}$, so the efficiency can be formulated as $\frac{(nd+1)(t-1)+1}{n(n-1)d/2}$.
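
A minimal structural sketch of a convolutional interleaver and deinterleaver (illustrative only; the branch delays and parameters are made up) is given below; the cascade of the two acts as a pure delay of $(n-1)d$ frames, which the final assertion checks.

from collections import deque

def make_branches(n, d, reverse=False):
    # One FIFO per branch. Branch i delays its symbols by i*d frames (interleaver)
    # or (n-1-i)*d frames (deinterleaver); FIFOs are pre-filled with None placeholders.
    delays = [((n - 1 - i) if reverse else i) * d for i in range(n)]
    return [deque([None] * delay) for delay in delays]

def process_frame(branches, frame):
    # Push one frame of n symbols (one per branch) and pop n delayed symbols.
    out = []
    for branch, symbol in zip(branches, frame):
        branch.append(symbol)
        out.append(branch.popleft())
    return out

n, d = 4, 1                                       # made-up toy parameters
interleaver = make_branches(n, d)
deinterleaver = make_branches(n, d, reverse=True)

stream = list(range(40))                          # 10 frames of 4 symbols
channel = []
for i in range(0, len(stream), n):
    channel.extend(process_frame(interleaver, stream[i:i + n]))
received = []
for i in range(0, len(channel), n):
    received.extend(process_frame(deinterleaver, channel[i:i + n]))

delay = (n - 1) * d * n                           # total delay: (n-1)*d frames of n symbols
assert received[delay:] == stream[:len(received) - delay]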

Performance of cross interleaver: As shown in the interleaver figure above, the output is simply the diagonal of symbols generated at the end of each delay line. In this case, once the input multiplexer switch has completed about half of its cycle, we can read the first row at the receiver. Thus, we need to store only about half of the message at the receiver in order to read the first row, which roughly halves the storage requirement. Since just half the message is now required to read the first row, the latency is also roughly halved, a good improvement over the block interleaver. Thus, the total interleaver memory is split between the transmitter and the receiver.


References


[1] Algebraic Error Control Codes (Autumn 2012), handouts from Stanford University.
[2] McEliece, R. J. The Theory of Information and Coding: Student Edition.
[3] Moon, Todd K. Error Correction Coding: Mathematical Methods and Algorithms. Hoboken, NJ: Wiley-Interscience, 2005. Print.
[4] Ling, San, and Chaoping Xing. Coding Theory: A First Course. Cambridge, UK: Cambridge UP, 2004. Print.
[5] Lin, Shu, and Daniel J. Costello. Error Control Coding: Fundamentals and Applications. Upper Saddle River, NJ: Pearson-Prentice Hall, 2004. Print.
[6] Huffman, William Cary, and Vera Pless. Fundamentals of Error-Correcting Codes. Cambridge, UK: Cambridge UP, 2003. Print.
[7] McEliece, Robert J. The Theory of Information and Coding: A Mathematical Framework for Communication. Reading, MA: Addison-Wesley, Advanced Book Program, 1977. Print.
[8] Coding Bounds for Multiple Phased-Burst Correction and Single Burst Correction Codes.
[9] Sylvester, Joel. Reed Solomon Codes.
[10] Immink, K. A. S. "Reed–Solomon Codes and the Compact Disc," in S. B. Wicker and V. K. Bhargava, eds., Reed–Solomon Codes and Their Applications.


[1] Error Control Codes used in CD
[2] http://en.wikipedia.org/wiki/Error_detection_and_correction
[3] http://en.wikipedia.org/wiki/Burst_error
[4] http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=613362&userType=inst
[5] http://webcache.googleusercontent.com/search?q=cache:http://quest.arc.nasa.gov/saturn/qa/cassini/Error_correction.txt