Talk:Forward error correction
|Forward error correction has been listed as a level-4 vital article in Technology. If you can improve it, please do. This article has been rated as Start-Class.|
|WikiProject Telecommunications||(Rated Start-class)|
|Text from this version of interleaving was copied or moved into forward error correction with this edit. The former page's history now serves to provide attribution for that content in the latter page, and it must not be deleted so long as the latter page exists. The former page's talk page can be accessed at Talk:Interleaving.|
- 1 Software patents
- 2 Weird
- 3 Incomplete text
- 4 What is forward?
- 5 Merge?
- 6 Retransmission !?
- 7 Most recent development?
- 8 Relevance of reference
- 9 Dubious
- 10 MLC NAND
- 11 FEQ vs. FEC
- 12 Simplified Layman's terms explanation
- 13 Rewrite
- 14 2/3 FEC and 3/4 FEC
- 15 list of error-correcting codes
- 16 Codeword confusion
Software patents
I have heard that some methods for FEC are covered by software patents. Does anyone know if this is (still) the case? If so, it should probably be mentioned in the article. Haakon 13:06, 17 Jan 2005 (UTC)
Weird
From the article:
The original information may or may not appear in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are nonsystematic.
Ehm, if the original information did not appear in the encoded output, then what sense should such an encoded output make? We couldn't reconstruct the original information back from the encoded output... or do you mean "the original information may not appear in plain text in the encoded output"?
- Yes, exactly -- the original sequence of bits always occurs in the output of a systematic encoder, but the original sequence of bits almost never occurs in the output of a nonsystematic decoder (although, of course, the information can still be extracted). Please improve the article to make this easier to understand. --184.108.40.206 07:31, 16 July 2007 (UTC)
- Think you mean that the original sequence of bits almost never occurs in the output of a nonsystematic _encoder_. The job of the decoder is to extract the original information, perhaps correct it, and flag whether (uncorrectable) errors have occurred. — Preceding unsigned comment added by 220.127.116.11 (talk) 21:58, 14 September 2011 (UTC)
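To make the systematic/nonsystematic distinction above concrete, here is a toy Python sketch (my own illustration; the "running XOR" transform is not a real error-correcting code, it is only meant to show where the message bits end up in the output):

```python
def systematic_encode(bits):
    """Systematic: the message appears verbatim, followed by a parity bit."""
    return bits + [sum(bits) % 2]

def nonsystematic_encode(bits):
    """Nonsystematic-style transform: transmit running XOR sums, so no
    individual message bit appears directly in the output."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def nonsystematic_decode(code):
    """Invert the running XOR to recover the original bits."""
    return [code[0]] + [code[i] ^ code[i - 1] for i in range(1, len(code))]

msg = [1, 0, 1, 1]
assert systematic_encode(msg)[:4] == msg                       # message visible
assert nonsystematic_encode(msg) != msg                        # message hidden
assert nonsystematic_decode(nonsystematic_encode(msg)) == msg  # but recoverable
```

In both cases the information is fully recoverable; the only difference is whether the raw input bits are directly visible in the transmitted sequence.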
Incomplete text
From the article: Nearly all block codes apply the algebraic properties of. The algebraic properties of what?
[Turbo Codes] perform to within a fraction of a decibel of the Shannon limit. Does that mean it is even below the Shannon limit ("just a fraction of the Shannon limit"), or very close to it, but still above it?
- -no, it approaches the Shannon limit but can not surpass it (very close to it, but still above it) 16:28, 7 January 2006 (UTC)
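For reference, "a fraction of a decibel" above the limit can be made quantitative. The standard textbook result for the real AWGN channel is that capacity per dimension is C = ½·log₂(1 + 2·r·Eb/N0); setting r = C and solving gives the minimum Eb/N0 at which a rate-r code can possibly be error-free. A quick check (a few lines of my own, using that formula):

```python
import math

def min_ebno_db(r):
    """Minimum Eb/N0 in dB for error-free rate-r coding on the real AWGN
    channel, solving r = 0.5 * log2(1 + 2*r*Eb/N0) for Eb/N0."""
    return 10 * math.log10((2 ** (2 * r) - 1) / (2 * r))

assert min_ebno_db(0.5) == 0.0                  # rate-1/2 limit: 0 dB
assert abs(min_ebno_db(1e-6) - (-1.59)) < 0.01  # r -> 0: about -1.59 dB
```

A practical turbo code at rate 1/2 operating within, say, 0.5 dB of the limit would thus need Eb/N0 of roughly 0.5 dB: close to, but necessarily above, 0 dB.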
What is forward?
What is so "forward" on the "forward error correction" codes?
Thanks for your help, --Abdull 05:29, 16 October 2005 (UTC)
- In this context, forward refers to when you can start performing the error correction: you can correct bits "going forward from here". Some error correction algorithms require the entire message to be received before the ECC algorithm can be applied to it. FEC algorithms generally allow you to fix errors as the message comes in, bit by bit, rather than analysing the entire message. A general ECC algorithm may require looking at "future" bits, yet to be received, as well as "past" bits, to correct the current bit. The Viterbi algorithm is famous in that it took a particularly strong form of ECC, and got rid of the part that depended on future, as-yet-unreceived bits. linas 17:40, 26 May 2006 (UTC)
I've heard an entirely different definition of what "forward" means. I've been told that "forward" refers to the direction of transmission: "forward error correction" techniques allow the receiver to correct errors, even though all communication goes in the "forward" direction from the transmitter to the receiver.
The only other kind of error correction technique, "repeat request" (ARQ), requires 2-way communication, both the "forward" and "reverse" directions.
Certainly the Viterbi algorithm is "forward" under both these definitions. I would call Cross-interleaved Reed-Solomon coding a kind of "forward error correction", even though it requires analyzing an entire 28 byte block (or is it 784 bytes?) before fixing an error in the first bit. If you want a narrower definition of FEC that excludes Reed-Solomon coding, then what would you prefer to call the wider group of error correction codes that includes Reed-Solomon codes? --18.104.22.168 22:08, 23 May 2007 (UTC)
Well, I disagree with the above explanation for forward ( you can correct bits "going forward from here" ). Using this explanation, turbo codes have nothing to do with FEC, but they are discussed in this article.
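Whichever definition is right, the "correct as the bits arrive" sense is at least easy to illustrate. With a (3,1) repetition code, each message bit can be majority-decoded as soon as its three copies are in, without waiting for the rest of the stream (a toy of my own, not any deployed scheme):

```python
def encode(bits):
    """(3,1) repetition code: send every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode_stream(received):
    """Yield each corrected bit as soon as its triple is complete;
    no 'future' bits are ever needed."""
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        yield 1 if sum(triple) >= 2 else 0

tx = encode([1, 0, 1])                # [1,1,1, 0,0,0, 1,1,1]
rx = tx[:]
rx[1] ^= 1                            # one channel error in the first triple
rx[5] ^= 1                            # and one in the second
assert list(decode_stream(rx)) == [1, 0, 1]
```

By contrast, a decoder for an interleaved or block code genuinely has to buffer a whole block before any output bit is final, which is the tension the comments above are circling.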
Merge?
- I vote Do not merge - the article touches upon Forward Error Correction and its applications, and can be expanded considerably. Rather than merging into a general Error detection and correction article, I believe it is correct to split this topic out. When treated properly, this article alone will generate several sub-articles. WLD 20:26, 5 February 2006 (UTC)
- I agree: Do not merge, but clean this article up so that it is more clear about the difference between forward error correction and error correction. Mangojuice 19:13, 3 March 2006 (UTC)
- Do not merge. FEC is only one type of ECC; there are many types of ECC. linas 17:34, 26 May 2006 (UTC)
Retransmission !?
I reverted the recent change about "retransmission"; the concept of "retransmission" is a very different concept, and it only applies to the retransmission of entire messages or packets. It's at a whole different level in the communications process; it's rather totally unrelated. By contrast, in FEC, there's usually no real "packet" or message boundaries, there's just a stream of incoming bits.
The important distinction here is between forward correction, and plain-old bi-directional correction, such as CRC. In CRC, you need to have *all* of the bits of the packet received, so that you can compute the CRC of the whole packet. The difference between forward correction and plain-old correction is that in forward correction, it is only the bits in the past (the bits to the "left" of "here") that are used to correct the "current" bit. In plain-old or "bi-directional" correction, bits both to the left and to the right are used to come up with the correct message. linas 18:44, 26 May 2006 (UTC)
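The "need all the bits" property of a CRC is easy to demonstrate. Below is a plain CRC-8 with polynomial 0x07, zero initial value and no final XOR (one common variant, picked only for illustration); note that the check value depends on every byte, so nothing can be verified or corrected until the whole packet has arrived:

```python
def crc8(data, poly=0x07):
    """Bitwise CRC-8, MSB first, zero initial value, no final XOR."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

packet = b"hello FEC"
tagged = packet + bytes([crc8(packet)])
assert crc8(tagged) == 0                  # packet plus its CRC leaves remainder 0
corrupted = bytes([tagged[0] ^ 0x01]) + tagged[1:]
assert crc8(corrupted) != 0               # any single-bit error is detected
```

Detection of the error is all a CRC gives you; recovering from it then requires either redundancy elsewhere (FEC) or a retransmission request (ARQ).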
- I re-wrote again. Perhaps we can work together to improve this article? We need to be clear about the distinction between error-detecting codes (such as CRC), error-detecting & correcting codes (such as Hamming), and Forward Error Correction codes. The 'simple' example of a FEC needs improving as well, as the example is actually a Hamming code (I believe) - my fault, as I put it there - if you have a simple example of a FEC, then by all means put it in. WLD 19:32, 26 May 2006 (UTC)
- Oh - and it's worth pointing out that FECs remove the need for retransmissions - which is especially important in deep-space missions, and broadcast applications, where re-transmissions are infeasible. Is there an example of FECs being used in VoIP, I wonder - it should be a good area to use them in, so long as the latency of the FEC process is not too high. On another note, you used terms like 'more effective', 'fast' & 'lightweight', which need qualifying - more effective - how? fast - in what way? lightweight - in what way? Regards, WLD 19:37, 26 May 2006 (UTC)
- I see one proposal that seems to simply blindly send each packet of VOIP data twice, hoping that at least one will make it through. The nice thing about this idea is that it looks like it is backwards-compatible with systems that don't use it -- standard UDP will ignore duplicate packets; and if the first one doesn't make it, UDP will accept the second one. This "forward error correction" article needs to distinguish that sort of "blindly send" thing from the "retransmission" used in an ARQ system.
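The send-twice idea is easy to sanity-check in simulation: it is just a repetition code applied at the packet level, turning an independent per-packet loss rate p into p². The channel model and numbers below are made up purely for illustration:

```python
import random

def send_twice(packets, loss_rate, rng):
    """Deliver a packet if at least one of its two copies survives
    an independent-loss channel."""
    return [p for p in packets
            if rng.random() > loss_rate or rng.random() > loss_rate]

rng = random.Random(42)
pkts = list(range(10000))
once = [p for p in pkts if rng.random() > 0.1]   # sent once: ~10% loss
twice = send_twice(pkts, 0.1, rng)               # sent twice: ~1% loss
assert len(twice) > len(once)
assert len(twice) >= 9800                        # expect roughly 9900 delivered
```

Of course this doubles the bandwidth for a fixed payload, which is exactly the code-rate trade-off the article should be explaining.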
- To repeat Talk:Forward_error_correction#What_is_forward.3F, my understanding is that "bi-directional correction, such as CRC" is a synonym for ARQ. The error detection and correction article explicitly states that there are only 2 kinds of error correction, "automatic repeat-request" and "error-correcting code". Since "error-correcting code" currently redirects to "forward error correction", if one is a subset of, or "distinct from", the other, this article needs to make that clear. As far as I can tell from the "error detection and correction" article and the "forward error correction" article, "forward error correction", "error-correcting code", and "systems that can correct errors without ever sending feedback back to the source" are all synonyms. --22.214.171.124 (talk) 19:00, 13 December 2007 (UTC)
Most recent development?
The article says:
The most recent (early 1990s) development in error correction is turbo coding,
Have there been any other major developments since that, or are there likely to be any more in the future?
--Timmywimmy 02:59, 13 July 2006 (UTC)
You can refer to http://www.ietf.org/html.charters/rmt-charter.html --Sireesh 04:41, 19 December 2006 (UTC)
Relevance of reference
The edit dated 29 July 2006 added the following citation:
- "Forget Sudoku and smile for the camera". 2006. Retrieved 5 December 2006.
While I find this article very interesting, I don't see a clear connection to FORWARD error correction. Am I missing something? Ged Davies 17:37, 28 February 2007 (UTC)
I agree -- this particular article doesn't say anything about errors, much less error correction. So leave it out. (Or move it to the phase problem article).
However, since Sudoku is much more widely known than error correction codes, Sudoku might make a good analogy for a "layman's introduction" Talk:Forward_error_correction#Simplified_Layman.27s_terms_explanation.
The Turbo_code#Solving_hypotheses_to_find_bits section says that correcting errors in interleaved turbo codes is analogous to solving Sudoku. The Latin square article claims that "Latin squares ... have applications in ... in error correcting codes. ... The popular Sudoku puzzles are a special case of Latin squares". Is it merely alluding to the previous interleaved turbo code, or is there a more general application of Latin squares to error correcting codes -- not merely an analogy? --126.96.36.199 (talk) 19:00, 13 December 2007 (UTC)
MLC NAND
The article currently claims
- "More dense multi level cell (MLC) NAND requires stronger multi-bit correcting ECC such as BCH or Reed-Solomon"
I think this is incorrect. If I read "Choosing flash memory" article correctly, it seems to say that SLC may be reliable enough to not need any error correction (in some applications). In contrast, MLC has a much higher error rate. "However, this is easily compensated for by using error detection and correction codes (EDC). System designers have long been aware of the benefits of using EDC to detect and correct errors in systems using Hamming codes (common in memory subsystems) and Reed Solomon codes (common in hard drives and CD-ROMs)."
As far as I can tell (original research), single-bit-correcting Hamming codes should work fine with MLC flash, as long as the multiple bits stored in a single cell are spread across *different* Hamming code words. --188.8.131.52 22:08, 23 May 2007 (UTC)
While it is true that SLC NAND has a generally lower bit error rate than MLC NAND, both technologies' bit error rates are dramatically affected by how the data is accessed. If the point here is to give some examples of several types of FEC, and their applications, perhaps this paragraph could be made more general - to indicate that Hamming code is suitable for relatively low bit error rate channels, while other schemes such as Reed-Solomon are better suited to higher bit error rate channels. I guess what I am trying to suggest is that NAND memory technology is changing rapidly - potentially invalidating this paragraph. Just a thought... Tpfinkbeiner (talk) 18:23, 13 February 2011 (UTC)
- Just added 2 refs (eeasia and a note by one of the flash memory suppliers). They say that SLC often uses Hamming, and MLC often uses RS or BCH. Maybe we should rewrite the phrase as: "Denser multi-level cell (MLC) NAND is often used with stronger multi-bit correcting ECC such as BCH or Reed-Solomon"
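On the earlier point about spreading an MLC cell's bits across different code words: a toy Hamming(7,4) round trip shows why. One flipped bit per code word is corrected, but two flips in the same code word (e.g. both bits of one upset cell) are silently miscorrected. This is the textbook code; the layout with parity bits at positions 1, 2 and 4 is one common convention:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming code word (parity at 1,2,4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one bit error; the syndrome is the 1-indexed error
    position (0 means no error detected)."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    c = c[:]
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = hamming74_encode(data)
cw[3] ^= 1                                   # one flipped bit: corrected
assert hamming74_decode(cw) == data

cw2 = hamming74_encode(data)
cw2[2] ^= 1
cw2[4] ^= 1                                  # two flips in one code word:
assert hamming74_decode(cw2) != data         # silently miscorrected
```

So if the two (or more) bits of each MLC cell are mapped into different code words, a whole-cell upset still costs each code word only one bit, and single-bit-correcting Hamming suffices; otherwise a stronger multi-bit code like BCH or Reed-Solomon is needed.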
FEQ vs. FEC
- FEQ isn't an abbreviation that matches anything to FEC that I know of or can find a reference to. I just proposed FEQ for deletion. Danoelke (talk) 15:46, 27 February 2008 (UTC)
Simplified Layman's terms explanation
This article needs a simple layman's terms explanation. I wrote the following based on how I understand error correction to work, but I'm far from an expert. So I'm gonna post my proposed article text here and let someone else post it once an expert confirms the validity.
Error correction works sort of like a game of sudoku. If a single number is missing from any random location on a nearly complete sudoku board, then any one of the relational number rules of sudoku provides all the extra information you need to correctly fill that particular missing number box back in with its intended value. However, as more and more numbers are missing, you need to know more and more different sudoku rules in order to recreate the missing numbers.
So the original data to be kept corrected of errors is like the numbers you start with on an incomplete sudoku board, and the additional error correction parity files you use to correct errors in your data back to the original set are like the rules you use on a sudoku board to fill in the missing number squares back to their intended values.
If too many squares on your sudoku board are missing (i.e. there are too many errors in your data) then the rules of sudoku don't give you enough information to fill in the missing numbers (i.e. the parity files don't contain enough extra information to repair the errors in your data). But if you knew additional sudoku rules (i.e. had more parity files) then you could recreate the correct set of numbers on your sudoku board (i.e. you could recreate the correct data in the portions which contain errors).
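The "parity files" part of the analogy can be shown in miniature with plain XOR parity, the same trick RAID-style erasure recovery uses: one extra parity block lets you rebuild any single missing data block (toy code of my own, not from any FEC library):

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"spam", b"eggs", b"chip"]
parity = xor_blocks(data)                    # the "parity file"
# Lose data[1]; XOR of the survivors and the parity rebuilds it:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == b"eggs"
```

Lose two blocks, though, and one XOR parity is no longer enough - just as the analogy says: more missing squares need more rules, i.e. more parity.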
Rewrite
This article does poor justice to the fantastic depth and quality of the subject. Take a look at how good the Information theory article reads. -Muthu —Preceding unsigned comment added by 184.108.40.206 (talk) 19:20, 30 September 2007 (UTC)
Would you be so kind as to point out a few ways the article could be improved? Is something missing? Please add it. Unnecessarily complicated-sounding? Or ... something else? --220.127.116.11 (talk) 19:00, 13 December 2007 (UTC)
I agree that this article needs (almost) a complete rewrite. Missing:
- Reference to early theory on BCH codes, repetition codes
- Introduction to Shannon's work and channel coding theorem, beginning of Information theory
- Hamming codes
- Principle of sphere-packing in codes, Hamming distance, minimum distance code, properties regarding error correction and detection
- Various useful/popular bounds, such as Hamming bound, Singleton bound, Gilbert–Varshamov bound, etc.
- Problem of maximum likelihood decoding
- Introduction to theory of linear codes
- Classes of codes, and their subset relation: e.g., block codes -> linear codes -> cyclic codes -> BCH codes -> RS codes
I'm sure I'm still missing a lot, but this came to my mind right away... Anyway, I'm sure I'll not be able to tackle this (rather big) task anytime soon, but maybe someone else will give it a shot.
2/3 FEC and 3/4 FEC
list of error-correcting codes
While I think this edit improved both articles, I think Wikipedia:WikiProject Laundromat and Wikipedia:Embedded list seem to discourage such lists, and I'm thinking about deleting that list entirely.
Is there any good reason to manually maintain that list in this article, instead of using the automatically-updated category mechanism -- similar to Category:Checksum algorithms? Is the list at Category:Error detection and correction adequate for our readers, or should we use a more specific category for that list? --18.104.22.168 (talk) 16:45, 25 January 2010 (UTC)
- The list was moved here temporarily, as the article needs a rewrite anyway. However, do not just delete the list, but rather move it to a list article like "List of forward error correction codes". Also, an appropriate category would be "Forward error correction codes", and be defined as a sub-category of "Forward error correction", which itself is a sub-category of Category:Error detection and correction. Category "Forward error correction" would include all the theoretical background articles on the topic. I would be glad if you could do these changes, and provide a link to the list article in the "See also" section. Nageh (talk) 22:23, 25 January 2010 (UTC) 12:53, 26 January 2010 (UTC)
- Also, all the articles referenced in the list should be moved to the new category from Category:Error detection and correction. Nageh (talk) 22:26, 25 January 2010 (UTC)
Codeword confusion
Let's get this straight.
One interleaving example cites "aaaa, eeee, ffff, gggg" as codewords. i.e. a codeword is represented as four consecutive letters.
Another example cites "cddd" as a codeword. i.e. the codeword is inexplicably made of different letters. Apparently this is "correct".
Since "codeword" seems to be randomly defined, there needs to be an explanation of these examples, otherwise it makes no sense.
What's the significance of groups of four letters if you can just make up whatever boundaries you like? If the letter boundaries aren't codeword boundaries, then what are they?
- Yeah, I read it that way too. I've changed it again, with elaboration in the text. —Quondum 22:05, 7 May 2014 (UTC)
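For what it's worth, the column-wise transmission that the "aaaa, eeee, ffff, gggg" example describes can be checked mechanically: each four-letter codeword occupies one row, the channel sends columns, and a burst of four channel errors then lands on four *different* codewords, one symbol each (a toy sketch of my own):

```python
def interleave(codewords):
    """Transmit column-wise: first symbol of each codeword, then second, ..."""
    return "".join("".join(col) for col in zip(*codewords))

def deinterleave(stream, n):
    """Reassemble n codewords from the column-wise stream."""
    return ["".join(stream[i::n]) for i in range(n)]

cws = ["aaaa", "eeee", "ffff", "gggg"]
tx = interleave(cws)
assert tx == "aefgaefgaefgaefg"

rx = "____" + tx[4:]                  # a burst wipes out 4 consecutive symbols
assert deinterleave(rx, 4) == ["_aaa", "_eee", "_fff", "_ggg"]
```

After deinterleaving, every codeword has exactly one damaged symbol, which a single-error-correcting code can repair; without interleaving the same burst would have destroyed one codeword entirely. Under this reading, codeword boundaries are fixed (four symbols per codeword), and strings like "cddd" can only be received, post-burst fragments, not codewords.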