Talk:Run-length limited

From Wikipedia, the free encyclopedia
WikiProject Computing (Rated Start-class)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
Start-Class article Start  This article has been rated as Start-Class on the project's quality scale.
This article has not yet received a rating on the project's importance scale.

Are bytes encoded individually or as a stream?

The question is unclear. BITS are encoded as a stream.


This article is silly. All magnetic media recording codes are RLL codes. If it imposes a limit on the run length, it's RLL! For example, FM is a rate-1/2 (0,1) RLL code and MFM is a rate-1/2 (1,3) RLL code. Group Code Recording is, as the article says, a rate-4/5 (0,2) RLL code. Other popular ones are the rate-1/2 (2,7) RLL code (the original "RLL") and the rate-2/3 (1,7) RLL code.
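To make the codes named above concrete, here is a rough Python sketch (my own illustration, not from the article) of the FM and MFM encoding rules, together with a check that their outputs obey the (0,1) and (1,3) run-length constraints respectively:

```python
def fm_encode(bits):
    # FM: every data bit is preceded by a clock 1, so the output
    # is twice as long as the input (rate 1/2)
    out = []
    for b in bits:
        out += [1, b]
    return out

def mfm_encode(bits):
    # MFM: a clock 1 is inserted only between two consecutive 0
    # data bits; also rate 1/2, but with longer minimum zero runs
    out = []
    prev = 1  # assume the preceding data bit was 1
    for b in bits:
        out += [1 if (prev == 0 and b == 0) else 0, b]
        prev = b
    return out

def zero_runs(code):
    # lengths of the 0-runs between consecutive 1 bits
    s = "".join(map(str, code)).strip("0")
    return [len(run) for run in s.split("1")[1:-1]]

data = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1]
print(max(zero_runs(fm_encode(data))))            # at most 1: (0,1)
print(min(zero_runs(mfm_encode(data))),
      max(zero_runs(mfm_encode(data))))           # within 1..3: (1,3)
```

The same kind of check applies to any (d,k) code: every run of zeros between ones in the encoded stream must have length between d and k.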

Then folks developed PRML techniques that made (0,k) RLL codes more feasible, and that's what everyone uses on hard drives these days.

A lot of these techniques were developed at IBM, who were always pushing storage capacity limits in their mainframe heyday.

The other thing that needs to be mentioned is that all such codes assume NRZI encoding afterwards, so a 1 bit is a transition and a 0 bit is no transition. The minimum spacing between transitions (d+1 code bits in a (d,k) code) is limited by the high-frequency response of the channel, while the maximum spacing (k+1 code bits in a (d,k) RLL code) is limited by clock-recovery jitter. (Even if the electronics have zero jitter, there is some in the source signal.)
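The NRZI step can be sketched in a few lines (a hypothetical helper of my own, not from any source): a 1 code bit toggles the recorded level, a 0 holds it.

```python
def nrzi(code_bits, level=0):
    # NRZI precoding: a 1 code bit flips the magnetization level
    # (a flux transition); a 0 code bit leaves it unchanged
    out = []
    for b in code_bits:
        if b:
            level ^= 1  # transition
        out.append(level)
    return out

print(nrzi([1, 0, 0, 1]))  # [1, 1, 1, 0]
```

Feeding a (d,k)-constrained stream through this is exactly why transitions end up at least d+1 and at most k+1 bit cells apart.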


First we have this statement: Run length limited codes were widely used in hard disk drives until the mid-1980's

Then this statement: Early disk drives used very simple encoding schemes, such as RLL (0,1) FM code, but higher density RLL (2,7) and RLL (1,7) codes became the de facto industry standard for hard disks by the early 1990s.

The first statement suggests that RLL was not widely used in hard drives after the mid-1980s, whereas the second suggests the codes have been the de facto standard since the early 1990s.

How can both of these statements be true?

John Elson (talk) 16:57, 15 March 2011 (UTC)

Because this is wikipedia! In all seriousness, this article is a lot of words with not much use. And no, I won't improve it, because I'm not an expert; if I were, I would not have come here to try to find out what RLL _IS_, which I walk away still not knowing. — Preceding unsigned comment added by (talk) 23:59, 29 July 2011 (UTC)

Math error

The article says: "a speed variation of even 0.01% - which is way better than what e.g. a floppy drive can possibly guarantee - could result in four bits being added to or removed from the 4,096 bit data stream." The math is wrong: 4 bits out of 4,096 is 0.1%. I don't know how 0.1% compares to what a floppy drive can possibly guarantee, so I don't know how to fix this sentence.

Armin Rigo (talk) 22:00, 3 May 2012 (UTC)

In the specification for the Teac FD235HF floppy drive, the Long Term Speed Variation is given as +/-1.5% and the Instantaneous Speed Variation as +/-2%, so 0.1% is far below either. I've amended the percentage to 0.1%, which is correct for 4 bits in 4,096.

Kletzmer (talk) 17:57, 6 April 2013 (UTC)
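For reference, the arithmetic both comments rely on can be checked directly:

```python
# 4 bits out of a 4,096-bit stream is just under 0.1%,
# as the corrected article text now states.
stream_bits = 4096
drift_bits = 4
pct = 100 * drift_bits / stream_bits
print(pct)  # 0.09765625
```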