Bad sector

From Wikipedia, the free encyclopedia
This article is about a hard drive fault. For the ambient/noise music project, see Bad Sector. For the Linux utility, see Badblocks.

A bad sector is a sector on a computer's disk drive or flash memory that is either inaccessible or unwritable due to permanent damage, such as physical damage to the disk surface or failed flash memory transistors. Bad sectors are usually detected by disk utility software such as CHKDSK or SCANDISK on Microsoft systems, or badblocks on Unix-like systems. When found, these programs may mark the sectors unusable (most file systems contain provisions for bad-sector marks), and the operating system skips them in the future.

If a file uses a sector that a disk utility marks as 'bad', that sector of the file is remapped to a free sector and any unreadable data is lost. To avoid file corruption, data recovery methods should be attempted first if the operating system finds bad sectors at the file-system level, before the sectors are marked unusable.
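A surface scan of the kind badblocks performs in read-only mode simply reads every sector once and records which reads fail. The sketch below models this in Python; the reader callback and the failing sector numbers are illustrative stand-ins for a raw device read (such as os.pread on /dev/sdX) that raises an I/O error on an unreadable sector.

```python
SECTOR_SIZE = 512

def scan_for_bad_sectors(read_sector, num_sectors):
    """Read every sector once and collect the indices that fail.

    `read_sector(n)` stands in for a raw device read; it raises
    OSError for an unreadable sector, mirroring the EIO a real
    bad sector produces.
    """
    bad = []
    for n in range(num_sectors):
        try:
            read_sector(n)
        except OSError:
            bad.append(n)
    return bad

# Simulated medium: sectors 7 and 19 are unreadable (chosen arbitrarily).
def fake_read(n):
    if n in (7, 19):
        raise OSError(5, "Input/output error")
    return b"\x00" * SECTOR_SIZE

print(scan_for_bad_sectors(fake_read, 32))  # → [7, 19]
```

A real scanner would also retry each failing sector a few times, since marginal sectors can fail intermittently rather than consistently.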

When the firmware of a disk controller finds a sector to be bad or unstable, it remaps the logical sector to a different physical sector. In the normal operation of a hard drive, the detection and remapping of bad sectors should take place transparently to the rest of the system, and before data is lost. Note, however, that physical damage to a hard drive rarely affects only one area of the stored data; a single defect often interferes with parts of many different files.

There are two types of remapping by disk hardware: the P-list (mapping during factory production tests) and the G-list (mapping during consumer usage by the disk microcode).[1]

A variety of utilities can read the Self-Monitoring, Analysis, and Reporting Technology (SMART) information to report how many sectors have been reallocated and how many spare sectors the drive still has.[2] Because reads and writes to G-list sectors are automatically redirected (remapped) to spare sectors, they slow down drive access even if the data on the drive has been defragmented. If the G-list is filling up, it is time to replace the drive.[3]
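For example, the smartmontools utility smartctl prints these counters with `smartctl -A`: attribute 5 (Reallocated_Sector_Ct) is the G-list size, and attribute 197 (Current_Pending_Sector) counts unstable sectors awaiting remap. The sketch below parses an illustrative excerpt of that output; the sample text and its raw values are invented for the example, not taken from a real drive.

```python
# Illustrative excerpt of `smartctl -A` output (values invented).
SMART_OUTPUT = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       3
"""

def raw_value(output, attr_id):
    """Return the raw value of one SMART attribute from smartctl -A text."""
    for line in output.splitlines():
        fields = line.split()
        if fields and fields[0] == str(attr_id):
            return int(fields[-1])
    return None  # attribute not reported by this drive

print(raw_value(SMART_OUTPUT, 5))    # reallocated sectors → 12
print(raw_value(SMART_OUTPUT, 197))  # pending (unstable) sectors → 3
```

A rising raw value for attribute 5 over successive readings is the usual warning sign that the spare pool is being consumed.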

Typically, automatic remapping of sectors only happens when a sector is written to. The logic behind this is presumably that even if a sector cannot be read normally, it may still be readable with data recovery methods. However, if a drive knows that a sector is bad and the drive's controller receives a command to write over it, it will not reuse that sector and will instead remap it to one of its spare-sector regions.[citation needed] This may be the reason why hard disks continue to have sector errors (mostly disk controller timeouts) until all the bad sectors are remapped; typically this is accomplished by writing zeros to the entire drive. See the SMART attribute number 197 ("Current Pending Sector Count") for more information.[4]
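The remap-on-write behavior described above can be sketched as a toy controller model. This is a hypothetical simplification, not real firmware logic: a sector flagged as pending is left alone on reads (its data might still be recoverable), but a write to it retires the sector to the G-list and redirects the logical address to a spare.

```python
class DiskController:
    """Toy model of firmware remap-on-write (illustrative, not real firmware)."""

    def __init__(self, spares):
        self.pending = set()       # unstable sectors awaiting a write
        self.g_list = {}           # logical sector -> spare physical sector
        self.free_spares = list(spares)

    def mark_pending(self, lba):
        """Record a sector that failed to read reliably (SMART attribute 197)."""
        self.pending.add(lba)

    def write(self, lba):
        """Write a logical sector; return the physical sector actually used."""
        if lba in self.pending:
            # Overwriting a known-bad sector: retire it to the G-list
            # and redirect the logical address to a spare sector.
            self.pending.remove(lba)
            self.g_list[lba] = self.free_spares.pop(0)
        return self.g_list.get(lba, lba)

ctrl = DiskController(spares=[1000, 1001])
ctrl.mark_pending(42)
print(ctrl.write(42))   # pending sector remapped to a spare → 1000
print(ctrl.write(7))    # healthy sector written in place → 7
print(ctrl.pending)     # pending count drops to zero → set()
```

This is also why zero-filling a whole drive clears the pending count: every pending sector eventually receives a write and is either verified good or remapped.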

Copy protection

Further information: Copy protection § Early ages

In the 1980s, many software vendors mass-produced floppy disks with deliberately introduced bad sectors for distribution to users of home computers. The disk drives for these computers could not reproduce such a sector: the header information might be duplicated so that different data was read on each pass from different physical sectors bearing the same headers, or the data in the sector might not be readable correctly by the head, among various other techniques. The home computer equipment could only write "good" sectors, so attempts to copy the disk were flawed either because:

  • A sector was deliberately made "bad" so that the disk controller would attempt to read it several times, generally requiring one complete revolution of the media ("spin") for each attempt. This made reading slow, and on a legitimate disk the read would eventually complete with an error. On a copy, the read would complete quickly and report success: but that very success proved the disk was a copy, made without the deliberate bad sector.
  • The same header information was present on the same track more than once for the sector, typically half a spin (180°) apart, depending on the slew rate of the disk and the expected interleaving by the operating system. (Typically, disks are laid out so that the "next" sector to read will be about to pass the head just as the software asks for it.) So the head would read the "same" sector with different information, since two copies were available diametrically opposite and the disk head would see either of the two, depending on when it was asked.
Generally, because of variations in spin speed, the request was made three or four times to see whether different results were obtained. If the same data came back every time, the disk was a copy; if different data came back, it was an original. In both cases the data was successfully read, so a simple XOR of the two results (or similar) could be compared against a known string of characters: the data not only had to differ, but had to differ by an exact bit pattern.
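The XOR check described above can be sketched in a few lines. The function and variable names, and the 4-byte pattern, are illustrative inventions; a real scheme would operate on full 256- or 512-byte sectors.

```python
def looks_original(read_a, read_b, expected_xor):
    """Copy check: two reads of the duplicated-header sector must differ
    by exactly the known XOR pattern, not merely differ or merely match.
    """
    diff = bytes(a ^ b for a, b in zip(read_a, read_b))
    return diff == expected_xor

key = bytes([0x5A, 0xC3, 0x0F, 0xF0])            # known bit pattern (invented)
first_pass = bytes([0x11, 0x22, 0x33, 0x44])      # data from one physical sector
second_pass = bytes(a ^ k for a, k in zip(first_pass, key))  # the twin sector

print(looks_original(first_pass, second_pass, key))  # original disk → True
print(looks_original(first_pass, first_pass, key))   # copy: identical reads → False
```

Requiring an exact XOR pattern, rather than any difference, made the check harder to fool with a copier that simply injected random data on one pass.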

These techniques could generally be circumvented with ease, since the code that read the bad sectors was usually in the bootstrap loader on the disk itself. By reverse engineering and rewriting the bootstrap loader, it could be made not to look for the bad sectors; and since the comparison against the known bit pattern also had to be encoded there, that check could be patched out too.


There were legitimate reasons for doing so, using a purchased disk as the master and the hacked copy as the working disk:

  • Loading time was much faster on the copy disk with its copy protection disabled.
  • The disk image could be copied to faster devices.
  • Without doing so, a home user could not make a backup copy.

There are still legitimate reasons for these techniques in software archaeology, where the original disk drives are rarely available and modern computers run too fast to preserve the delicate timing characteristics on which the protection schemes often relied.
