Data remanence

Data remanence is the residual representation of digital data that remains even after attempts have been made to remove or erase the data. This residue may result from data being left intact by a nominal file deletion operation, by reformatting of storage media that does not remove data previously written to the media, or through physical properties of the storage media that allow previously written data to be recovered. Data remanence may make inadvertent disclosure of sensitive information possible should the storage media be released into an uncontrolled environment (e.g., thrown in the trash or lost).

Various techniques have been developed to counter data remanence. These techniques are classified as clearing, purging/sanitizing, or destruction. Specific methods include overwriting, degaussing, encryption, and media destruction.

Effective application of countermeasures can be complicated by several factors, including media that are inaccessible, media that cannot effectively be erased, advanced storage systems that maintain histories of data throughout the data's life cycle, and persistence of data in memory that is typically considered volatile.

Several standards exist for the secure removal of data and the elimination of data remanence.

Causes

Many operating systems, file managers, and other software provide a facility where a file is not immediately deleted when the user requests that action. Instead, the file is moved to a holding area, to allow the user to easily revert a mistake. Similarly, many software products automatically create backup copies of files that are being edited, to allow the user to restore the original version, or to recover from a possible crash (autosave feature).

Even when an explicit deleted-file retention facility is not provided, or when the user does not use it, operating systems do not actually remove the contents of a file when it is deleted unless they are aware that explicit erasure commands are required, as on a solid-state drive. (In such cases, the operating system issues the Serial ATA TRIM command or the SCSI UNMAP command to let the drive know that it no longer needs to maintain the deleted data.) Instead, they simply remove the file's entry from the file system directory, because this requires less work and is therefore faster, and the contents of the file (the actual data) remain on the storage medium. The data remain there until the operating system reuses the space for new data. In some systems, enough file system metadata is also left behind to enable easy undeletion by commonly available utility software. Even when undeletion has become impossible, the data, until it has been overwritten, can be read by software that reads disk sectors directly. Computer forensics often employs such software.
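As a minimal illustration of why a nominal deletion leaves data behind, the following Python sketch writes a recognizable marker to a file, deletes the file, and then scans the underlying block device for the marker. The device path and mount point are hypothetical, and reading the raw device requires administrator privileges.

    import os

    MARKER = b"SENSITIVE-MARKER-0123456789"   # recognizable byte pattern to search for
    DEVICE = "/dev/sdb1"                       # hypothetical block device backing the filesystem
    PATH = "/mnt/usb/secret.txt"               # hypothetical file on that filesystem

    # Write the marker and "delete" the file; only the directory entry is removed.
    with open(PATH, "wb") as f:
        f.write(MARKER)
        f.flush()
        os.fsync(f.fileno())
    os.remove(PATH)

    # Scan the raw device; finding the marker shows the data survived deletion.
    found = False
    tail = b""
    with open(DEVICE, "rb") as dev:            # requires root privileges
        while True:
            chunk = dev.read(4 * 1024 * 1024)
            if not chunk:
                break
            if MARKER in tail + chunk:
                found = True
                break
            tail = chunk[-len(MARKER):]        # keep overlap so a marker split across chunks is caught

    print("marker still present on medium" if found else "marker not found")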

Likewise, reformatting, repartitioning, or reimaging a system is unlikely to write to every area of the disk, though all will cause the disk to appear empty or, in the case of reimaging, empty except for the files present in the image, to most software.

Finally, even when the storage media is overwritten, physical properties of the media may permit recovery of the previous contents. In most cases, however, this recovery is not possible by just reading from the storage device in the usual way, but requires using laboratory techniques such as disassembling the device and directly accessing or reading from its components.[citation needed]

The section on complications gives further explanations for causes of data remanence.

Countermeasures

There are three levels commonly recognized for eliminating remnant data:

Clearing

Clearing is the removal of sensitive data from storage devices in such a way that there is assurance that the data may not be reconstructed using normal system functions or software file/data recovery utilities.[citation needed] The data may still be recoverable, but not without special laboratory techniques.[1]

Clearing is typically an administrative protection against accidental disclosure within an organization. For example, before a hard drive is re-used within an organization, its contents may be cleared to prevent their accidental disclosure to the next user.

Purging

Purging or sanitizing is the removal of sensitive data from a system or storage device with the intent that the data cannot be reconstructed by any known technique.[citation needed] Purging, proportional to the sensitivity of the data, is generally done before releasing media beyond control, such as before discarding old media or moving media to a computer with different security requirements.

Destruction

The storage media is made unusable for conventional equipment. Effectiveness of destroying the media varies by medium and method. Depending on recording density of the media, and/or the destruction technique, this may leave data recoverable by laboratory methods. Conversely, destruction using appropriate techniques is the most secure method of preventing retrieval.

Specific methods

Overwriting

A common method used to counter data remanence is to overwrite the storage media with new data. This is often called wiping or shredding a file or disk, by analogy to common methods of destroying print media, although the mechanism bears no similarity to these. Because such a method can often be implemented in software alone, and may be able to selectively target only part of the media, it is a popular, low-cost option for some applications. Overwriting is generally an acceptable method of clearing, as long as the media is writable and not damaged.

The simplest overwrite technique writes the same data everywhere—often just a pattern of all zeros. At a minimum, this will prevent the data from being retrieved simply by reading from the media again using standard system functions.
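A minimal Python sketch of such a single-pass zero overwrite of an ordinary file, assuming a conventional writable drive; it does not address the complications discussed below, such as copy-on-write file systems, wear leveling, or remapped sectors.

    import os

    def zero_overwrite(path: str) -> None:
        """Overwrite a file's contents with zeros in place, then delete it."""
        size = os.path.getsize(path)
        block = b"\x00" * (1024 * 1024)
        with open(path, "r+b", buffering=0) as f:
            written = 0
            while written < size:
                n = min(len(block), size - written)
                f.write(block[:n])
                written += n
            f.flush()
            os.fsync(f.fileno())   # push the zeros through to the storage device
        os.remove(path)

    # zero_overwrite("/tmp/example-secret.bin")   # hypothetical target file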

In an attempt to counter more advanced data recovery techniques, specific overwrite patterns and multiple passes have often been prescribed. These may be generic patterns intended to eradicate any trace signatures, for example, the seven-pass pattern: 0xF6, 0x00, 0xFF, random, 0x00, 0xFF, random; sometimes erroneously[clarification needed] attributed to the US standard DoD 5220.22-M.
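A sketch of a multi-pass overwrite implementing the seven-pass pattern quoted above (Python; the block size is illustrative and the target path is supplied by the caller):

    import os

    # The seven-pass pattern quoted above; None denotes a pass of random data.
    PASSES = [0xF6, 0x00, 0xFF, None, 0x00, 0xFF, None]

    def multipass_overwrite(path: str, block_size: int = 1024 * 1024) -> None:
        """Overwrite a file once per entry in PASSES, syncing after each pass."""
        size = os.path.getsize(path)
        with open(path, "r+b", buffering=0) as f:
            for value in PASSES:
                f.seek(0)
                remaining = size
                while remaining > 0:
                    n = min(block_size, remaining)
                    data = os.urandom(n) if value is None else bytes([value]) * n
                    f.write(data)
                    remaining -= n
                f.flush()
                os.fsync(f.fileno())   # ensure each pass reaches the device before the next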

One challenge with an overwrite is that some areas of the disk may be inaccessible, due to media degradation or other errors. Software overwrite may also be problematic in high-security environments which require stronger controls on data commingling than can be provided by the software in use. The use of advanced storage technologies may also make file-based overwrite ineffective (see the discussion below under Complications).

There are specialized machines and software capable of performing such overwriting. The software can sometimes be a standalone operating system specifically designed for data destruction. There are also machines specifically designed to wipe hard drives to the US Department of Defense specification DoD 5220.22-M.[citation needed]

Feasibility of recovering overwritten data

Peter Gutmann investigated data recovery from nominally overwritten media in the mid-1990s. He suggested that magnetic force microscopy may be able to recover such data, and developed specific patterns, for specific drive technologies, designed to counter such recovery.[2] These patterns have come to be known as the Gutmann method.

Daniel Feenberg, an economist at the private National Bureau of Economic Research, claims that the chances of overwritten data being recovered from a modern hard drive amount to "urban legend".[3] He also points to the "18½ minute gap" Rose Mary Woods created on a tape of Richard Nixon discussing the Watergate break-in. Erased information in the gap has not been recovered, and Feenberg claims doing so would be an easy task compared to recovery of a modern high density digital signal.

As of November 2007, the United States Department of Defense considers overwriting acceptable for clearing magnetic media within the same security area/zone, but not as a sanitization method. Only degaussing or physical destruction is acceptable for the latter.[4]

On the other hand, according to the 2006 NIST Special Publication 800-88 (p. 7): "Studies have shown that most of today’s media can be effectively cleared by one overwrite" and "for ATA disk drives manufactured after 2001 (over 15 GB) the terms clearing and purging have converged."[5] An analysis by Wright et al. of recovery techniques, including magnetic force microscopy, also concludes that a single wipe is all that is required for modern drives. They point out that the long time required for multiple wipes "has created a situation where many organisations ignore the issue all together – resulting in data leaks and loss."[6]

Degaussing

Degaussing is the removal or reduction of the magnetic field of a disk or drive, using a device called a degausser that has been designed for the media being erased. Applied to magnetic media, degaussing may purge an entire media element quickly and effectively.

Degaussing often renders hard disks inoperable, as it erases low-level formatting that is only done at the factory during manufacturing. In some cases, it is possible to return the drive to a functional state by having it serviced at the manufacturer. However, some modern degaussers use such a strong magnetic pulse that the motor that spins the platters may be destroyed in the degaussing process, and servicing may not be cost-effective. Degaussed computer tape such as DLT can generally be reformatted and reused with standard consumer hardware.

In some high-security environments, one may be required to use a degausser that has been approved for the task. For example, in US government and military jurisdictions, one may be required to use a degausser from the NSA's "Evaluated Products List".[7]

Encryption

Encrypting data before it is stored on the media may mitigate concerns about data remanence. If the decryption key is strong and carefully controlled, it may effectively make any data on the media unrecoverable. Even if the key is stored on the media, it may prove easier or quicker to overwrite just the key rather than the entire disk. This process is called crypto erase in the security industry.[8]
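The following Python sketch illustrates the idea of crypto erase using the third-party cryptography package: data reaches storage only as AES-GCM ciphertext, so discarding every copy of the key renders the stored bytes unrecoverable. The file path is hypothetical and key handling is greatly simplified for illustration.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party "cryptography" package

    # Encrypt at rest: only ciphertext ever touches the medium.
    key = AESGCM.generate_key(bit_length=256)   # in practice held in a key store, not on the same medium
    aead = AESGCM(key)
    nonce = os.urandom(12)
    ciphertext = aead.encrypt(nonce, b"sensitive payload", None)

    with open("/tmp/record.enc", "wb") as f:    # hypothetical storage location
        f.write(nonce + ciphertext)

    # Crypto erase: destroy the (small) key instead of the (large) ciphertext.
    key = None   # once every copy of the key is gone, the stored ciphertext is computationally unrecoverable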

Encryption may be done on a file-by-file basis, or on the whole disk. Cold boot attacks are one of the few possible methods of subverting full-disk encryption, because there is no possibility of storing the plaintext key in an unencrypted section of the medium. See the section Complications: Data in RAM for further discussion.

Other side-channel attacks (such as keyloggers, acquisition of a written note containing the decryption key, or rubber-hose cryptanalysis) may offer a greater chance of success, but do not rely on weaknesses in the cryptographic method employed. As such, their relevance to this article is minor.

Media destruction

The pieces of a physically destroyed hard disk drive.

Thorough destruction of the underlying storage media is the most certain way to counter data remanence. However, the process is generally time-consuming, cumbersome, and may require extremely thorough methods, as even a small fragment of the media may contain large amounts of data.

Specific destruction techniques include physically breaking the media apart (for example, by grinding or shredding), incineration, melting, applying corrosive chemicals to the recording surfaces, and, for magnetic media, heating the media above their Curie point.

Complications

Inaccessible media areas

Storage media may have areas which become inaccessible by normal means. For example, magnetic disks may develop new bad sectors after data has been written, and tapes require inter-record gaps. Modern hard disks often feature automatic reallocation of marginal sectors or tracks, handled by the drive firmware in a way that is invisible to the operating system. The problem is especially significant in solid-state drives (SSDs), which rely on relatively large relocated bad block tables. Attempts to counter data remanence by overwriting may not be successful in such situations, as data remnants may persist in such nominally inaccessible areas.
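One way to gauge whether such inaccessible areas exist on an ATA drive is to inspect its SMART attributes. The sketch below, assuming the smartmontools package is installed and the caller has root privileges, reads the Reallocated_Sector_Ct attribute; a nonzero raw value indicates remapped sectors whose original contents cannot be reached by a normal overwrite.

    import subprocess

    def reallocated_sector_count(device: str) -> int | None:
        """Return the SMART Reallocated_Sector_Ct raw value for an ATA drive, if reported."""
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=False).stdout
        for line in out.splitlines():
            if "Reallocated_Sector_Ct" in line:
                return int(line.split()[-1])   # raw value is the last column of the attribute row
        return None

    # print(reallocated_sector_count("/dev/sda"))   # hypothetical device path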

Advanced storage systems

Data storage systems with more sophisticated features may make overwrite ineffective, especially on a per-file basis. For example, journaling file systems increase the integrity of data by recording write operations in multiple locations, and applying transaction-like semantics; on such systems, data remnants may exist in locations "outside" the nominal file storage location. Some file systems also implement copy-on-write or built-in revision control, with the intent that writing to a file never overwrites data in-place. Furthermore, technologies such as RAID and anti-fragmentation techniques may result in file data being written to multiple locations, either by design (for fault tolerance), or as data remnants.

Wear leveling can also defeat data erasure, by relocating blocks between the time when they are originally written and the time when they are overwritten. For this reason, some security protocols tailored to operating systems or other software featuring automatic wear leveling recommend conducting a free-space wipe of a given drive and then filling as much of the drive as possible with many small, easily identifiable "junk" files or files containing other nonsensitive data, leaving only the amount of free space necessary for satisfactory operation of system hardware and software. As storage or system demands grow, the "junk data" files can be deleted as necessary to free up space; even if the deletion of "junk data" files is not secure, their initial nonsensitivity reduces to near zero the consequences of recovery of data remanent from them.[citation needed]
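A minimal Python sketch of the junk-file portion of this approach, which fills the filesystem containing a given directory with identifiable nonsensitive data until only a chosen reserve of free space remains (the directory and sizes are illustrative, and the sketch assumes a Unix-like system):

    import os
    import uuid

    def fill_free_space(directory: str, chunk_mb: int = 64, reserve_mb: int = 512) -> list[str]:
        """Write junk files into `directory` until only `reserve_mb` of free space remains."""
        junk = []
        block = b"JUNK" * (chunk_mb * 1024 * 1024 // 4)   # easily identifiable nonsensitive data
        while True:
            stat = os.statvfs(directory)
            free_mb = stat.f_bavail * stat.f_frsize // (1024 * 1024)
            if free_mb <= reserve_mb + chunk_mb:
                break
            path = os.path.join(directory, f"junk-{uuid.uuid4().hex}.bin")
            with open(path, "wb") as f:
                f.write(block)
                f.flush()
                os.fsync(f.fileno())
            junk.append(path)
        return junk   # these files can later be deleted, insecurely, as space is needed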

Optical media

As optical media are not magnetic, they are not erased by conventional degaussing. Write-once optical media (CD-R, DVD-R, etc.) also cannot be purged by overwriting. Read/write optical media, such as CD-RW and DVD-RW, may be receptive to overwriting. Methods for successfully sanitizing optical discs include delaminating or abrading the metallic data layer, shredding, incinerating, destructive electrical arcing (as by exposure to microwave energy), and submersion in a polycarbonate solvent (e.g., acetone).

Data on solid-state drives

Research[9] from the Center for Magnetic Recording and Research, University of California, San Diego, has uncovered problems inherent in erasing data stored on solid-state drives (SSDs). The researchers identified three problems with file storage on SSDs:

First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs.[9]: 1 

Solid-state drives, which are flash-based, differ from hard-disk drives in two ways: first, in the way data is stored; and second, in the algorithms used to manage and access that data. These differences can be exploited to recover previously erased data. SSDs maintain a layer of indirection between the logical addresses used by computer systems to access data and the internal addresses that identify physical storage. This layer of indirection hides idiosyncratic media interfaces and enhances SSD performance, reliability, and lifespan (see wear leveling), but it can also produce copies of the data that are invisible to the user and that a sophisticated attacker could recover. For sanitizing entire disks, sanitize commands built into the SSD hardware have been found to be effective when implemented correctly, and software-only techniques for sanitizing entire disks have been found to work most, but not all, of the time.[9]: section 5  In testing, none of the software techniques were effective for sanitizing individual files. These included well-known algorithms such as the Gutmann method, US DoD 5220.22-M, RCMP TSSIT OPS-II, Schneier 7 Pass, and Mac OS X Secure Erase Trash.[9]: section 5 
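The effect of this indirection can be shown with a toy model of a flash translation layer in Python: overwriting a logical block redirects the write to a fresh physical page, while the original physical copy remains until garbage collection. The model is deliberately simplified and does not represent any particular SSD firmware.

    class ToyFTL:
        """Toy flash translation layer: every logical write goes to a fresh physical page."""
        def __init__(self, physical_pages: int = 16):
            self.flash = [None] * physical_pages   # physical pages
            self.map = {}                          # logical page -> physical page
            self.next_free = 0

        def write(self, logical: int, data: bytes) -> None:
            self.flash[self.next_free] = data      # always program a fresh page
            self.map[logical] = self.next_free     # remap; the old page is merely orphaned
            self.next_free += 1

        def read(self, logical: int) -> bytes:
            return self.flash[self.map[logical]]

    ftl = ToyFTL()
    ftl.write(0, b"secret")
    ftl.write(0, b"\x00" * 6)                      # host "overwrites" logical page 0 with zeros
    print(ftl.read(0))                             # the host sees zeros...
    print(ftl.flash[0])                            # ...but b"secret" still sits in an orphaned physical page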

The TRIM feature in many SSD devices, if properly implemented, will eventually erase data after it is deleted, but the process can take some time, typically several minutes. Many older operating systems do not support this feature, and not all combinations of drives and operating systems work.[10]
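On Linux, a mounted filesystem can be asked to issue TRIM for all of its unused blocks with the util-linux fstrim utility; the Python sketch below simply wraps that command. Root privileges are required, and whether and when the device actually erases the trimmed blocks depends on its firmware.

    import subprocess

    def trim_mounted_filesystem(mountpoint: str) -> str:
        """Ask the kernel to issue TRIM/UNMAP for all unused blocks of a mounted filesystem."""
        result = subprocess.run(["fstrim", "-v", mountpoint],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    # print(trim_mounted_filesystem("/"))   # e.g. "/: 12.3 GiB (...) trimmed"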

Data in RAM

Data remanence has been observed in static random-access memory (SRAM), which is typically considered volatile (i.e., the contents degrade with loss of external power). In one study, data retention was observed even at room temperature.[11]

Data remanence has also been observed in dynamic random-access memory (DRAM). Modern DRAM chips have a built-in self-refresh module, as they not only require a power supply to retain data, but must also be periodically refreshed to prevent their data contents from fading away from the capacitors in their integrated circuits. A study found data remanence in DRAM with data retention of seconds to minutes at room temperature and "a full week without refresh when cooled with liquid nitrogen."[12] The study authors were able to use a cold boot attack to recover cryptographic keys for several popular full disk encryption systems, including Microsoft BitLocker, Apple FileVault, dm-crypt for Linux, and TrueCrypt.[12]: 12 

Despite some memory degradation, the authors of the above-described study were able to take advantage of redundancy in the way keys are stored after they have been expanded for efficient use, such as in key scheduling. The authors recommend that computers be powered down, rather than left in a "sleep" state, when not under the physical control of the owner. In some cases, such as certain modes of the software program BitLocker, the authors recommend that a boot password or a key on a removable USB device be used.[12]: 12  TRESOR is a kernel patch for Linux specifically intended to prevent cold boot attacks on RAM by ensuring that encryption keys are neither user-accessible nor stored in RAM.

Standards

Australia
  • ASD ISM 2014, Australian Government Information Security Manual, 2014 [13]
Canada
  • RCMP, IT Media Overwrite and Secure Erase Products, May 2009 [14]
  • CSE, Clearing and Declassifying Electronic Data Storage Devices, July 2006 [15]
New Zealand
  • GCSB NZISM 2016, New Zealand Information Security Manual v2.5, July 2016 [16]
  • NZSIS PSM 2009, Protective Security Manual
United Kingdom
United States
  • NIST Special Publication 800-88, Guidelines for Media Sanitization, September 2006 [1]
  • DoD 5220.22-M, National Industrial Security Program Operating Manual (NISPOM), February 2006 [18]
    • Current editions no longer contain any references to specific sanitization methods. Standards for sanitization are left up to the Cognizant Security Authority.[18]
    • Although the NISPOM text itself never described any specific methods for sanitization, past editions (1995 and 1997)[19] did contain explicit sanitization methods within the Defense Security Service (DSS) Clearing and Sanitization Matrix inserted after Section 8-306. The DSS still provides this matrix and it continues to specify methods.[4] As of the Nov 2007 edition of the matrix, overwriting is no longer acceptable for sanitization of magnetic media. Only degaussing (with an NSA approved degausser) or physical destruction is acceptable.
  • Army AR380-19, Information Systems Security, February 1998 [20] (replaced by AR 25-2, Army Publishing Directorate, 2009: http://www.apd.army.mil/pdffiles/r25_2.pdf)
  • Air Force AFSSI 8580, Remanence Security, 17 November 2008[21]
  • Navy NAVSO P5239-26, Remanence Security, September 1993 [22]

See also

References

  1. ^ a b "Special Publication 800-88: Guidelines for Media Sanitization Rev. 1" (PDF). NIST. 6 September 2012. Retrieved 2014-06-23.
  2. ^ Peter Gutmann (July 1996). "Secure Deletion of Data from Magnetic and Solid-State Memory". Retrieved 2007-12-10.
  3. ^ Daniel Feenberg. "Can Intelligence Agencies Recover Overwritten Data?". Retrieved 2007-12-10.
  4. ^ a b "DSS Clearing & Sanitization Matrix" (PDF). DSS. 2007-06-28. Retrieved 2010-11-04.
  5. ^ "Special Publication 800-88: Guidelines for Media Sanitization" (PDF). NIST. September 2006. Retrieved 2007-12-08.
  6. ^ Wright, Craig; Kleiman, Dave; Shyaam, Sundhar R.S. (December 2008). "Overwriting Hard Drive Data: The Great Wiping Controversy". Lecture Notes in Computer Science. Springer Berlin / Heidelberg: 243–257. doi:10.1007/978-3-540-89862-7_21. ISBN 978-3-540-89861-0.
  7. ^ "Media Destruction Guidance". NSA. Retrieved 2009-03-01.
  8. ^ Trusted Computing Group (2010). "10 Reasons to Buy Self-Encrypting Drives" (PDF). Trusted Computing Group. Retrieved 2013-04-30.
  9. ^ a b c d Michael Wei; Laura M. Grupp; Frederick E. Spada; Steven Swanson (February 2011). "Reliably Erasing Data From Flash-Based Solid State Drives" (PDF).
  10. ^ "Digital Evidence Extraction Software for Computer Forensic Investigations". Forensic.belkasoft.com. October 2012. Retrieved 2014-04-01.
  11. ^ Sergei Skorobogatov (June 2002). "Low temperature data remanence in static RAM". University of Cambridge, Computer Laboratory.
  12. ^ a b c J. Alex Halderman; et al. (July 2008). "Lest We Remember: Cold Boot Attacks on Encryption Keys" (PDF).
  13. ^ "Australia Government Information Security Manual" (PDF). Australian Signals Directorate. 2014.
  14. ^ "IT Media Overwrite and Secure Erase Products" (PDF). Royal Canadian Mounted Police. May 2009.
  15. ^ "Clearing and Declassifying Electronic Data Storage Devices" (PDF). Communications Security Establishment. July 2006.
  16. ^ "New Zealand Information Security Manual v2.5" (PDF). Government Communications Security Bureau. July 2016.
  17. ^ http://www.adisa.org.uk
  18. ^ a b "National Industrial Security Program Operating Manual" (PDF). DSS. February 2006. Retrieved 2010-09-22.
  19. ^ "Obsolete NISPOM" (PDF). January 1995. Retrieved 2007-12-07. with the Defense Security Service (DSS) Clearing and Sanitization Matrix; includes Change 1, July 31, 1997.
  20. ^ "Information Systems Security" (PDF). February 1998.
  21. ^ AFI 33-106
  22. ^ "Remanence Security Guidebook". September 1993.

Further reading