Backdoor (computing)

From Wikipedia, the free encyclopedia

A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication, securing unauthorized remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain undetected. The backdoor may take the form of an installed program (e.g., Back Orifice) or may subvert the system through a rootkit.[1]

Default passwords can function as backdoors if they are not changed by the user. Some debugging features can also act as backdoors if they are not removed in the release version.[2]


The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference.[3] They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under ARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.[4]

A backdoor in a login system might take the form of a hard-coded user and password combination which gives access to the system. A famous example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hard-coded password (his dead son's name) which gave access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence).
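In code, such a backdoor can be as small as one extra comparison. A minimal Python sketch follows; the account database and the "maintenance"/"joshua" credential pair are invented for illustration, echoing the film's premise rather than any real system:

```python
# Sketch of a login routine with a hard-coded backdoor credential.
# The database and the "maintenance"/"joshua" pair are invented
# for this example.

REAL_PASSWORDS = {"alice": "s3cret"}  # stand-in for the real account database

def check_password(user, password):
    # The backdoor: a fixed pair wired into the code itself,
    # bypassing the account database entirely.
    if user == "maintenance" and password == "joshua":
        return True
    return REAL_PASSWORDS.get(user) == password
```

Anyone who learns (or reverse engineers) the fixed pair gains access regardless of the account database, which is why such a backdoor is equivalent in effect to a leaked master password.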

An attempt to plant a backdoor in the Linux kernel, exposed in November 2003, showed how subtle such a code change can be.[5] In this case, a two-line change appeared to be a typographical error (a single "=", assignment, where "==", comparison, was intended), but actually caused the sys_wait4 function to set the calling process's user ID to 0 (root) when invoked with a particular invalid flag combination, giving the caller root access to the system.[6]

Although the prevalence of backdoors in systems using proprietary software (software whose source code is not publicly available) cannot be easily measured, such backdoors are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.

It is also possible to create a backdoor without modifying a program's source code, or even its compiled binary. This can be done by rewriting the compiler so that it recognizes, during compilation, code that should trigger inclusion of a backdoor in the compiled output. When the compromised compiler finds such code, it compiles it as normal, but also inserts a backdoor (perhaps a password-recognition routine). When a user later provides that input, they gain access to some (likely undocumented) aspect of the program's operation. This attack was first outlined by Ken Thompson in his famous paper Reflections on Trusting Trust (see below).
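The mechanism can be sketched with a toy "compiler" that merely passes Python source through unchanged, except when it recognizes a login routine being compiled. The marker string, payload, and password below are all invented for illustration; a real compiler would match patterns in parsed code, not raw text:

```python
# Toy sketch of the compromised-compiler attack: "compilation" here is
# just a pass-through of Python source. The marker, payload, and
# password are invented for this example.

MARKER = "def check_password(user, password):\n"
PAYLOAD = (
    '    if password == "letmein":  # injected by the compiler\n'
    "        return True\n"
)

def compromised_compile(source):
    # Compile (here: pass through) the source; when the login check is
    # being compiled, silently splice in the backdoor first.
    if MARKER in source:
        source = source.replace(MARKER, MARKER + PAYLOAD)
    return source

login_src = (
    "def check_password(user, password):\n"
    "    return lookup(user) == password\n"
)
compiled = compromised_compile(login_src)  # backdoored output
```

The author of login_src never sees the injected check; it exists only in the compiled output, which is exactly what makes source-code review insufficient here.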

Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit distributed silently on millions of music CDs through late 2005, are intended as DRM measures—and, in that case, as data gathering agents, since both surreptitious programs they installed routinely contacted central servers.

A traditional backdoor is a symmetric backdoor: anyone who finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology: Crypto '96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g., via publishing, or being discovered and disclosed by reverse engineering). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; they can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology. Notably, the NSA inserted a kleptographic backdoor into the Dual_EC_DRBG standard.[7]
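The asymmetric property itself can be illustrated with textbook RSA and toy numbers. This shows only the general principle, not the actual Young and Yung SETUP construction: the planted backdoor embeds the attacker's public key, so even a reverse engineer who extracts every constant from it still cannot use it.

```python
# Toy illustration of the asymmetric property only, using textbook RSA
# with tiny numbers; this is NOT the Young-Yung SETUP construction.
# The planted backdoor embeds the attacker's PUBLIC key (n, e); a
# reverse engineer who extracts it still cannot recover leaked
# secrets, because that requires the attacker's private exponent d.
import math

# Attacker's keypair, generated once, offline.
p, q = 61, 53
n = p * q                                  # embedded in the backdoor
e = 17                                     # embedded in the backdoor
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
d = pow(e, -1, lam)                        # only the attacker holds this

def backdoor_leak(secret):
    # What the compromised component emits: the secret encrypted to
    # the attacker's public key. Publishing this code reveals (n, e)
    # but lets no one else decrypt the leaked value.
    return pow(secret, e, n)

def attacker_recover(ciphertext):
    # Only someone holding d can invert the leak.
    return pow(ciphertext, d, n)
```

A leaked secret s round-trips as attacker_recover(backdoor_leak(s)) == s, while any third party who finds the backdoor faces the (here tiny, in practice infeasible) problem of factoring n.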

There exists an experimental asymmetric backdoor in RSA key generation. This OpenSSL RSA backdoor was designed by Young and Yung, utilizes a twisted pair of elliptic curves, and has been made available.[8]

In January 2014, a backdoor was discovered in certain Samsung Android products, such as Galaxy devices. Samsung's proprietary Android versions are fitted with a backdoor that provides remote access to the data stored on the device. In particular, the Samsung Android software in charge of handling communications with the modem, using the Samsung IPC protocol, implements a class of requests known as remote file server (RFS) commands, which allow the backdoor operator to perform remote I/O operations on the device's storage via the modem. As the modem is running Samsung's proprietary software, it is likely that it offers over-the-air remote control that could then be used to issue the RFS commands and thus to access the file system on the device.[9]

List of known backdoors in standards

  • The MD5 hash was shown to have several weaknesses by Hans Dobbertin in 1996. These weaknesses could allow an attacker to substitute his own item for an MD5-signed original.[10] Malicious code could thus be introduced onto a system.
  • Turner and Chen wrote in RFC 6149 that "MD2 must not be used for digital signatures" because signatures made with it could be forged.[11][12] Malicious code could thus be introduced onto a system.
  • Turner and Chen wrote in RFC 6150 that MD4 "must not be used to hash a cryptographic key of 80 bits or longer" and that attacks on MD4 are practical.[13] Malicious code could thus be introduced onto a system.
  • SHA-0 (aka FIPS-180) was withdrawn after CRYPTO '98.[14]
  • SHA-1 (aka FIPS-180-1) was shown to be attackable in 2005 by Eli Biham and co-authors, as well as by Vincent Rijmen and Elisabeth Oswald.[14]
  • The Dual_EC_DRBG cryptographically secure pseudorandom number generator was revealed in 2013 to possibly have a kleptographic (asymmetric) backdoor deliberately inserted by the NSA (see Kleptography), which also held the private key to the backdoor.[1][15]

Reflections on Trusting Trust

Ken Thompson's Reflections on Trusting Trust, his Turing Award acceptance speech in 1984, was the first major paper to describe black box backdoor issues, and points out that trust is relative.[16] It describes a backdoor mechanism based on the fact that people only review source (human-written) code, and not compiled machine code. A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.

Thompson's paper describes a modified version of the Unix C compiler that would:

  • Put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and, as a twist,
  • also add this feature undetectably to future compiler versions upon their compilation.

Because the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) What's worse, in Thompson's proof of concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not actually see the real code that was running, but something else instead. This version was, officially, never released into the wild. It is believed, however, that a version was distributed to BBN and at least one use of the backdoor was recorded.[17]
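The two-part trick can be modeled in drastically simplified form: here "binaries" are Python source strings, "running" a compiler means exec'ing one, and a format()/repr() quine stands in for the machine-code self-replication Thompson describes. All markers, names, and the password are invented for illustration:

```python
# Simplified model of the two tasks: "binaries" are Python source
# strings and "running" a compiler means exec'ing one. The quine trick
# lets the trojan splice ITSELF back in whenever the clean compiler
# source is recompiled, so the source stays clean while every
# generation of the binary stays dirty.

INJECT_TEMPLATE = '''def compile(source):
    login = "def check_password(user, pw):\\n"
    payload = "    if pw == \\"backdoor\\": return True\\n"
    if login in source:                       # task 1: backdoor login
        source = source.replace(login, login + payload)
    tmpl = {tmpl!r}
    evil = tmpl.format(tmpl=tmpl)             # rebuild our own source
    clean = "def compile(source):\\n    return source\\n"
    if clean in source:                       # task 2: self-propagate
        source = source.replace(clean, evil)
    return source
'''
EVIL_BINARY = INJECT_TEMPLATE.format(tmpl=INJECT_TEMPLATE)

CLEAN_COMPILER_SRC = "def compile(source):\n    return source\n"
LOGIN_SRC = "def check_password(user, pw):\n    return pw == real_pw(user)\n"

def run_compiler(binary, program):
    ns = {}
    exec(binary, ns)
    return ns["compile"](program)

# Recompiling the clean compiler source with the trojaned binary
# reproduces the trojaned binary...
gen2 = run_compiler(EVIL_BINARY, CLEAN_COMPILER_SRC)
# ...and the new generation still backdoors the login program.
backdoored_login = run_compiler(gen2, LOGIN_SRC)
```

Reviewing CLEAN_COMPILER_SRC or LOGIN_SRC reveals nothing; the subversion lives only in the binary lineage, which is the paper's central point.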

A real-world version of this attack was discovered in August 2009 by Sophos labs: the W32/Induc-A virus infected the program compiler for Delphi, a Windows programming language. The virus introduced its own code into the compilation of new Delphi programs, allowing it to infect and propagate to many systems without the knowledge of the software programmer. An attack that propagates by building its own Trojan horse can be especially hard to discover. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered.[18]

Once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, it is very hard for the "rightful" user to regain control of the system. However, several practical weaknesses in the Trusting Trust scheme have been suggested. For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to hide the Trojan horse, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing your own disassembler from scratch.

A generic method to counter trusting-trust attacks is called Diverse Double-Compiling (DDC). The method requires a different compiler and the source code of the compiler under test. That source, compiled with both compilers, results in two different stage-1 compilers; the same source compiled with both stage-1 compilers must then result in two identical stage-2 compilers. A formal proof is given that the latter comparison guarantees that the purported source code and executable of the compiler under test correspond, under some assumptions. This method was applied by its author to verify that the C compiler of the GCC suite (v. 3.0.4) contained no Trojan, using icc (v. 11.0) as the different compiler.[19]

In practice, such verifications are difficult and rarely carried out. A user with a serious concern that the compiler was compromised would be better off avoiding it altogether, along with all the executables compiled with it, rather than carrying out a thorough verification of the binary; and a user without such a concern cannot practically be expected to undertake the vast amount of work required.
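The DDC comparison can be modeled with toy compilers: "binaries" are Python source strings, "compiling" is the identity transformation (so honest stage-2 outputs must match exactly), and a __SELF__ hook, an artifact of this model only, lets a trojaned binary copy itself:

```python
# Minimal model of Diverse Double-Compiling. "Binaries" are Python
# source strings and "compilation" is the identity transformation, so
# honest stage-2 outputs must match exactly. The __SELF__ hook is an
# artifact of this toy model, standing in for real self-propagation.

def run_binary(binary, source):
    ns = {"__SELF__": binary}   # toy hook: a binary can see its own text
    exec(binary, ns)
    return ns["compile"](source)

def trusted_compile(source):
    # The independent, trusted "different compiler": a plain pass-through.
    return source

def ddc_matches(binary_under_test, source_under_test):
    # Stage 1: compile the purported source with both compilers.
    stage1_trusted = trusted_compile(source_under_test)
    stage1_tested = run_binary(binary_under_test, source_under_test)
    # Stage 2: compile the same source with both stage-1 results.
    stage2_trusted = run_binary(stage1_trusted, source_under_test)
    stage2_tested = run_binary(stage1_tested, source_under_test)
    return stage2_trusted == stage2_tested   # identical if honest

CLEAN = "def compile(source):\n    return source\n"
TROJAN = (
    "def compile(source):\n"
    "    if source.startswith('def compile'):\n"
    "        return __SELF__\n"              # propagate into compilers
    "    return source\n"
)
```

Here ddc_matches(CLEAN, CLEAN) holds, while ddc_matches(TROJAN, CLEAN) fails: the self-propagating TROJAN binary cannot reproduce what the clean source would produce under an independent compiler.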


  1. ^ a b "How a Crypto ‘Backdoor’ Pitted the Tech World Against the NSA" (Zetter) 24 Sep 2013
  2. ^
  3. ^ H.E. Petersen, R. Turn. "System Implications of Information Privacy". Proceedings of the AFIPS Spring Joint Computer Conference, vol. 30, pages 291–300. AFIPS Press: 1967.
  4. ^ Security Controls for Computer Systems, Technical Report R-609, WH Ware, ed, Feb 1970, RAND Corp.
  5. ^ Larry McVoy (November 5, 2003) Linux-Kernel Archive: Re: BK2CVS problem.
  6. ^ Thwarted Linux backdoor hints at smarter hacks; Kevin Poulsen; SecurityFocus, 6 November 2003.
  7. ^ G+M: "The strange connection between the NSA and an Ontario tech firm" 20 Jan 2014
  8. ^ page on OpenSSL RSA backdoor
  9. ^ "Samsung Galaxy Back-door" 28 Jan 2014
  10. ^ German Security Information Agency: "Cryptanalysis of MD5 Compress" (Dobbertin) 2 May 1996
  11. ^ Turner & Chen: "RFC 6149 - MD2 to Historic Status" March 2011
  12. ^ Gary Kessler, "An Overview of Cryptography", sec 3.3
  13. ^ Turner & Chen: "RFC 6150 - MD4 to Historic Status" March 2011
  14. ^ a b Biham et al., "Advances in Cryptology - EUROCRYPT 2005: 24th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Aarhus, Denmark, May 22-26, 2005", LNCS 3494, pp. 36-57.
  15. ^ "N.S.A. Able to Foil Basic Safeguards of Privacy on Web" (Perlroth et al) 5 Sep 2013
  16. ^ Thompson, Ken (August 1984), Reflections on Trusting Trust, Communications of the ACM 27 (8): 761–763, doi:10.1145/358198.358210 
  17. ^ Jargon File entry for "backdoor", describes Thompson compiler hack
  18. ^ Compile-a-virus — W32/Induc-A Sophos labs on the discovery of the Induc-A virus
  19. ^ David A. Wheeler (7 December 2009). "Fully Countering Trusting Trust through Diverse Double-Compiling". Retrieved 19 December 2013. 

External links