Trusted Computing
Introduction
Trusted Computing originated as a computer architecture intended to help protect data in commercial computers from software and physical attack. The term now encompasses commercial methods of computer security, storage security and network security. Trusted Computing products such as PCs, servers, self-encrypting Hard Disk Drives, computer network equipment, and infrastructure are currently produced for commercial use. The Trusted Computing architecture can prevent access to data on the wrong computer, prevent access to data unless the correct authorisation is provided, and prevent access to data by inappropriate software. It can be used to prevent access to data in lost or stolen computers, protect data from attack by rootkits, and help protect data in normal use by hypervisors, Operating Systems and applications. Trusted Computing is nevertheless controversial, not least because data is protected on behalf of its owner, not on behalf of a computer’s owner. Trusted Computing is specified by the Trusted Computing Group (TCG).
Overview
Trusted Computers are a type of trusted system. Trusted Computing technology implements a process of trust. In everyday life, trust in an entity requires (1) unambiguous identification of the entity, plus (2) personal experience indicating that the entity’s behaviour is acceptable (or attestation from a trusted third party indicating that the entity’s behaviour is acceptable), plus (3) assurance that everything is normal (so that the entity’s behaviour should be normal). Trusted Computing implements (1) identification via cryptographic identities in the form of uncorrelated aliases of the same computer; (2) attestation for the acceptable behaviour of software in the form of digital signatures from trusted third parties (such as manufacturers); and (3) evidence of normality in the form of reliable evidence of the computer’s software environment.
Using Trusted Computing
Current Trusted Computers are used primarily to “protect data at rest”, meaning data stored while a computer is switched off and at the moment when data is retrieved. Trusted Computing provides an alternative to secure computing, where a computer has a particular configuration that is known to protect data. In contrast, Trusted Computing allows a choice of the configuration used to protect data, and could be used to produce computers with a spectrum of security properties lying between ordinary computers and secure computers. Different data can be protected by different configurations, typically trading the degree of protection against the degree of usability and convenience. To approximate a secure computer, Trusted Computing functionality should be used in conjunction with hardware, firmware, hypervisors, Operating Systems and applications that are designed to provide a safe data environment.
How Trusted Computers work
A Trusted Computer protects data by mechanisms that lock cryptographic keys, passwords and other small pieces of data to a description of a particular computer environment, and provides methods for recognising a computer environment. More precisely, a Trusted Computer releases cryptographic keys and passwords only to designated computer environments, meaning selected computers executing selected firmware, hypervisors, OSs, or applications. This helps prevent undesirable computer environments from obtaining access to keys that decrypt sensitive data, or to keys or passwords that provide the authorisation or authentication needed to access other systems.
Trusted Computing technology
The Trusted Computing initiative has resulted in a new computer chip called the Trusted Platform Module (TPM) and has caused alterations to the firmware that boots a computer. It has prompted alterations to chip sets to isolate one computer program from another on a Personal Computer, and promoted changes to Hard Disk Drives and networking equipment.
Strength of Protection
All TCG specifications are freely available once published and can be inspected by anyone. They are intended to be inspected by the information security community in particular, to search for security weaknesses. No fundamental architectural weaknesses have been identified. The current TPM (version 1.2) uses the SHA-1 hash algorithm, which has known weaknesses, but this is not perceived as a crisis for TPMs because of the way that the TPM uses hash algorithms. The components that implement Trusted Computing functionality use full-strength standard cryptographic algorithms, including RSA. The components are designed to resist all software attacks. The TPM is designed to resist physical attacks unless an adversary has expert levels of insider information and skill, time, access to sophisticated equipment, and (of course) access to an individual TPM. The damage that would be done by a successful attack is limited to revealing just the secrets protected by that particular TPM, because the Trusted Computing architecture has no shared secrets.
The Trusted Computing architecture is criticised for its brittleness, in that the architecture relies upon cryptographic digests of software to identify the environment in a platform. As a result, even an innocuous change can alter a measured environment and render data inaccessible. This issue has been ameliorated by the development of curtained environments within computers. The method of identifying a computing environment is also criticised for its complexity, simply because an OS is a complex and often dynamic data structure. It is believed, however, to be practical to measure software that enforces policies, such as might be implemented by hypervisors and OSs, and software in curtained environments.
Trusted Platform Module
The best-known aspect of Trusted Computing is the TPM chip. Formally, the TPM is just an abstract concept that can be implemented in any way that satisfies the TPM specifications. No `backdoor access to secrets’ is specified in the TPM specifications, and the specifications explicitly forbid the inclusion of a backdoor. The best-known current use of TPMs is Microsoft’s BitLocker technology. TPMs are also used by the software typically provided by computer OEMs to help protect data and credentials at rest in their products. The TCG aims to provide a list of TPM products whose functionality has been verified against a test suite and whose security properties have passed examination by a Common Criteria test laboratory. One TPM has been assessed by the British Government. Several hundred million computers with TPMs have been sold, but it is estimated that, in the absence of trusted hypervisors and OSs, only a small percentage of TPMs are currently used. The TCG’s specification for TPMs has been adopted by the International Organization for Standardization (ISO) as international standard ISO/IEC 11889. The U.S. Army requires that every new small PC it purchases must come with a TPM. As of July 3, 2007, so does virtually the entire Department of Defense.
Trusted Computing Products
Most business-class laptops and some servers incorporate Trusted Computing technology. Some Trusted Computing computer networking equipment and infrastructure is available. Software for using Trusted Computing and for providing a Trusted Computing infrastructure is available from TPM vendors, computer vendors, and other commercial companies. Free software, such as the TrouSerS software stack and the Privacy-CA component of a Trusted Computing infrastructure, is available for Linux systems. Trusted hypervisors and trusted Operating Systems are still under development.
Controversy
Trusted Computing is a disruptive technology. The TCG has published a Best Practices document, which describes how the technology should be used responsibly. The concerns appear to be that (1) the architecture introduces a higher threshold (some would say a barrier) for development and deployment of computer systems; (2) the technology could be used to prevent fair usage of data or to lock data to particular applications; (3) Trusted Computing could make it easier to identify computers; or (4) simply that the technology could frustrate data recovery or archive retrieval.
Safe environments
The Trusted Computing architecture acknowledges that data protection depends on the environment in which data is used. This raises the question of what is a safe environment and what is not. A software developer might believe that they have constructed a safe environment, but what does “safe” mean and how can others be convinced? There are, after all, different levels of safety and there is no such thing as absolute safety. Methods such as the Common Criteria or FIPS are typically used to verify that computers or software are safe. Common Criteria and FIPS test laboratories assess whether products are safe to some specified level, and provide certificates for those products. These product examinations typically require considerable investment of skill, time, and money, and constrain development and manufacturing processes. Unfortunately, hobbyists and some manufacturers may be incapable of that investment. The investment cannot be avoided, however, if a data owner genuinely needs to know that an environment is safe. In practice, the expectation is that most users of commercial computer systems will be satisfied knowing just that a particular software environment is as its creator intended (not altered by a virus, for example). In that case, all the developer need do is measure and sign their software, so the data owner can recognise both it and its creator.
Fair usage and lock-in
A third party could demand that a computer owner uses a Digital Rights Management system based on Trusted Computing technology to protect the third party’s data on the owner's computer. In that case, the technology would protect the third party’s data from the user. The user would be unable to do anything with the data that wasn’t explicitly permitted by the data owner, even if the user wanted to use the data as permitted by law (under the “fair usage” provision of a copyright law, for example). It’s also theoretically possible that if data was developed using one application, it might never be accessible with a different application because the first application forever locked the data to an environment that contains only the first application. One technological way to avoid these issues would be to allow computer owners complete control over the environments used to protect data on their computers. This has the disadvantages, however, that it is contrary to a security principle called `separation of privileges’, and whatever mechanisms give control to computer owners would be a prime target for attack. The current resolution of this issue is to require data owners to behave responsibly.
Identification of computers
Reporting the software environment in computers could provide a new way of tracking computers (if a platform is executing unusual software such as a customised OS, instead of mass-market software, for example). More sophisticated forms of measurement could address this issue, by reporting just the properties of software, or reporting just the names of entities that are prepared to vouch for software. Such methods require more study, and may or may not turn out to be practical.
Data recovery and retrieval
It is more complex to retrieve and recover protected data in a Trusted Computing system than unprotected data in an ordinary computer system. (Trusted Computing provides methods to backup and recover protected data, and copy protected data between platforms.) The current resolution of this issue is to rely upon self interest and trust that data owners will not be overzealous in their use of Trusted Computing. There is little advantage in using Trusted Computing to protect data if it is not sensitive.
Basic principles of Trusted Computing
Several books describe the principles of Trusted Computing. One recent book is `Trusted Computing’ (Mitchell (ed)) ISBN 0863415253.
Roots of Trust
A Trusted Computer must have at least three Roots-of-Trust (RoT), meaning computing engines that must operate as intended. If they do not, for whatever reason, Trusted Computing cannot work properly. The three essential RoTs are a Root-of-Trust-for-Measurement (RTM) that starts the process of measuring software in a computer, a Root-of-Trust-for-Storage (RTS) that records the measurements, and a Root-of-Trust-for-Reporting (RTR) that reports the measurements. The RTM varies with the type of computer; in a PC, for example, it is usually implemented as BIOS firmware executing on the computer’s main processor. The RTS and RTR are normally implemented in the TPM.
Platform Configuration Registers
A Trusted Computer must keep a record of the history of a computer from the point after which software that previously executed on the computer cannot affect future software executed on the computer. The process is started by the RTM, which measures parts of a computer that include the next measurement software, records the measurement inside a Platform Configuration Register (PCR) in the TPM, and then passes the measurement task to the next measurement software. Typically the next measurement software measures other parts of a computer that include more measurement software, records the measurement in a PCR in the TPM, and then passes the measurement task to that measurement software, and so on. The PCRs recursively record measurement values using a hash function. This makes it infeasible to compute the value necessary to make an arbitrary existing PCR value change to a desired future value. Hence it is effectively impossible for a rogue to change an existing PCR value into a value that represents measurements of benign software. Therefore the values in PCRs either accurately summarise the software history of a platform or are corrupt. (The computer’s actual software history is recorded in an ordinary audit log, which can be verified using the values in the PCRs.)
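The recursive recording described above can be sketched in a few lines. The function name and measurement values below are illustrative, not TPM API calls; the only assumption taken from the architecture is that a TPM 1.2 PCR is a 20-byte SHA-1 register that can only be extended, never written directly.

```python
import hashlib

def extend(pcr: bytes, measurement_digest: bytes) -> bytes:
    # New PCR value = SHA-1(old PCR value || measurement digest).
    return hashlib.sha1(pcr + measurement_digest).digest()

# A PCR is reset to all zeroes when the platform starts.
pcr = b"\x00" * 20

# Each stage hashes the next piece of software and extends the PCR,
# so the final value summarises the whole ordered boot history.
for stage in [b"firmware image", b"boot loader", b"os kernel"]:
    pcr = extend(pcr, hashlib.sha1(stage).digest())

# Because extend is one-way, a rogue cannot compute an input that
# rewinds the register or steers it to a chosen "benign" value.
```

Note that replaying the same measurements in the same order is the only feasible way to reproduce a given final value, which is why the ordinary audit log of measurements can be checked against the PCR.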
Static and Dynamic Environments
There are two types of platform history: the history of the platform from switch-on is recorded using a Static RTM (SRTM) and static PCRs; the history of a curtained environment inside a computer is recorded from the creation of that environment using a Dynamic RTM (DRTM) and dynamic PCRs. The static method has the advantage that it can be tacked on to almost any existing type of computer architecture. The disadvantage of the static method is that static history can keep growing, becoming more and more complex, until the computer is eventually rebooted. The dynamic method has the advantage that the dynamic history can be reset without rebooting the computer, by restarting the curtained environment. The drawback of the dynamic method is that it requires specialist computer chips, such as AMD’s AMD-V and Intel’s TXT products, to create curtained environments.
PCR values are reported to convince a computer’s owner or a third party that the computer is in a desired state. If a computer is to report the values of PCRs, it must be possible to verify that the computer can be trusted to reliably report those PCRs. Trusted Computers can contain an encryption key called the Endorsement Key (EK) and a certificate for that key, signed by the manufacturer. These can be used to recognise a genuine Trusted Computer. An EK can be used by the computer’s owner to provide the computer with any quantity of cryptographic signing keys (computer identities) that can sign PCR values, to show that particular PCR values derive from a genuine Trusted Computer while, at the same time, maintaining privacy.
The two methods of obtaining computer identities are (1) via a trusted third party called a Privacy-Certification-Authority (Privacy-CA) and (2) via a zero-knowledge proof protocol called Direct Anonymous Attestation (DAA). The two methods can be combined. The Privacy-CA protocol is designed to provide evidence to the Privacy-CA that a signing key belongs to a TPM with a particular EK, but that evidence is (cryptographically) insufficient to convince a third party. This means that the Privacy-CA protocol maintains the status quo, where a CA must use existing methods to convince a third party that a key belongs to a particular computer. The basic Privacy-CA protocol allows the Privacy-CA to choose whether to correlate signing keys belonging to the same computer. This is probably the preferred method of operation when a Privacy-CA and the computers with signing keys belong to the same organisation, to support revocation of signing keys. If the Privacy-CA protocol is augmented with DAA, it is impossible for a Privacy-CA to correlate signing keys. If DAA is used on its own, it can be used to convince any arbitrary party that a particular signing key belongs to a genuine Trusted Computer.
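The reporting step itself can be illustrated with a toy quote protocol. Everything below is a hypothetical sketch: a real TPM signs with an RSA identity key certified as described above, whereas an HMAC under a shared key stands in for the signature here purely to keep the example self-contained.

```python
import hashlib
import hmac
import os

# Hypothetical identity key; in a real TPM this is an RSA private key
# that never leaves the chip, certified via a Privacy-CA or via DAA.
IDENTITY_KEY = b"identity-private-key"

def tpm_quote(pcr_values: list, nonce: bytes) -> bytes:
    # The TPM signs a digest of the current PCR values plus a
    # verifier-supplied nonce; the nonce prevents replay of old quotes.
    digest = hashlib.sha1(b"".join(pcr_values) + nonce).digest()
    return hmac.new(IDENTITY_KEY, digest, hashlib.sha1).digest()

def verify_quote(pcr_values: list, nonce: bytes, signature: bytes) -> bool:
    # A verifier recomputes the digest over PCR values it considers
    # acceptable and checks the signature against it.
    digest = hashlib.sha1(b"".join(pcr_values) + nonce).digest()
    expected = hmac.new(IDENTITY_KEY, digest, hashlib.sha1).digest()
    return hmac.compare_digest(expected, signature)

nonce = os.urandom(20)                                   # fresh challenge
pcrs = [hashlib.sha1(b"known-good boot chain").digest()]
signature = tpm_quote(pcrs, nonce)
```

A quote over unexpected PCR values, or one signed for a stale nonce, fails verification, so a third party learns both that the report is fresh and that it was produced by the identity key it trusts.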
PCR values are also used to ensure that cryptographic keys (and other small pieces of data) are revealed to the intended software environment. This method of protection is called `sealing data’. To achieve this, PCR values are incorporated into encrypted data blobs that can only be decrypted by a TPM. When the TPM decrypts a blob, it compares the PCR values inside the blob with the current PCR values. If they are the same, the TPM allows the data from the blob to be used. Otherwise, the TPM refuses to allow the data to be used.
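Sealing can be modelled with a short sketch. This is a hypothetical illustration, not the TPM’s actual seal/unseal commands: a real TPM encrypts the blob with a storage key that never leaves the chip, whereas here the binding between the secret and the expected PCR values is modelled with an HMAC over the blob.

```python
import hashlib
import hmac

# Hypothetical stand-in for the storage key held inside the TPM.
TPM_STORAGE_KEY = b"storage-root-key-inside-the-tpm"

def digest_pcrs(pcrs: dict) -> bytes:
    # Digest a selection of PCR indices and values in a canonical order.
    packed = b"".join(bytes([i]) + v for i, v in sorted(pcrs.items()))
    return hashlib.sha1(packed).digest()

def seal(secret: bytes, expected_pcrs: dict) -> dict:
    # Record the PCR values the data is locked to, and bind the secret
    # and the PCR digest together under the TPM's key.
    pcr_digest = digest_pcrs(expected_pcrs)
    tag = hmac.new(TPM_STORAGE_KEY, pcr_digest + secret, hashlib.sha1).digest()
    return {"secret": secret, "pcr_digest": pcr_digest, "tag": tag}

def unseal(blob: dict, current_pcrs: dict) -> bytes:
    tag = hmac.new(TPM_STORAGE_KEY, blob["pcr_digest"] + blob["secret"],
                   hashlib.sha1).digest()
    if not hmac.compare_digest(tag, blob["tag"]):
        raise ValueError("blob has been tampered with")
    if digest_pcrs(current_pcrs) != blob["pcr_digest"]:
        # Wrong software environment: the TPM refuses to release the data.
        raise ValueError("PCR mismatch")
    return blob["secret"]
```

A disk-encryption key sealed at installation time is thereafter released only when the same boot chain (and hence the same PCR values) is present, which is essentially how BitLocker uses the TPM.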
Background
Trusted Computing was originally developed by computer companies under the auspices of an organisation called the Trusted Computing Platform Alliance, started in 1999. The organisation reincorporated as the Trusted Computing Group in 2003, and now includes a wide range of companies with different levels of membership. The TCG is a not-for-profit pre-competitive organisation whose promoter members include AMD, Fujitsu, HP, IBM, Infineon, Intel, Lenovo, Microsoft, Sun, and Wave. The TCG specifications and resources cover a wide range, from the TPM chip itself to the TCG Software Stack (a software interface to the TPM), PCs, servers, mobile phones, trusted computing infrastructure, storage (encrypting Hard Disk Drives and optical drives), and networking equipment. There are no TCG specifications for hypervisors, OSs or applications. The TCG bylaws require all TCG members to license (to other TCG members) any intellectual property necessary to implement TCG specifications, under Reasonable And Non-Discriminatory (RAND) terms. A report on Intellectual Property relating to Trusted Computing was assembled by members of the EU’s Open-TC project.