Trustworthy computing

From Wikipedia, the free encyclopedia

The term Trustworthy Computing (TwC) has been applied to computing systems that are inherently secure, available and reliable. The Committee on Information Systems Trustworthiness’ publication, Trust in Cyberspace, defines such a system as one which

does what people expect it to do – and not something else – despite environmental disruption, human user and operator errors, and attacks by hostile parties. Design and implementation errors must be avoided, eliminated or somehow tolerated. It is not sufficient to address only some of these dimensions, nor is it sufficient simply to assemble components that are themselves trustworthy. Trustworthiness is holistic and multidimensional.

More recently, Microsoft has adopted the term Trustworthy Computing as the title of a company initiative to improve public trust in its own commercial offerings. It is intended largely to address concerns about the security and reliability of previous Microsoft Windows releases and, in part, to address general concerns about privacy and business practices. The initiative has changed the focus of many of Microsoft’s internal development efforts, but has been greeted with skepticism by some in the computer industry.

"Trusted" vs. "Trustworthy"

The terms Trustworthy Computing and Trusted Computing have distinct meanings. A given system can be trustworthy but not trusted and vice versa.[1]

The National Security Agency (NSA) defines a trusted system or component as one "whose failure can break the security policy", and a trustworthy system or component as one "that will not fail". Trusted Computing has been defined and outlined in a set of specifications and guidelines by the Trusted Computing Platform Alliance (TCPA); these include secure input and output, memory curtaining, sealed storage, and remote attestation. Trustworthy Computing, as stated above, aims to build consumer confidence in computers by making them more reliable, and thus more widely used and accepted.

History

Trustworthy computing is not a new concept. The 1960s saw an increasing dependence on computing systems by the military, the space program, financial institutions and public safety organizations. The computing industry began to identify deficiencies in existing systems and focus on areas that would address public concerns about reliance on automated systems.

In 1967, Allen-Babcock Computing identified four areas of trustworthiness that foreshadowed Microsoft’s four pillars. Its time-sharing business allowed multiple users from multiple businesses to coexist on the same computer, presenting many of the same vulnerabilities as modern networked information systems.

Allen-Babcock’s strategy for providing trustworthy computing concentrated on four areas:

  1. An ironclad operating system [reliability]
  2. Use of trustworthy personnel [~business integrity]
  3. Effective access control [security]
  4. User requested optional privacy [privacy]

A benchmark event occurred in 1989, when 53 government and industry organizations met at a workshop to assess the challenges involved in developing trustworthy critical computer systems; the workshop recommended the use of formal methods as a solution. Among the issues addressed was the need for improved software testing methods that would guarantee a high level of reliability on initial software release. The attendees further recommended programmer certification as a means of guaranteeing the quality and integrity of software.

In 1996, the National Research Council recognized that the rise of the Internet had simultaneously increased societal reliance on computer systems and the vulnerability of those systems to failure. The Committee on Information Systems Trustworthiness was convened, producing the report Trust in Cyberspace. The report reviews the benefits of trustworthy systems and the costs of untrustworthy systems, and identifies actions required for improvement. In particular, it cites operator errors, physical disruptions, design errors, and malicious software as items to be mitigated or eliminated, and it identifies encrypted authorization, fine-grained access control, and proactive monitoring as essential to a trustworthy system.

Microsoft launched its Trustworthy Computing initiative in 2002. The program was a direct response to the widespread Internet damage caused by the Code Red and Nimda worms in 2001. The initiative was announced in an all-employee email from Microsoft co-founder Bill Gates redirecting the company’s software development activities to include a “by design” view of security.

Microsoft and Trustworthy Computing

Microsoft CTO and Senior Vice President Craig Mundie authored a 2002 white paper defining the framework of the company’s Trustworthy Computing program. Four areas were identified as the initiative’s key “pillars”, and Microsoft has subsequently organized its efforts to align with these goals. The four pillars are:

  1. Security
  2. Privacy
  3. Reliability
  4. Business Integrity

Security

Microsoft’s first pillar of Trustworthy Computing is security. In Microsoft’s view, security has always been a part of computing but must now become a priority, and it extends beyond technology to include social aspects as well. This is outlined in the following three components:

  1. Technology Investment – Investing in the expertise and technology necessary to create a secure and trustworthy computing environment.
  2. Responsible Leadership – Microsoft highlights the responsibility that comes with being an industry leader. This includes working with law enforcement agencies, government experts, academia, and the private sector to create the partnerships necessary to promote and enforce secure computing.
  3. Customer Guidance and Engagement – It is important to develop trust by educating consumers with training and information on best practices for secure computing.

Privacy

For computing to become ubiquitous in connecting people and transmitting information over various networks and services, it is critical that information be protected and kept private. Microsoft has made privacy the second pillar of Trustworthy Computing and commits to making privacy a priority in the design, development, and testing of its products. To ensure this privacy, it is also important to contribute to standards and policies created by industry organizations and governments. Privacy policies must be honored and practiced across the industry.

Another essential element of privacy is giving users a sense of control over their personal information. This includes ongoing education, information, and notification of policies and procedures. In a world of spam, hackers, and unwanted pop-ups, computer users need tools and computing products that empower them, especially when it comes to protecting their personal information.

Reliability

Microsoft’s third pillar of Trustworthy Computing is reliability. Microsoft uses a fairly broad definition that encompasses all technical aspects related to availability, performance, and disruption recovery. Reliability is intended to be a measure not only of whether a system is working, but of whether it will continue working in non-optimal situations.

Six key attributes have been defined for a reliable system:

  1. Resilient. The system will continue to provide the user a service in the face of internal or external disruption.
  2. Recoverable. Following a user- or system-induced disruption, the system can be easily restored, through instrumentation and diagnosis, to a previously known state with no data loss.
  3. Controlled. Provides accurate and timely service whenever needed.
  4. Undisruptable. Required changes and upgrades do not disrupt the service being provided by the system.
  5. Production-ready. On release, the system contains minimal software bugs, requiring a limited number of predictable updates.
  6. Predictable. It works as expected or promised, and what worked before works now.

Business Integrity

Microsoft’s fourth pillar of Trustworthy Computing is business integrity. Many view this as a reaction by the technology firm to the accounting scandals of Enron, WorldCom, and others, but it also speaks to concerns regarding software developer integrity and responsiveness.

Microsoft identifies two major areas of concentration for business integrity. These are responsiveness: “The company accepts responsibility for problems, and takes action to correct them. Help is provided to customers in planning for, installing and operating the product”; and transparency: “The company is open in its dealings with customers. Its motives are clear, it keeps its word, and customers know where they stand in a transaction or interaction with the company.”

References

  1. ^ Irvine, Cynthia E., "What Might We Mean by 'Secure Code' and How Might We Teach What We Mean?", Proceedings of the Workshop on Secure Software Engineering Education and Training, Oahu, HI, April 2006. (PDF)