Consensus audit guidelines

From Wikipedia, the free encyclopedia

The Twenty Critical Security Controls for Effective Cyber Defense (commonly called the Consensus Audit Guidelines or CAG) is a publication of best practice guidelines for computer security. The project was initiated early in 2008 as a response to extreme data losses experienced by organizations in the US defense industrial base.[1] The publication can be found on the website of the SANS Institute.


The Consensus Audit Guidelines were compiled by a consortium of more than 100 contributors[2] from US government agencies, commercial forensics experts, and penetration testers.[3] Authors of the initial draft include members of:

  • US National Security Agency Red Team and Blue Team
  • US Department of Homeland Security, US-CERT
  • US DoD Computer Network Defense Architecture Group
  • US DoD Joint Task Force – Global Network Operations (JTF-GNO)
  • US DoD Defense Cyber Crime Center (DC3)
  • US Department of Energy Los Alamos National Laboratory, and three other national laboratories
  • US Department of State, Office of the CISO
  • US Air Force
  • US Army Research Laboratory
  • US Department of Transportation, Office of the CIO
  • US Department of Health and Human Services, Office of the CISO
  • US Government Accountability Office (GAO)
  • MITRE Corporation
  • The SANS Institute[1]


The Consensus Audit Guidelines consist of 20 key actions, called security controls, that organizations should take to block or mitigate known attacks. The controls are designed so that primarily automated means can be used to implement, enforce and monitor them.[4] The security controls give no-nonsense, actionable recommendations for cyber security, written in language that is easily understood by IT personnel.[5] Goals of the Consensus Audit Guidelines include:

  • Leveraging cyber offense to inform cyber defense, focusing on high-payoff areas,
  • Ensuring that security investments are focused on countering the highest threats,
  • Maximizing the use of automation to enforce security controls, thereby reducing human error, and
  • Using a consensus process to collect the best ideas.[6]
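The emphasis on automated enforcement can be illustrated with a minimal sketch of the first control (device inventory). The inventory contents and scan results below are hypothetical examples, not data from the guidelines; a real deployment would pull scan results from an asset-management database or a network scanning tool.

```python
# Hypothetical sketch: automated check for Critical Control 1
# (Inventory of Authorized and Unauthorized Devices).

AUTHORIZED_DEVICES = {           # example inventory (assumed data)
    "00:1a:2b:3c:4d:5e": "file-server",
    "00:1a:2b:3c:4d:5f": "workstation-01",
}

def find_unauthorized(scanned_macs):
    """Return MAC addresses seen on the network but absent from the inventory."""
    return sorted(set(scanned_macs) - set(AUTHORIZED_DEVICES))

# Simulated scan result containing one unknown device.
scan = ["00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"]
print(find_unauthorized(scan))   # -> ['de:ad:be:ef:00:01']
```

Because the check is a simple set difference over machine-readable data, it can run continuously and unattended, which is the property the guidelines' automation goal describes.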


Version 5.0 was released on February 2, 2014 by the Council on Cyber Security and consists of the following security controls.[7]

  • Critical Control 1: Inventory of Authorized and Unauthorized Devices
  • Critical Control 2: Inventory of Authorized and Unauthorized Software
  • Critical Control 3: Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers
  • Critical Control 4: Continuous Vulnerability Assessment and Remediation
  • Critical Control 5: Malware Defenses
  • Critical Control 6: Application Software Security
  • Critical Control 7: Wireless Access Control
  • Critical Control 8: Data Recovery Capability
  • Critical Control 9: Security Skills Assessment and Appropriate Training to Fill Gaps
  • Critical Control 10: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
  • Critical Control 11: Limitation and Control of Network Ports, Protocols, and Services
  • Critical Control 12: Controlled Use of Administrative Privileges
  • Critical Control 13: Boundary Defense
  • Critical Control 14: Maintenance, Monitoring, and Analysis of Audit Logs
  • Critical Control 15: Controlled Access Based on the Need to Know
  • Critical Control 16: Account Monitoring and Control
  • Critical Control 17: Data Protection
  • Critical Control 18: Incident Response and Management
  • Critical Control 19: Secure Network Engineering
  • Critical Control 20: Penetration Tests and Red Team Exercises
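Many of the controls above lend themselves to the same kind of automated auditing. As a second hedged sketch, Critical Control 11 (limitation of network ports, protocols, and services) can be checked by comparing observed listening ports against a policy allowlist; the allowlist and scan results here are assumed example values, not part of the guidelines.

```python
# Hypothetical sketch: audit for Critical Control 11
# (Limitation and Control of Network Ports, Protocols, and Services).

ALLOWED_PORTS = {22, 80, 443}    # ports approved by policy (assumed)

def audit_ports(listening_ports):
    """Return open ports that are not on the allowlist, sorted for reporting."""
    return sorted(p for p in set(listening_ports) if p not in ALLOWED_PORTS)

# Simulated port-scan result; 3306 (MySQL) is not on the allowlist.
observed = [22, 443, 3306]
print(audit_ports(observed))     # -> [3306]
```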

Notable results

Starting in 2009, the US Department of State began supplementing its risk scoring program in part using the Consensus Audit Guidelines. According to the Department's measurements, in the first year of site scoring using this approach the department reduced overall risk on its key unclassified network by nearly 90 percent in overseas sites, and by 89 percent in domestic sites.[8]


It costs money

US federal agencies do not have the budget both to meet the FISMA requirements and to attempt to implement the twenty critical controls.[1]

Designed to sell product

Some critics view the controls as "20 Pseudo-Critical Faux Controls for Technology Adoption", arguing that "it's clear that this list exists to push more product".[2]

Not metric-based

Critics also argue that the controls provide no method of measuring success.[3]


Licensing

SANS has recently placed the documents under a Creative Commons Attribution-NoDerivs 3.0 Unported (CC BY-ND 3.0) license, although the set of documents was created and supported by the community. The NoDerivatives restriction prevents developers of security controls for other systems and protocols from distributing derivative works that would further benefit the greater security community.


External links