A threat can be either "intentional" (i.e. hacking: an individual cracker or a criminal organization) or "accidental" (e.g. the possibility of a computer malfunctioning, or the possibility of a natural disaster such as an earthquake, a fire, or a tornado) or otherwise a circumstance, capability, action, or event.
- A potential cause of an incident that may result in harm to systems and organizations
A more comprehensive definition, tied to an information assurance point of view, can be found in "Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems" by NIST of the United States of America:
- Any circumstance or event with the potential to adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, or individuals through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service. Also, the potential for a threat-source to successfully exploit a particular information system vulnerability.
The National Information Assurance Glossary defines threat as:
- Any circumstance or event with the potential to adversely impact an IS through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.
- Any circumstance or event with the potential to adversely impact an asset [G.3] through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.
- Anything that is capable of acting in a manner resulting in harm to an asset and/or organization; for example, acts of God (weather, geological events, etc.); malicious actors; errors; failures.
- Threats are anything (e.g., object, substance, human, etc.) that are capable of acting against an asset in a manner that can result in harm. A tornado is a threat, as is a flood, as is a hacker. The key consideration is that threats apply the force (water, wind, exploit code, etc.) against an asset that can cause a loss event to occur.
National Information Assurance Training and Education Center gives a more articulated definition of threat:
- 1. The means through which the ability or intent of a threat agent to adversely affect an automated system, facility, or operation can be manifested. Threats are categorized and classified as follows:
  - Category: Human – classes: Intentional, Unintentional
  - Category: Environmental – classes: Natural, Fabricated
- 2. Any circumstance or event with the potential to cause harm to a system in the form of destruction, disclosure, modification of data, and/or denial of service.
- 3. Any circumstance or event with the potential to cause harm to the ADP system or activity in the form of destruction, disclosure, and modification of data, or denial of service. A threat is a potential for harm. The presence of a threat does not mean that it will necessarily cause actual harm. Threats exist because of the very existence of the system or activity and not because of any specific weakness. For example, the threat of fire exists at all facilities regardless of the amount of fire protection available.
- 4. Types of computer-system-related adverse events (i.e., perils) that may result in losses. Examples are flooding, sabotage, and fraud.
- 5. An assertion primarily concerning entities of the external environment (agents); we say that an agent (or class of agents) poses a threat to one or more assets; we write: T(e;i) where: e is an external entity; i is an internal entity or an empty set.
- 6. An undesirable occurrence that might be anticipated but is not the result of a conscious act or decision. In threat analysis, a threat is defined as an ordered pair, <peril; asset category>, suggesting the nature of these occurrences but not the details (details are specific to events).
- 7. A potential violation of security.
- 8. A set of properties of a specific external entity (which may be either an individual or class of entities) that, in union with a set of properties of a specific internal entity, implies a risk (according to some body of knowledge).
The term "threat" relates to some other basic security terms as shown in the following diagram:
```
+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
| An Attack:              |  |Counter- |  | A System Resource:   |
| i.e., A Threat Action   |  | measure |  | Target of the Attack |
| +----------+            |  |         |  | +-----------------+  |
| | Attacker |<==================||<=========                 |  |
| |   i.e.,  |   Passive  |  |         |  | |  Vulnerability  |  |
| | A Threat |<=================>||<========>                 |  |
| |  Agent   |  or Active |  |         |  | +-------|||-------+  |
| +----------+   Attack   |  |         |  |         VVV          |
|                         |  |         |  | Threat Consequences  |
+ - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
```
A resource (either physical or logical) can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise the confidentiality, integrity or availability of resources (potentially different from the vulnerable one) of the organization and of other involved parties (customers, suppliers).
The so-called CIA triad is the basis of information security.
An attack can be active when it attempts to alter system resources or affect their operation, compromising Integrity or Availability. A "passive attack" attempts to learn or make use of information from the system but does not affect system resources, compromising Confidentiality.
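As a toy illustration, the active/passive distinction maps onto the CIA properties like this (the mapping keys and function names are illustrative, not from any standard):

```python
# Toy sketch: which CIA properties each kind of attack compromises,
# per the active/passive distinction above. Names are illustrative.
ATTACK_PROPERTIES = {
    "active": {"integrity", "availability"},   # alters resources or operation
    "passive": {"confidentiality"},            # only learns or uses information
}

def compromised_properties(attack_kind):
    """Return the set of CIA properties threatened by an attack of this kind."""
    return ATTACK_PROPERTIES.get(attack_kind, set())
```

For example, `compromised_properties("passive")` yields only confidentiality, matching the definition above.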
OWASP depicts the same phenomenon in slightly different terms: a threat agent, through an attack vector, exploits a weakness (vulnerability) of the system and the related security controls, causing a technical impact on an IT resource (asset) connected to a business impact.
A set of policies concerned with information security management, the information security management system (ISMS), has been developed to manage, according to risk management principles, the countermeasures needed to fulfil a security strategy set up following the rules and regulations applicable in a country. Countermeasures are also called security controls; when applied to the transmission of information they are called security services.
The increasing dependence on computers, and the consequent rise in the impact of a successful attack, led to the new term cyberwarfare.
Nowadays many real attacks exploit psychology at least as much as technology. Phishing, pretexting, and other methods are called social engineering techniques. Web 2.0 applications, specifically social network services, can be a means to get in touch with people in charge of system administration or even system security, inducing them to reveal sensitive information. One famous case is Robin Sage.
The most widespread documentation on computer insecurity is about technical threats such as computer viruses, trojans and other malware, but a serious study to apply cost-effective countermeasures can only be conducted following a rigorous IT risk analysis in the framework of an ISMS: a purely technical approach will leave out psychological attacks, which are an increasing threat.
Threats can be classified according to their type and origin:
- Types of threats:
  - Physical damage: fire, water, pollution
  - Natural events: climatic, seismic, volcanic
  - Loss of essential services: electrical power, air conditioning, telecommunication
  - Compromise of information: eavesdropping, theft of media, retrieval of discarded materials
  - Technical failures: equipment, software, capacity saturation
  - Compromise of functions: error in use, abuse of rights, denial of actions
Note that a threat type can have multiple origins.
- Origin of threats:
  - Deliberate: aiming at information assets
    - illegal processing of data
  - Accidental
    - equipment failure
    - software failure
  - Environmental
    - natural event
    - loss of power supply
  - Negligence: known but neglected factors, compromising network safety and sustainability
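The two-axis classification above (type and origin) can be captured as a small record, e.g. for a risk register. This is a minimal sketch; the field names and string labels are assumptions, not from any standard:

```python
from dataclasses import dataclass

# Illustrative encoding of the type/origin classification above.
TYPES = {
    "physical damage", "natural event", "loss of essential services",
    "compromise of information", "technical failure", "compromise of functions",
}
ORIGINS = {"deliberate", "accidental", "environmental", "negligence"}

@dataclass(frozen=True)
class Threat:
    name: str
    threat_type: str    # one of TYPES
    origins: frozenset  # a threat type can have multiple origins

    def __post_init__(self):
        assert self.threat_type in TYPES, self.threat_type
        assert self.origins <= ORIGINS, self.origins

# Fire is physical damage that can arise from several origins at once.
fire = Threat("fire", "physical damage",
              frozenset({"deliberate", "accidental", "environmental"}))
```

Modelling `origins` as a set reflects the note above that a threat type can have multiple origins.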
People can be interested in studying all possible threats that can:
- affect an asset
- affect a software system
- be brought by a threat agent
Microsoft proposed a threat classification called STRIDE, from the initials of its threat categories:
- Spoofing of user identity
- Tampering
- Repudiation
- Information disclosure (privacy breach or data leak)
- Denial of service (DoS)
- Elevation of privilege
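These categories come from Microsoft's STRIDE classification and can be encoded as an enum; the initials of the six categories, in order, spell the model's name:

```python
from enum import Enum

class Stride(Enum):
    """Microsoft's STRIDE threat categories."""
    SPOOFING = "Spoofing of user identity"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"

# The acronym is simply the first letter of each category, in order.
ACRONYM = "".join(member.value[0] for member in Stride)  # "STRIDE"
```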
Microsoft previously rated the risk of security threats using five categories in a classification called DREAD: Risk assessment model. Microsoft now considers the model obsolete. The categories were:
- Damage – how bad would an attack be?
- Reproducibility – how easy is it to reproduce the attack?
- Exploitability – how much work is it to launch the attack?
- Affected users – how many people will be impacted?
- Discoverability – how easy is it to discover the threat?
The DREAD name comes from the initials of the five categories listed.
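A common convention, sketched below, averaged five 0–10 ratings into a single score; the exact scale varied between teams and the model is retired, so treat this as illustrative rather than Microsoft's definitive formula:

```python
# Sketch of a DREAD-style rating: average five category scores (assumed 0-10).
CATEGORIES = ("damage", "reproducibility", "exploitability",
              "affected_users", "discoverability")

def dread_score(ratings):
    """Combine the five DREAD category ratings into one average score."""
    assert set(ratings) == set(CATEGORIES), "rate every category exactly once"
    assert all(0 <= v <= 10 for v in ratings.values()), "ratings are 0-10"
    return sum(ratings.values()) / len(CATEGORIES)

# Example: a highly damaging, easily reproduced and discovered attack.
score = dread_score({"damage": 8, "reproducibility": 10, "exploitability": 7,
                     "affected_users": 10, "discoverability": 10})  # 9.0
```

One criticism that led to the model's retirement was that such ratings are subjective; the averaging above makes that subjectivity easy to see.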
The spread of threats over a network can lead to dangerous situations. In the military and civil fields, threat levels have been defined: for example, INFOCON is a threat level used by the US. Leading antivirus software vendors publish global threat levels on their websites.
Threat agents or actors
The term Threat Agent is used to indicate an individual or group that can manifest a threat. It is fundamental to identify who would want to exploit the assets of a company, and how they might use them against the company.
Individuals within a threat population: practically anyone and anything can, under the right circumstances, be a threat agent – the well-intentioned but inept computer operator who trashes a daily batch job by typing the wrong command, the regulator performing an audit, or the squirrel that chews through a data cable.
Threat agents can take one or more of the following actions against an asset:
- Access – simple unauthorized access
- Misuse – unauthorized use of assets (e.g., identity theft, setting up a porn distribution service on a compromised server, etc.)
- Disclose – the threat agent illicitly discloses sensitive information
- Modify – unauthorized changes to an asset
- Deny access – includes destruction, theft of a non-data asset, etc.
It’s important to recognize that each of these actions affects different assets differently, which drives the degree and nature of loss. For example, the potential for productivity loss resulting from a destroyed or stolen asset depends upon how critical that asset is to the organization’s productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss. Similarly, the destruction of a highly sensitive asset that doesn’t play a critical role in productivity won’t directly result in a significant productivity loss. Yet that same asset, if disclosed, can result in significant loss of competitive advantage or reputation, and generate legal costs. The point is that it’s the combination of the asset and type of action against the asset that determines the fundamental nature and degree of loss. Which action(s) a threat agent takes will be driven primarily by that agent’s motive (e.g., financial gain, revenge, recreation, etc.) and the nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a critical server than they are to steal an easily pawned asset like a laptop.
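The interplay described above, where the combination of asset and action determines the nature of loss, can be sketched as a lookup. The asset attributes and loss labels are illustrative assumptions, not a standard taxonomy:

```python
# Sketch: loss forms depend on the (action, asset) combination, as argued above.
# Attribute names and loss labels are illustrative assumptions.
def loss_forms(action, asset):
    """Return the kinds of loss a given action against a given asset can drive."""
    losses = set()
    # Destruction/theft/modification only hurts productivity if the asset matters to it.
    if action in ("modify", "deny access") and asset.get("critical_to_productivity"):
        losses.add("productivity")
    # Illicit access/disclosure of sensitive assets drives other loss forms.
    if action in ("access", "disclose") and asset.get("sensitive"):
        losses.update({"competitive advantage", "reputation", "legal costs"})
    return losses

# A highly sensitive asset that plays no critical role in productivity:
# disclosure is costly, destruction is not.
asset = {"sensitive": True, "critical_to_productivity": False}
```

Running `loss_forms("deny access", asset)` yields no loss forms, while `loss_forms("disclose", asset)` yields several, mirroring the example in the text.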
It is important to separate the concept of the event in which a threat agent gets in contact with the asset (even virtually, i.e. through the network) from the event in which a threat agent acts against the asset.
OWASP collects a list of potential threat agents in order to prevent system designers and programmers from inserting vulnerabilities into software.
Threat Agent = Capabilities + Intentions + Past Activities
These individuals and groups can be classified as follows:
- Non-Target Specific: Non-Target Specific Threat Agents are computer viruses, worms, trojans and logic bombs.
- Employees: Staff, contractors, operational/maintenance personnel, or security guards who are annoyed with the company.
- Organized Crime and Criminals: Criminals target information that is of value to them, such as bank accounts, credit cards or intellectual property that can be converted into money. Criminals will often make use of insiders to help them.
- Corporations: Corporations are engaged in offensive information warfare or competitive intelligence. Partners and competitors come under this category.
- Human, Unintentional: Accidents, carelessness.
- Human, Intentional: Insider, outsider.
- Natural: Flood, fire, lightning, meteor, earthquakes.
Threat sources are those who wish a compromise to occur. It is a term used to distinguish them from threat agents/actors who are those who actually carry out the attack and who may be commissioned or persuaded by the threat source to knowingly or unknowingly carry out the attack.
- Threat communities
- Subsets of the overall threat agent population that share key characteristics. The notion of threat communities is a powerful tool for understanding who and what we’re up against as we try to manage risk. For example, the probability that an organization would be subject to an attack from the terrorist threat community would depend in large part on the characteristics of your organization relative to the motives, intents, and capabilities of the terrorists. Is the organization closely affiliated with ideology that conflicts with known, active terrorist groups? Does the organization represent a high profile, high impact target? Is the organization a soft target? How does the organization compare with other potential targets? If the organization were to come under attack, what components of the organization would be likely targets? For example, how likely is it that terrorists would target the company information or systems?
- The following threat communities are examples of the human malicious threat landscape many organizations face:
- Contractors (and vendors)
- Cyber-criminals (professional hackers)
- Non-professional hackers
- Nation-state intelligence services (e.g., counterparts to the CIA, etc.)
- Malware (virus/worm/etc.) authors
Various kinds of threat actions are defined as subentries under "threat consequence".
Threat consequence is a security violation that results from a threat action.
Includes disclosure, deception, disruption, and usurpation.
The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence. Threat actions that are accidental events are marked by "*".
- "Unauthorized disclosure" (a threat consequence)
- A circumstance or event whereby an entity gains access to data for which the entity is not authorized. (See: data confidentiality.) The following threat actions can cause unauthorized disclosure:
- "Exposure"
- A threat action whereby sensitive data is directly released to an unauthorized entity. This includes:
- "Deliberate Exposure"
- Intentional release of sensitive data to an unauthorized entity.
- "Scavenging"
- Searching through data residue in a system to gain unauthorized knowledge of sensitive data.
- * "Human error"
- Human action or inaction that unintentionally results in an entity gaining unauthorized knowledge of sensitive data.
- * "Hardware/software error"
- System failure that results in an entity gaining unauthorized knowledge of sensitive data.
- "Interception"
- A threat action whereby an unauthorized entity directly accesses sensitive data travelling between authorized sources and destinations. This includes:
- "Theft"
- Gaining access to sensitive data by stealing a shipment of a physical medium, such as a magnetic tape or disk, that holds the data.
- "Wiretapping (passive)"
- Monitoring and recording data that is flowing between two points in a communication system. (See: wiretapping.)
- "Emanations analysis"
- Gaining direct knowledge of communicated data by monitoring and resolving a signal that is emitted by a system and that contains the data but is not intended to communicate the data. (See: Emanation.)
- "Inference"
- A threat action whereby an unauthorized entity indirectly accesses sensitive data (but not necessarily the data contained in the communication) by reasoning from characteristics or byproducts of communications. This includes:
- "Traffic analysis"
- Gaining knowledge of data by observing the characteristics of communications that carry the data.
- "Signals analysis"
- Gaining indirect knowledge of communicated data by monitoring and analyzing a signal that is emitted by a system and that contains the data but is not intended to communicate the data. (See: Emanation.)
- "Intrusion"
- A threat action whereby an unauthorized entity gains access to sensitive data by circumventing a system's security protections. This includes:
- "Trespass"
- Gaining unauthorized physical access to sensitive data by circumventing a system's protections.
- "Penetration"
- Gaining unauthorized logical access to sensitive data by circumventing a system's protections.
- "Reverse engineering"
- Acquiring sensitive data by disassembling and analyzing the design of a system component.
- "Cryptanalysis"
- Transforming encrypted data into plain text without having prior knowledge of encryption parameters or processes.
- "Deception" (a threat consequence)
- A circumstance or event that may result in an authorized entity receiving false data and believing it to be true. The following threat actions can cause deception:
- "Masquerade"
- A threat action whereby an unauthorized entity gains access to a system or performs a malicious act by posing as an authorized entity.
- "Spoof"
- Attempt by an unauthorized entity to gain access to a system by posing as an authorized user.
- "Malicious logic"
- In context of masquerade, any hardware, firmware, or software (e.g., Trojan horse) that appears to perform a useful or desirable function, but actually gains unauthorized access to system resources or tricks a user into executing other malicious logic.
- "Falsification"
- A threat action whereby false data deceives an authorized entity. (See: active wiretapping.)
- "Repudiation"
- A threat action whereby an entity deceives another by falsely denying responsibility for an act.
- "False denial of origin"
- Action whereby the originator of data denies responsibility for its generation.
- "False denial of receipt"
- Action whereby the recipient of data denies receiving and possessing the data.
- "Disruption" (a threat consequence)
- A circumstance or event that interrupts or prevents the correct operation of system services and functions. (See: denial of service.) The following threat actions can cause disruption:
- "Incapacitation"
- A threat action that prevents or interrupts system operation by disabling a system component.
- "Malicious logic"
- In context of incapacitation, any hardware, firmware, or software (e.g., logic bomb) intentionally introduced into a system to destroy system functions or resources.
- "Physical destruction"
- Deliberate destruction of a system component to interrupt or prevent system operation.
- * "Human error"
- Action or inaction that unintentionally disables a system component.
- * "Hardware or software error"
- Error that causes failure of a system component and leads to disruption of system operation.
- * "Natural disaster"
- Any natural disaster (e.g., fire, flood, earthquake, lightning, or wind) that disables a system component.
- "Corruption"
- A threat action that undesirably alters system operation by adversely modifying system functions or data.
- "Tamper"
- In context of corruption, deliberate alteration of a system's logic, data, or control information to interrupt or prevent correct operation of system functions.
- "Malicious logic"
- In context of corruption, any hardware, firmware, or software (e.g., a computer virus) intentionally introduced into a system to modify system functions or data.
- * "Human error"
- Human action or inaction that unintentionally results in the alteration of system functions or data.
- * "Hardware or software error"
- Error that results in the alteration of system functions or data.
- * "Natural disaster"
- Any natural event (e.g. power surge caused by lightning) that alters system functions or data.
- "Obstruction"
- A threat action that interrupts delivery of system services by hindering system operations.
- "Usurpation" (a threat consequence)
- A circumstance or event that results in control of system services or functions by an unauthorized entity. The following threat actions can cause usurpation:
- "Misappropriation"
- A threat action whereby an entity assumes unauthorized logical or physical control of a system resource.
- "Theft of service"
- Unauthorized use of service by an entity.
- "Theft of functionality"
- Unauthorized acquisition of actual hardware, software, or firmware of a system component.
- "Theft of data"
- Unauthorized acquisition and use of data.
- "Misuse"
- A threat action that causes a system component to perform a function or service that is detrimental to system security.
- "Tamper"
- In context of misuse, deliberate alteration of a system's logic, data, or control information to cause the system to perform unauthorized functions or services.
- "Malicious logic"
- In context of misuse, any hardware, software, or firmware intentionally introduced into a system to perform or control execution of an unauthorized function or service.
- "Violation of permissions"
- Action by an entity that exceeds the entity's system privileges by executing an unauthorized function.
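The taxonomy of consequences and threat actions above can be held as a nested mapping, with the accidental actions (the "*" entries) flagged. This abbreviated sketch keeps only a few actions per consequence; the data layout is an assumption for illustration:

```python
# Sketch: an abbreviated slice of the threat-consequence taxonomy above.
# True marks threat actions that can be accidental (the "*" entries).
CONSEQUENCES = {
    "unauthorized disclosure": {"deliberate exposure": False, "human error": True,
                                "hardware/software error": True,
                                "wiretapping (passive)": False},
    "deception": {"malicious logic": False, "false denial of origin": False},
    "disruption": {"physical destruction": False, "human error": True,
                   "natural disaster": True},
    "usurpation": {"theft of service": False, "violation of permissions": False},
}

def accidental_actions(consequence):
    """Threat actions under a consequence that can occur accidentally."""
    return {action for action, accidental in CONSEQUENCES[consequence].items()
            if accidental}
```

Such a structure makes it easy to filter a threat catalogue, e.g. to list only the consequences a purely accidental event could cause.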
Threat landscape or environment
Threats should be managed by operating an ISMS, performing all the IT risk management activities provided for by laws, standards and methodologies.
Very large organizations tend to adopt business continuity management plans in order to protect, maintain and recover business-critical processes and systems. Some of these plans provide for setting up a computer security incident response team (CSIRT) or computer emergency response team (CERT).
There are several kinds of verification of the threat management process.
Most organizations perform a subset of these steps, adopting countermeasures based on a non-systematic approach: computer insecurity studies the battlefield of computer security exploits and defences that results.
Information security awareness is a significant market (see category:Computer security companies). A lot of software has been developed to deal with IT threats, including both open-source software (see category:free security software) and proprietary software (see category:computer security software companies for a partial list).
Cyber Threat Management
Threat management involves a wide variety of threats, including physical threats like flood and fire. While the ISMS risk assessment process incorporates threat management for cyber threats such as remote buffer overflows, it does not include processes such as threat intelligence management or response procedures.
Cyber Threat Management (CTM) is emerging as the best practice for managing cyber threats beyond the basic risk assessment found in an ISMS. It enables early identification of threats, data-driven situational awareness, accurate decision-making, and timely threat-mitigating actions. CTM includes:
- Manual and automated intelligence gathering and threat analytics
- Comprehensive methodology for real-time monitoring including advanced techniques such as behavioral modeling
- Use of advanced analytics to optimize intelligence, generate security intelligence, and provide Situational Awareness
- Technology and skilled people leveraging situational awareness to enable rapid decisions and automated or manual actions
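The "behavioral modeling" element above can be illustrated by a minimal baseline-and-deviation check (a toy stand-in for real behavioral analytics, with an assumed metric and threshold):

```python
import statistics

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the historical mean (a toy stand-in for behavioral modeling)."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return observation != mean
    return abs(observation - mean) / stdev > threshold

# Assumed metric: login attempts per hour for one account.
logins_per_hour = [4, 5, 6, 5, 4, 6, 5, 5]
```

A burst of 40 logins in an hour would stand far outside this baseline and be flagged, while another hour of 5 would not; production systems model many such signals jointly rather than one at a time.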
Cyber threat hunting is "the process of proactively and iteratively searching through networks to detect and isolate advanced threats that evade existing security solutions." This is in contrast to traditional threat management measures, such as firewalls, intrusion detection systems, and SIEMs, which typically involve an investigation after there has been a warning of a potential threat or an incident has occurred.
Threat hunting can be a manual process, in which a security analyst sifts through various data using their own knowledge and familiarity with the network to create hypotheses about potential threats. To be even more effective and efficient, however, threat hunting can be partially automated, or machine-assisted, as well. In this case, the analyst utilizes software that harnesses machine learning and user and entity behavior analytics (UEBA) to inform the analyst of potential risks. The analyst then investigates these potential risks, tracking suspicious behavior in the network. Thus hunting is an iterative process, meaning that it must be continuously carried out in a loop, beginning with a hypothesis. There are three types of hypotheses:
- Analytics-Driven: "Machine-learning and UEBA, used to develop aggregated risk scores that can also serve as hunting hypotheses"
- Situational-Awareness Driven: "Crown Jewel analysis, enterprise risk assessments, company- or employee-level trends"
- Intelligence-Driven: "Threat intelligence reports, threat intelligence feeds, malware analysis, vulnerability scans"
The analyst researches their hypothesis by going through vast amounts of data about the network. The results are then stored so that they can be used to improve the automated portion of the detection system and to serve as a foundation for future hypotheses.
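The iterative loop described above (form hypotheses, investigate, store the results so they feed future automation and hypotheses) can be sketched as follows; the hypothesis strings and the stand-in investigation function are illustrative assumptions:

```python
# Sketch of the iterative threat-hunting loop described above.
def hunt(hypotheses, investigate, knowledge_base):
    """Run one pass of the loop: test each hypothesis against the data and
    store every result so it can seed detection rules and future hypotheses."""
    findings = []
    for hypothesis in hypotheses:
        result = investigate(hypothesis)
        knowledge_base.append((hypothesis, result))  # feeds future automation
        if result:
            findings.append(hypothesis)
    return findings

kb = []
suspicious = hunt(
    ["analytics: high UEBA risk score on host-17",       # analytics-driven
     "intelligence: feed IOC seen in proxy logs"],       # intelligence-driven
    investigate=lambda h: "host-17" in h,  # stand-in for the analyst's analysis
    knowledge_base=kb,
)
```

Note that both confirmed and refuted hypotheses land in the knowledge base, matching the point above that stored results improve the automated portion of detection.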
The SANS Institute has conducted research and surveys on the effectiveness of threat hunting to track and disrupt cyber adversaries as early in their process as possible. According to a survey released in 2016, "adopters of this model reported positive results, with 74 percent citing reduced attack surfaces, 59 percent experiencing faster speed and accuracy of responses, and 52 percent finding previously undetected threats in their networks."
- Internet Engineering Task Force RFC 2828 Internet Security Glossary
- ISO/IEC, "Information technology — Security techniques — Information security risk management" ISO/IEC FDIS 27005:2008
- "Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems" (PDF). Carc.nist.gov. Retrieved 2013-11-05.
- "Glossary — ENISA". Enisa.europa.eu. 2009-07-24. Retrieved 2013-11-05.
- Technical Standard Risk Taxonomy ISBN 1-931624-77-1 Document Number: C081 Published by The Open Group, January 2009.
- "An Introduction to Factor Analysis of Information Risk (FAIR)" (PDF). Riskmanagementinsight.com. November 2006. Archived from the original (PDF) on 18 November 2014. Retrieved 5 November 2013.
- Schou, Corey (1996). Handbook of INFOSEC Terms, Version 2.0. CD-ROM (Idaho State University & Information Systems Security Organization)
- "Glossary of Terms". Niatec.info. 2011-12-12. Retrieved 2012-02-13.
- Wright, Joe; Jim Harmening (2009). "15". In Vacca, John. Computer and Information Security Handbook. Morgan Kaufmann Publications. Elsevier Inc. p. 257. ISBN 978-0-12-374354-1.
- "ISACA THE RISK IT FRAMEWORK" (PDF). Isaca.org. Retrieved 2013-11-05. (registration required)
- Security engineering:a guide to building dependable distributed systems, second edition, Ross Anderson, Wiley, 2008 - 1040 pages ISBN 978-0-470-06852-6, Chapter 2, page 17
- Brian Prince (2009-04-07). "Using Facebook to Social Engineer Your Way Around Security". Eweek.com. Retrieved 2013-11-05.
- "Social engineering via Social networking". Networkworld.com. Retrieved 2012-02-13.
- "The STRIDE Threat Model". msdn.microsoft.com. Retrieved 2017-03-28.
- "McAfee Threat Intelligence | McAfee, Inc". Mcafee.com. Retrieved 2012-02-13.
- "Threatcon - Symantec Corp". Symantec.com. 2012-01-10. Retrieved 2012-02-13.
- "Category:Threat Agent". OWASP. 2011-12-09. Retrieved 2012-02-13.
- HMG IA Standard No. 1 Technical Risk Assessment
- "FIPS PUB 31 FEDERAL INFORMATION PROCESSING STANDARDS PUBLICATION : JUNE 1974" (PDF). Tricare.mil. Retrieved 2013-11-05.[permanent dead link]
- ENISA Threat Landscape and Good Practice Guide for Smart Home and Converged Media (Dec. 1, 2014)
- ENISA Threat Landscape 2013–Overview of Current and Emerging Cyber-Threats (Dec. 11, 2013)
- "What is Cyber Threat Management". www.ioctm.org. Retrieved 2015-01-28.
- "Cyber threat hunting: How this vulnerability detection strategy gives analysts an edge - TechRepublic". TechRepublic. Retrieved 2016-06-07.
- "Cyber Threat Hunting - Sqrrl". Sqrrl. Retrieved 2016-06-07.
- "Threat hunting technique helps fend off cyber attacks". BetaNews. 2016-04-14. Retrieved 2016-06-07.