Threat (computer security)

From Wikipedia, the free encyclopedia

In computer security, a threat is a potential violation of security, which exists when there is a circumstance, capability, action, or event that could breach security and cause harm.
That is, a threat is a possible danger that might exploit a vulnerability. A threat can be either "intentional" (i.e., intelligent; e.g., an individual cracker or a criminal organization) or "accidental" (e.g., the possibility of a computer malfunctioning, or the possibility of an "act of God" such as an earthquake, a fire, or a tornado). This definition follows IETF RFC 2828.[1]

ISO 27005 defines threat as:[2]

A potential cause of an incident that may result in harm to systems and organizations

A more comprehensive definition, tied to an information assurance point of view, can be found in "Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems" by NIST of the United States of America:[3]

Any circumstance or event with the potential to adversely impact organizational operations (including mission, functions, image, or reputation), organizational assets, or individuals through an information system via unauthorized access, destruction, disclosure, modification of information, and/or denial of service. Also, the potential for a threat-source to successfully exploit a particular information system vulnerability.

ENISA defines threat as:[4]

Any circumstance or event with the potential to adversely impact an asset [G.3] through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.

The Open Group defines threat as:[5]

Anything that is capable of acting in a manner resulting in harm to an asset and/or organization; for example, acts of God (weather, geological events, etc.); malicious actors; errors; failures.

Factor Analysis of Information Risk defines threat as:[6]

threats are anything (e.g., object, substance, human, etc.) that are capable of acting against an asset in a manner that can result in harm. A tornado is a threat, as is a flood, as is a hacker. The key consideration is that threats apply the force (water, wind, exploit code, etc.) against an asset that can cause a loss event to occur.


Phenomenology

The term "threat" relates to some other basic security terms as shown in the following diagram:[1]

      + - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+
      | An Attack:              |  |Counter- |  | A System Resource:   |
      | i.e., A Threat Action   |  | measure |  | Target of the Attack |
      | +----------+            |  |         |  | +-----------------+  |
      | | Attacker |<==================||<=========                 |  |
      | |   i.e.,  |   Passive  |  |         |  | |  Vulnerability  |  |
      | | A Threat |<=================>||<========>                 |  |
      | |  Agent   |  or Active |  |         |  | +-------|||-------+  |
      | +----------+   Attack   |  |         |  |         VVV          |
      |                         |  |         |  | Threat Consequences  |
      + - - - - - - - - - - - - +  + - - - - +  + - - - - - - - - - - -+

A resource (either physical or logical) can have one or more vulnerabilities that can be exploited by a threat agent in a threat action. The result can potentially compromise the confidentiality, integrity or availability properties of resources (potentially different from the vulnerable one) belonging to the organization and other involved parties (customers, suppliers).
The so-called CIA triad is the basis of information security.

An attack is active when it attempts to alter system resources or affect their operation, and so compromises integrity or availability. A passive attack attempts to learn or make use of information from the system but does not affect system resources, and so compromises confidentiality.[1]

A set of policies concerned with information security management, the information security management system (ISMS), has been developed to manage, according to risk management principles, the countermeasures needed to accomplish the security strategy set up following the rules and regulations applicable in a country.[7]

Threat classification

Threats can be classified according to their type and origin:[2]

  • Type
    • physical damage
      • fire
      • water
      • pollution
    • natural events
      • climatic
      • seismic
      • volcanic
    • loss of essential services
      • electrical power
      • air conditioning
      • telecommunication
    • compromise of information
      • eavesdropping
      • theft of media
      • retrieval of discarded materials
    • technical failures
      • equipment
      • software
      • capacity saturation
    • compromise of functions
      • error in use
      • abuse of rights
      • denial of actions
  • Origin
    • deliberate: aiming at information assets
      • spying
      • illegal processing of data
    • accidental
      • equipment failure
      • software failure
    • environmental
      • natural event
      • loss of power supply

Note that a threat type can have multiple origins.
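
The classification above is, in practice, the kind of taxonomy a threat-modelling or risk-assessment tool needs in machine-readable form. The sketch below is purely illustrative (the names and structure are assumptions of this example, not normative ISO 27005 content); it encodes the type and origin lists shown above and answers the simple question of which type categories a given threat falls under.

  # Illustrative sketch (Python): the threat classification above as plain data.
  # Category and threat names follow the list in this article; nothing here is
  # normative ISO 27005 content.
  THREAT_TYPES = {
      "physical damage": ["fire", "water", "pollution"],
      "natural events": ["climatic", "seismic", "volcanic"],
      "loss of essential services": ["electrical power", "air conditioning",
                                     "telecommunication"],
      "compromise of information": ["eavesdropping", "theft of media",
                                    "retrieval of discarded materials"],
      "technical failures": ["equipment", "software", "capacity saturation"],
      "compromise of functions": ["error in use", "abuse of rights",
                                  "denial of actions"],
  }

  THREAT_ORIGINS = {
      "deliberate": ["spying", "illegal processing of data"],
      "accidental": ["equipment failure", "software failure"],
      "environmental": ["natural event", "loss of power supply"],
  }

  def type_categories(threat):
      """Return every type category whose example list contains the threat.

      A threat type can have multiple origins, so a real tool would track
      (type, origin) pairs for each identified threat."""
      return [category for category, examples in THREAT_TYPES.items()
              if threat in examples]

  print(type_categories("fire"))  # ['physical damage']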


Associated terms

Threat Agents

Individuals within a threat population. Practically anyone and anything can, under the right circumstances, be a threat agent: the well-intentioned but inept computer operator who trashes a daily batch job by typing the wrong command, the regulator performing an audit, or the squirrel that chews through a data cable.[6]

Threat agents can take one or more of the following actions against an asset:[6]

  • Access – simple unauthorized access
  • Misuse – unauthorized use of assets (e.g., identity theft, setting up a porn distribution service on a compromised server, etc.)
  • Disclose – the threat agent illicitly discloses sensitive information
  • Modify – unauthorized changes to an asset
  • Deny access – includes destruction, theft of a non-data asset, etc.

It’s important to recognize that each of these actions affects different assets differently, which drives the degree and nature of loss. For example, the potential for productivity loss resulting from a destroyed or stolen asset depends upon how critical that asset is to the organization’s productivity. If a critical asset is simply illicitly accessed, there is no direct productivity loss. Similarly, the destruction of a highly sensitive asset that doesn’t play a critical role in productivity won’t directly result in a significant productivity loss. Yet that same asset, if disclosed, can result in significant loss of competitive advantage or reputation, and generate legal costs. The point is that it’s the combination of the asset and type of action against the asset that determines the fundamental nature and degree of loss. Which action(s) a threat agent takes will be driven primarily by that agent’s motive (e.g., financial gain, revenge, recreation, etc.) and the nature of the asset. For example, a threat agent bent on financial gain is less likely to destroy a critical server than they are to steal an easily pawned asset like a laptop.

It is important to separate the concept of the event in which a threat agent comes into contact with the asset (even virtually, i.e. through the network) from the event in which a threat agent acts against the asset.
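
As a rough illustration of the reasoning above, the way the combination of action and asset drives the nature of loss can be sketched as a simple lookup. The asset names, actions and loss categories below are assumptions of this toy example and are not taken from FAIR.

  # Toy sketch (Python) of "the combination of asset and action determines the
  # nature and degree of loss". All assets, flags and loss labels are
  # illustrative assumptions, not FAIR definitions.
  LOSS_BY_ACTION = {
      "access":      ["response cost"],                 # no direct productivity loss
      "misuse":      ["response cost", "reputation"],
      "disclose":    ["competitive advantage", "reputation", "legal cost"],
      "modify":      ["productivity", "response cost"],
      "deny access": ["productivity", "replacement cost"],
  }

  ASSETS = {
      # asset name:        critical to productivity?   holds sensitive data?
      "critical server":   {"critical": True,  "sensitive": False},
      "customer database": {"critical": False, "sensitive": True},
      "laptop":            {"critical": False, "sensitive": True},
  }

  def likely_loss_forms(asset, action):
      """Which forms of loss the (asset, action) pair plausibly drives."""
      forms = list(LOSS_BY_ACTION[action])
      props = ASSETS[asset]
      if action in ("modify", "deny access") and not props["critical"]:
          forms.remove("productivity")   # non-critical asset: little productivity loss
      if action == "disclose" and not props["sensitive"]:
          forms = ["response cost"]      # nothing sensitive to disclose
      return forms

  print(likely_loss_forms("customer database", "disclose"))  # competitive advantage, reputation, legal cost
  print(likely_loss_forms("laptop", "deny access"))          # replacement cost only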

Threat Communities

Subsets of the overall threat agent population that share key characteristics. The notion of threat communities is a powerful tool for understanding who and what we're up against as we try to manage risk. For example, the probability that an organization would be subject to an attack from the terrorist threat community would depend in large part on the characteristics of the organization relative to the motives, intents, and capabilities of the terrorists. Is the organization closely affiliated with an ideology that conflicts with known, active terrorist groups? Does the organization represent a high-profile, high-impact target? Is the organization a soft target? How does the organization compare with other potential targets? If the organization were to come under attack, what components of the organization would be likely targets? For example, how likely is it that terrorists would target the company information or systems?[6]
The following threat communities are examples of the human malicious threat landscape many organizations face:
  • Internal
    • Employees
    • Contractors (and vendors)
    • Partners
  • External
    • Cyber-criminals (professional hackers)
    • Spies
    • Non-professional hackers
    • Activists
    • Nation-state intelligence services (e.g., counterparts to the CIA, etc.)
    • Malware (virus/worm/etc.) authors

Threat action

Threat action is an assault on system security.
A complete security architecture deals with both intentional acts (i.e. attacks) and accidental events.[8]
Various kinds of threat actions are defined as subentries under "threat consequence".

Threat analysis

Threat analysis is the analysis of the probability of occurrence and consequences of damaging actions to a system.[1] It is the basis of risk analysis.
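
A common way to combine the two factors named above (probability of occurrence and consequence) is the annualized loss expectancy, ALE = ARO × SLE, where ARO is the annualized rate of occurrence and SLE the single loss expectancy. The sketch below is only illustrative; the figures and threat names are invented for the example and are not drawn from RFC 2828.

  # Illustrative sketch (Python): ranking threats by annualized loss expectancy
  # (ALE = ARO x SLE). All figures are hypothetical.
  def annualized_loss_expectancy(aro, sle):
      """aro: expected events per year; sle: expected loss per event."""
      return aro * sle

  threats = {
      # threat name:       (ARO,  SLE in euros), hypothetical figures
      "fire":              (0.05, 400_000),
      "theft of media":    (0.50,  20_000),
      "software failure":  (4.00,   1_500),
  }

  for name, (aro, sle) in sorted(threats.items(),
                                 key=lambda kv: -annualized_loss_expectancy(*kv[1])):
      print(f"{name:18s} ALE = {annualized_loss_expectancy(aro, sle):10,.0f} EUR/year")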

Threat consequence

Threat consequence is a security violation that results from a threat action.[1]
It includes disclosure, deception, disruption, and usurpation.

The following subentries describe four kinds of threat consequences, and also list and describe the kinds of threat actions that cause each consequence.[1] Threat actions that are accidental events are marked by "*".

"(Unauthorized) Disclosure" (a threat consequence)
A circumstance or event whereby an entity gains access to data for which the entity is not authorized. (See: data confidentiality.) The following threat actions can cause unauthorized disclosure:
"Exposure"
A threat action whereby sensitive data is directly released to an unauthorized entity. This includes:
"Deliberate Exposure"
Intentional release of sensitive data to an unauthorized entity.
"Scavenging"
Searching through data residue in a system to gain unauthorized knowledge of sensitive data.
* "Human error"
Human action or inaction that unintentionally results in an entity gaining unauthorized knowledge of sensitive data.
* "Hardware/software error"
System failure that results in an entity gaining unauthorized knowledge of sensitive data.
"Interception"
A threat action whereby an unauthorized entity directly accesses sensitive data travelling between authorized sources and destinations. This includes:
"Theft"
Gaining access to sensitive data by stealing a shipment of a physical medium, such as a magnetic tape or disk, that holds the data.
"Wiretapping (passive)"
Monitoring and recording data that is flowing between two points in a communication system. (See: wiretapping.)
"Emanations analysis"
Gaining direct knowledge of communicated data by monitoring and resolving a signal that is emitted by a system and that contains the data but is not intended to communicate the data. (See: emanation.)
"Inference"
A threat action whereby an unauthorized entity indirectly accesses sensitive data (but not necessarily the data contained in the communication) by reasoning from characteristics or byproducts of communications. This includes:
"Traffic analysis"
Gaining knowledge of data by observing the characteristics of communications that carry the data.
"Signals analysis"
Gaining indirect knowledge of communicated data by monitoring and analyzing a signal that is emitted by a system and that contains the data but is not intended to communicate the data. (See: emanation.)
"Intrusion"
A threat action whereby an unauthorized entity gains access to sensitive data by circumventing a system's security protections. This includes:
"Trespass"
Gaining unauthorized physical access to sensitive data by circumventing a system's protections.
"Penetration"
Gaining unauthorized logical access to sensitive data by circumventing a system's protections.
"Reverse engineering"
Acquiring sensitive data by disassembling and analyzing the design of a system component.
"Cryptanalysis"
Transforming encrypted data into plain text without having prior knowledge of encryption parameters or processes.
"Deception" (a threat consequence)
A circumstance or event that may result in an authorized entity receiving false data and believing it to be true. The following threat actions can cause deception:
"Masquerade"
A threat action whereby an unauthorized entity gains access to a system or performs a malicious act by posing as an authorized entity.
"Spoof"
Attempt by an unauthorized entity to gain access to a system by posing as an authorized user.
"Malicious logic"
In context of masquerade, any hardware, firmware, or software (e.g., Trojan horse) that appears to perform a useful or desirable function, but actually gains unauthorized access to system resources or tricks a user into executing other malicious logic.
"Falsification"
A threat action whereby false data deceives an authorized entity. (See: active wiretapping.)
"Substitution"
Altering or replacing valid data with false data that serves to deceive an authorized entity.
"Insertion"
Introducing false data that serves to deceive an authorized entity.
"Repudiation"
A threat action whereby an entity deceives another by falsely denying responsibility for an act.
"False denial of origin"
Action whereby the originator of data denies responsibility for its generation.
. "False denial of receipt"
Action whereby the recipient of data denies receiving and possessing the data.
"Disruption" (a threat consequence)
A circumstance or event that interrupts or prevents the correct operation of system services and functions. (See: denial of service.) The following threat actions can cause disruption:
"Incapacitation"
A threat action that prevents or interrupts system operation by disabling a system component.
"Malicious logic"
In context of incapacitation, any hardware, firmware, or software (e.g., logic bomb) intentionally introduced into a system to destroy system functions or resources.
"Physical destruction"
Deliberate destruction of a system component to interrupt or prevent system operation.
* "Human error"
Action or inaction that unintentionally disables a system component.
* "Hardware or software error"
Error that causes failure of a system component and leads to disruption of system operation.
* "Natural disaster"
Any "act of God" (e.g., fire, flood, earthquake, lightning, or wind) that disables a system component.[8]
"Corruption"
A threat action that undesirably alters system operation by adversely modifying system functions or data.
"Tamper"
In context of corruption, deliberate alteration of a system's logic, data, or control information to interrupt or prevent correct operation of system functions.
"Malicious logic"
In context of corruption, any hardware, firmware, or software (e.g., a computer virus) intentionally introduced into a system to modify system functions or data.
* "Human error"
Human action or inaction that unintentionally results in the alteration of system functions or data.
* "Hardware or software error"
Error that results in the alteration of system functions or data.
* "Natural disaster"
Any "act of God" (e.g., power surge caused by lightning) that alters system functions or data.[8]
"Obstruction"
A threat action that interrupts delivery of system services by hindering system operations.
"Interference"
Disruption of system operations by blocking communications or user data or control information.
"Overload"
Hindrance of system operation by placing excess burden on the performance capabilities of a system component. (See: flooding.)
"Usurpation" (a threat consequence)
A circumstance or event that results in control of system services or functions by an unauthorized entity. The following threat actions can cause usurpation:
"Misappropriation"
A threat action whereby an entity assumes unauthorized logical or physical control of a system resource.
"Theft of service"
Unauthorized use of service by an entity.
"Theft of functionality"
Unauthorized acquisition of actual hardware, software, or firmware of a system component.
"Theft of data"
Unauthorized acquisition and use of data.
"Misuse"
A threat action that causes a system component to perform a function or service that is detrimental to system security.
"Tamper"
In context of misuse, deliberate alteration of a system's logic, data, or control information to cause the system to perform unauthorized functions or services.
"Malicious logic"
In context of misuse, any hardware, software, or firmware intentionally introduced into a system to perform or control execution of an unauthorized function or service.
"Violation of permissions"
Action by an entity that exceeds the entity's system privileges by executing an unauthorized function.

See also

References

  1. ^ a b c d e f Internet Engineering Task Force, RFC 2828, Internet Security Glossary
  2. ^ a b ISO/IEC, "Information technology -- Security techniques -- Information security risk management", ISO/IEC FDIS 27005:2008
  3. ^ Federal Information Processing Standards (FIPS) 200, Minimum Security Requirements for Federal Information and Information Systems
  4. ^ ENISA Glossary: Threat, http://www.enisa.europa.eu/act/rm/cr/risk-management-inventory/glossary#G51
  5. ^ Technical Standard Risk Taxonomy, ISBN 1-931624-77-1, Document Number C081, published by The Open Group, January 2009.
  6. ^ a b c d "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk Management Insight LLC, November 2006.
  7. ^ Wright, Joe (2009). Chapter 15 in Computer and Information Security Handbook, Morgan Kaufmann Publications, Elsevier Inc., p. 257. ISBN 978-0-12-374354-1.
  8. ^ a b c FIPS PUB 31, Federal Information Processing Standards Publication, June 1974