Data classification (data management)

From Wikipedia, the free encyclopedia

In the field of data management, data classification, as part of the Information Lifecycle Management (ILM) process, can be defined as a tool for the categorization of data that enables an organization to effectively answer the following questions:

  • What data types are available?
  • Where are certain data located?
  • What access levels are implemented?
  • What protection level is implemented and does it adhere to compliance regulations?

When implemented, it provides a bridge between IT professionals and process or application owners. IT staff are informed about the value of the data, while management (usually the application owners) gains a better understanding of which segments of the data centre need investment to keep operations running effectively. This can be of particular importance in risk management, legal discovery, and compliance with government regulations. Data classification is typically a manual process; however, there are many tools from different vendors that can help gather information about the data.

Data classification needs to take into account the following:

  • Regulatory requirements (e.g. GDPR, HIPAA, Basel, PIPA, FIPPA)
  • Strategic or proprietary worth
  • Organization-specific policies
  • Ethical and privacy considerations
  • Contractual agreements[1]

How to start the process of data classification

Note that this classification structure is written from a data-management perspective and therefore focuses on text and text-convertible binary data sources. Images, videos, and audio files are highly structured formats built for industry-standard APIs and do not readily fit within the classification scheme outlined below.

The first step is to evaluate and divide the various applications and data into their respective categories, as follows:

  • Relational or tabular data (around 15% of non-audio/video data)
    • Generally describes proprietary data that is accessible only through an application or application programming interface (API)
    • Applications that produce structured data are usually database applications.
    • This type of data usually requires complex procedures for data evaluation and migration between storage tiers.
    • To ensure adequate quality standards, the classification process has to be monitored by subject-matter experts.
  • Semi-structured or poly-structured data (all other non-audio/video data that does not conform to a system- or platform-defined relational or tabular form)
    • Generally describes data files that have a dynamic or non-relational semantic structure (e.g. documents, XML, JSON, device or system log output, sensor output).
    • Classification is a relatively simple process of criteria assignment.
    • Migration between assigned segments of predefined storage tiers is a simple process.
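As a rough illustration of this first triage step, a minimal sketch might look like the following. The extension list, database-name hints, and decision rules are illustrative assumptions, not part of any standard:

```python
# Hypothetical triage sketch: assign a data source to one of the two
# broad categories described above. Extensions, name hints, and rules
# are illustrative assumptions.

SEMI_STRUCTURED_EXTENSIONS = {".xml", ".json", ".log", ".txt", ".csv"}
RELATIONAL_HINTS = {"postgres", "oracle", "db2", "mysql"}

def categorize(source_name: str, accessed_via_api: bool) -> str:
    """Return 'relational/tabular', 'semi-structured', or 'unclassified'."""
    name = source_name.lower()
    if accessed_via_api or any(hint in name for hint in RELATIONAL_HINTS):
        # Proprietary data reachable only through an application or API
        return "relational/tabular"
    if any(name.endswith(ext) for ext in SEMI_STRUCTURED_EXTENSIONS):
        return "semi-structured"
    # Anything else needs review by a subject-matter expert
    return "unclassified"

print(categorize("orders", accessed_via_api=True))              # relational/tabular
print(categorize("sensor_output.json", accessed_via_api=False)) # semi-structured
```

In practice the "unclassified" bucket is where the subject-matter experts mentioned above would be involved.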

Types of data classification. Note that this designation is entirely orthogonal to the application-centric designation outlined above: regardless of the structure inherited from the application, data may be of the following types.

1. Geographical: i.e. according to area (e.g. the rice production of a state or country)

2. Chronological: i.e. according to time (e.g. sales of the last 3 months)

3. Qualitative: i.e. according to distinct categories (e.g. population on the basis of poor and rich)

4. Quantitative: i.e. according to magnitude: (a) discrete or (b) continuous

Data should also be evaluated across three dimensions:

  1. Identifiability: how easily can this data be used to identify an individual?
  2. Sensitivity: how much damage could be done if this data reached the wrong hands?
  3. Scarcity: how readily available is this data?[2]
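One simple way to operationalize these three dimensions is to score each on a small ordinal scale and combine the scores into an overall handling tier. The scale, thresholds, and tier names below are illustrative assumptions, not taken from the cited source:

```python
# Hypothetical sketch: score a data set 1-3 on each of the three
# dimensions above and derive a handling tier. The scale, thresholds,
# and tier names are illustrative assumptions.

def handling_tier(identifiability: int, sensitivity: int, scarcity: int) -> str:
    """Each argument is an ordinal score from 1 (low) to 3 (high)."""
    for score in (identifiability, sensitivity, scarcity):
        if not 1 <= score <= 3:
            raise ValueError("scores must be between 1 and 3")
    total = identifiability + sensitivity + scarcity
    if total >= 8:
        return "restricted"
    if total >= 5:
        return "confidential"
    return "internal"

print(handling_tier(1, 1, 1))  # e.g. aggregated public statistics -> internal
print(handling_tier(3, 3, 2))  # e.g. medical records -> restricted
```

A real policy would usually weight the dimensions differently (identifiability often dominates under privacy regulations), but the additive form keeps the sketch simple.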

Basic criteria for semi-structured or poly-structured data classification

  • Time criteria are the simplest and most commonly used: different types of data are evaluated by time of creation, time of access, time of update, etc.
  • Metadata criteria such as type, name, owner, and location can be used to create a more advanced classification policy.
  • Content criteria, which involve the use of advanced content-classification algorithms, are the most advanced form of unstructured data classification.
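For example, a time criterion might map files to storage tiers by last-access age. In this sketch the tier names and the 30-day and 365-day thresholds are arbitrary assumptions:

```python
# Sketch of a time criterion for semi-structured data: map a file to a
# storage tier according to how recently it was accessed. Tier names
# and age thresholds are illustrative assumptions.

import os
import time

def tier_by_last_access(path, now=None):
    """Return a storage-tier name based on the file's last-access age."""
    now = time.time() if now is None else now
    age_days = (now - os.stat(path).st_atime) / 86400
    if age_days < 30:
        return "tier-1 (fast)"
    if age_days < 365:
        return "tier-2 (capacity)"
    return "tier-3 (archive)"
```

A metadata criterion could be layered on top of this by also inspecting the file's owner, type, or location before the tier is assigned.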

Note that any of these criteria may also apply to tabular or relational data as "basic criteria". These criteria are application-specific, rather than inherent aspects of the form in which the data is presented.

Basic criteria for relational or tabular data classification

These criteria are usually driven by application requirements, such as:

  • Disaster recovery and business continuity rules
  • Data centre resource optimization and consolidation
  • Hardware performance limitations and possible improvements through reorganization

Note that any of these criteria may also apply to semi-structured or poly-structured data as "basic criteria". These criteria are application-specific, rather than inherent aspects of the form in which the data is presented.

Benefits of data classification

Effective implementation of appropriate data classification can significantly improve the ILM process and save data centre storage resources. If implemented systematically, it can generate improvements in data centre performance and utilization. Data classification can also reduce costs and administration overhead. "Good enough" data classification can produce these results:

  • Data compliance and easier risk management: data are located where expected, on the predefined storage tier and at the expected point in time.
  • Simplification of data encryption, because not all data needs to be encrypted. This saves valuable processor cycles and the associated overhead.
  • Data indexing to improve user access times.
  • Redefined data protection with an improved Recovery Time Objective (RTO).

References

  1. ^ "Get the scoop on data classification and GDPR before you're too late - LightsOnData". LightsOnData. 2018-05-23. Retrieved 2018-05-23.
  2. ^ Khatibloo, Fatemeh (May 2017). "How Dirty Is Your Data? Strategic Plan: The Customer Trust And Privacy Playbook". The Customer Trust And Privacy Playbook For 2018.
  • Josh Judd and Dan Kruger (2005), Principles of SAN Design. Infinity Publishing.