Data anonymization

From Wikipedia, the free encyclopedia

Data anonymization is a type of information sanitization whose intent is privacy protection. It is the process of removing personally identifiable information from data sets, so that the people whom the data describe remain anonymous.


Data anonymization has been defined as a "process by which personal data is altered in such a way that a data subject can no longer be identified directly or indirectly, either by the data controller alone or in collaboration with any other party."[1] Data anonymization may enable the transfer of information across a boundary, such as between two departments within an agency or between two agencies, while reducing the risk of unintended disclosure, and in certain environments in a manner that enables evaluation and analytics post-anonymization.

In the context of medical data, anonymized data refers to data from which the patient cannot be identified by the recipient of the information. The name, address, and full postcode must be removed, together with any other information which, in conjunction with other data held by or disclosed to the recipient, could identify the patient.[2]

There is always a risk that anonymized data will not stay anonymous over time. Pairing an anonymized dataset with other data, clever techniques, and raw computing power are some of the ways previously anonymous data sets have been de-anonymized, leaving the data subjects no longer anonymous.

De-anonymization is the reverse process, in which anonymous data is cross-referenced with other data sources to re-identify the anonymous data source.[3] Generalization and perturbation are the two popular anonymization approaches for relational data.[4] The process of obscuring data with the ability to re-identify it later is also called pseudonymization, and is one way companies can store data in a way that is HIPAA compliant.
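As a rough sketch of the pseudonymization idea described above (not drawn from any specific standard; the function name, key handling, and record fields are illustrative assumptions), a keyed hash can replace a direct identifier with a consistent token. The same identifier always maps to the same pseudonym, so records remain linkable, while the key holder can re-link pseudonyms by recomputing them over known identifiers:

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed pseudonym.

    Deterministic for a given key, so records stay linkable;
    without the key, the pseudonym cannot be mapped back.
    """
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

# Hypothetical record and key, for illustration only.
key = b"example-secret-key"
record = {"patient": "Jane Doe", "diagnosis": "asthma"}
record["patient"] = pseudonymize(record["patient"], key)
```

A keyed construction (HMAC) rather than a plain hash matters here: an unkeyed hash of a name or ID can often be reversed by hashing candidate values, whereas re-identification of an HMAC requires the secret key.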

There are five types of data anonymization operations: generalization, suppression, anatomization, permutation, and perturbation.[5]
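Three of these operations can be sketched on a toy record (a minimal illustration, not taken from the cited survey; the field names and ranges are assumptions). Anatomization and permutation, which split or shuffle attributes across tables, are omitted for brevity:

```python
import random

# Hypothetical medical record, for illustration only.
record = {"name": "Jane Doe", "age": 34, "zip": "90210", "diagnosis": "asthma"}

# Suppression: remove direct identifiers outright.
suppressed = {k: v for k, v in record.items() if k != "name"}

# Generalization: replace precise values with coarser categories.
decade = (record["age"] // 10) * 10
generalized = dict(suppressed)
generalized["age"] = f"{decade}-{decade + 9}"   # e.g. 34 -> "30-39"
generalized["zip"] = record["zip"][:3] + "**"   # truncate the postcode

# Perturbation: add random noise to numeric attributes.
perturbed = dict(suppressed)
perturbed["age"] = record["age"] + random.randint(-2, 2)
```

Each operation trades utility for privacy differently: suppression discards information entirely, generalization keeps coarse structure for analysis, and perturbation preserves aggregate statistics while distorting individual values.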

GDPR requirements[edit]

The European Union's General Data Protection Regulation (GDPR) demands that stored data on people in the EU undergo either anonymization or a pseudonymization process. GDPR Recital (26) establishes a very high bar for what constitutes anonymous data, thereby exempting such data from the requirements of the GDPR, namely "…information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable." The European Data Protection Supervisor (EDPS) and the Spanish Agencia Española de Protección de Datos (AEPD) have issued joint guidance on the requirements for anonymity and exemption from the GDPR. According to the EDPS and the AEPD, no one, including the data controller, should be able to re-identify data subjects in a properly anonymized dataset.[6] Research by data scientists[7] at Imperial College London and UCLouvain in Belgium, as well as a ruling by Judge Michal Agmon-Gonen of the Tel Aviv District Court,[8] highlights the shortcomings of anonymization in today's big-data world. Anonymization can reflect an outdated approach to data protection[9] developed when data processing was limited to isolated (siloed) applications, before the popularity of "big data" processing involving the widespread sharing and combining of data.

References[edit]

  1. ^ ISO 25237:2017 Health informatics -- Pseudonymization. ISO. 2017. p. 7.
  2. ^ "Data anonymization". The Free Medical Dictionary. Retrieved 17 January 2014.
  3. ^ "De-anonymization". Retrieved 17 January 2014.
  4. ^ Bin Zhou; Jian Pei; WoShun Luk (December 2008). "A brief survey on anonymization techniques for privacy preserving publishing of social network data" (PDF). Newsletter ACM SIGKDD Explorations Newsletter. 10 (2): 12–22. doi:10.1145/1540276.1540279. S2CID 609178.
  5. ^ Eyupoglu, Can; Aydin, Muhammed; Zaim, Abdul; Sertbas, Ahmet (2018-05-17). "An Efficient Big Data Anonymization Algorithm Based on Chaos and Perturbation Techniques". Entropy. 20 (5): 373. doi:10.3390/e20050373. ISSN 1099-4300. PMC 7512893. PMID 33265463. Text was copied from this source, which is available under a Creative Commons Attribution 4.0 International License.
  7. ^ "Your Data Were 'Anonymized'? These Scientists Can Still Identify You".
  8. ^ "Attm (TA) 28857-06-17 Nursing Companies Association v. Ministry of Defense".
  9. ^ "Data is up for grabs under outdated Israeli privacy law, think tank says".

Further reading[edit]

  • Raghunathan, Balaji (June 2013). The Complete Book of Data Anonymization: From Planning to Implementation. CRC Press. ISBN 9781482218565.
  • Khaled El Emam, Luk Arbuckle (August 2014). Anonymizing Health Data: Case Studies and Methods to Get You Started. O'Reilly Media. ISBN 978-1-4493-6307-9.
  • Rolf H. Weber, Ulrike I. Heinrich (2012). Anonymization: SpringerBriefs in Cybersecurity. Springer. ISBN 9781447140665.
  • Aris Gkoulalas-Divanis, Grigorios Loukides (2012). Anonymization of Electronic Medical Records to Support Clinical Analysis (SpringerBriefs in Electrical and Computer Engineering). Springer. ISBN 9781461456674.
  • Pete Warden. "Why you can't really anonymize your data". O'Reilly Media, Inc. Archived from the original on 9 January 2014. Retrieved 17 January 2014.

External links[edit]