From Wikipedia, the free encyclopedia

k-anonymity is a property possessed by certain anonymized data. The concept of k-anonymity was first introduced by Latanya Sweeney and Pierangela Samarati in a paper published in 1998[1] as an attempt to solve the problem: "Given person-specific field-structured data, produce a release of the data with scientific guarantees that the individuals who are the subjects of the data cannot be re-identified while the data remain practically useful."[2][3][4] A release of data is said to have the k-anonymity property if the information for each person contained in the release cannot be distinguished from at least k − 1 individuals whose information also appears in the release.

k-anonymity received widespread media coverage in 2018 when British computer scientist Junade Ali used the property alongside cryptographic hashing to create a communication protocol to anonymously verify if a password was leaked without disclosing the searched password.[5][6] This protocol was implemented as a public API in Troy Hunt's Have I Been Pwned? service and is consumed by multiple services including password managers[7][8] and browser extensions.[9][10] This approach was later replicated by Google's Password Checkup feature.[11][12][13]

Methods for k-anonymization

In the context of k-anonymization problems, a database is a table with n rows and m columns. Each row of the table represents a record relating to a specific member of a population and the entries in the various rows need not be unique. The values in the various columns are the values of attributes associated with the members of the population. The following table is a nonanonymized database consisting of the patient records of some fictitious hospital in Kochi.

| Name      | Age | Gender | State of domicile | Religion  | Disease         |
|-----------|-----|--------|-------------------|-----------|-----------------|
| Ramsha    | 30  | Female | Tamil Nadu        | Hindu     | Cancer          |
| Yadu      | 24  | Female | Kerala            | Hindu     | Viral infection |
| Salima    | 28  | Female | Tamil Nadu        | Muslim    | Tuberculosis    |
| Sunny     | 27  | Male   | Karnataka         | Parsi     | No illness      |
| Joan      | 24  | Female | Kerala            | Christian | Heart-related   |
| Bahuksana | 23  | Male   | Karnataka         | Buddhist  | Tuberculosis    |
| Rambha    | 19  | Male   | Kerala            | Hindu     | Cancer          |
| Kishor    | 29  | Male   | Karnataka         | Hindu     | Heart-related   |
| Johnson   | 17  | Male   | Kerala            | Christian | Heart-related   |
| John      | 19  | Male   | Kerala            | Christian | Viral infection |

There are 6 attributes and 10 records in this data. There are two common methods for achieving k-anonymity for some value of k.

  1. Suppression: In this method, certain values of the attributes are replaced by an asterisk '*'. All or some values of a column may be replaced by '*'. In the anonymized table below, we have replaced all the values in the 'Name' attribute and all the values in the 'Religion' attribute with a '*'.
  2. Generalization: In this method, individual values of attributes are replaced with a broader category. For example, the value '19' of the attribute 'Age' may be replaced by 'Age ≤ 20', the value '23' by '20 < Age ≤ 30', etc.
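The two methods above can be sketched as plain functions. This is only an illustrative sketch: real k-anonymizers decide *which* cells to suppress or generalize via an optimization procedure, and the age brackets below are the ones used in the example table, not a standard.

```python
def suppress(value):
    """Suppression: replace a value outright with an asterisk."""
    return "*"

def generalize_age(age):
    """Generalization: replace an exact age with a broader interval."""
    if age <= 20:
        return "Age <= 20"
    elif age <= 30:
        return "20 < Age <= 30"
    else:
        return "Age > 30"

record = {"Name": "Rambha", "Age": 19, "Gender": "Male",
          "State of domicile": "Kerala", "Religion": "Hindu",
          "Disease": "Cancer"}

# Suppress the direct identifiers, generalize the quasi-identifier 'Age'.
anonymized = dict(record)
anonymized["Name"] = suppress(record["Name"])
anonymized["Religion"] = suppress(record["Religion"])
anonymized["Age"] = generalize_age(record["Age"])
print(anonymized)
```

Applied to every row, this pair of transformations produces the anonymized table shown next.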

The next table shows the anonymized database.

| Name | Age            | Gender | State of domicile | Religion | Disease         |
|------|----------------|--------|-------------------|----------|-----------------|
| *    | 20 < Age ≤ 30  | Female | Tamil Nadu        | *        | Cancer          |
| *    | 20 < Age ≤ 30  | Female | Kerala            | *        | Viral infection |
| *    | 20 < Age ≤ 30  | Female | Tamil Nadu        | *        | Tuberculosis    |
| *    | 20 < Age ≤ 30  | Male   | Karnataka         | *        | No illness      |
| *    | 20 < Age ≤ 30  | Female | Kerala            | *        | Heart-related   |
| *    | 20 < Age ≤ 30  | Male   | Karnataka         | *        | Tuberculosis    |
| *    | Age ≤ 20       | Male   | Kerala            | *        | Cancer          |
| *    | 20 < Age ≤ 30  | Male   | Karnataka         | *        | Heart-related   |
| *    | Age ≤ 20       | Male   | Kerala            | *        | Heart-related   |
| *    | Age ≤ 20       | Male   | Kerala            | *        | Viral infection |

This data has 2-anonymity with respect to the attributes 'Age', 'Gender' and 'State of domicile' since for any combination of these attributes found in any row of the table there are always at least 2 rows with those exact attributes. The attributes available to an adversary are called quasi-identifiers. Each quasi-identifier tuple occurs in at least k records for a dataset with k-anonymity.[14]
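The definition in the paragraph above can be checked mechanically: group the records by their quasi-identifier tuple and take the size of the smallest group. A minimal sketch (attribute names shortened for brevity):

```python
from collections import Counter

def k_of(records, quasi_identifiers):
    """Return the largest k for which the dataset is k-anonymous with
    respect to the given quasi-identifiers: the size of the smallest
    equivalence class of identical quasi-identifier tuples."""
    counts = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
    return min(counts.values())

# The quasi-identifier columns of the anonymized table above.
rows = [
    ("20 < Age <= 30", "Female", "Tamil Nadu"),
    ("20 < Age <= 30", "Female", "Kerala"),
    ("20 < Age <= 30", "Female", "Tamil Nadu"),
    ("20 < Age <= 30", "Male",   "Karnataka"),
    ("20 < Age <= 30", "Female", "Kerala"),
    ("20 < Age <= 30", "Male",   "Karnataka"),
    ("Age <= 20",      "Male",   "Kerala"),
    ("20 < Age <= 30", "Male",   "Karnataka"),
    ("Age <= 20",      "Male",   "Kerala"),
    ("Age <= 20",      "Male",   "Kerala"),
]
records = [dict(zip(("Age", "Gender", "State"), r)) for r in rows]
print(k_of(records, ("Age", "Gender", "State")))  # prints 2
```

The smallest equivalence class here (females domiciled in Tamil Nadu, aged between 20 and 30) has two members, so the table is 2-anonymous but not 3-anonymous.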

Meyerson and Williams (2004) demonstrated that optimal k-anonymity is an NP-hard problem; however, heuristic methods such as k-Optimize, as given by Bayardo and Agrawal (2005), often yield effective results.[15][16] A practical approximation algorithm that enables solving the k-anonymization problem with an approximation guarantee of O(ln k) was presented by Kenig and Tassa.[17]

Possible attacks

While k-anonymity is a promising approach to group-based anonymization, given its simplicity and the wide array of algorithms that perform it, it is susceptible to many attacks. When background knowledge is available to an attacker, such attacks become even more effective. Such attacks include:

  • Homogeneity Attack: This attack leverages the case where all the values for a sensitive value within a set of k records are identical. In such cases, even though the data has been k-anonymized, the sensitive value for the set of k records may be exactly predicted.
  • Background Knowledge Attack: This attack leverages an association between one or more quasi-identifier attributes with the sensitive attribute to reduce the set of possible values for the sensitive attribute. For example, Machanavajjhala, Kifer, Gehrke, and Venkitasubramaniam (2007) showed that knowing that heart attacks occur at a reduced rate in Japanese patients could be used to narrow the range of values for a sensitive attribute of a patient's disease.


Because k-anonymization does not include any randomization, attackers can still make inferences about data sets that may harm individuals. For example, if the 19-year-old John from Kerala is known to be in the database above, then it can be reliably said that he has either cancer, a heart-related disease, or a viral infection.

k-anonymization is not a good method to anonymize high-dimensional datasets.[18] For example, researchers showed that, given 4 spatiotemporal points, the unicity of mobile phone timestamp-location datasets (the fraction of traces that are unique, i.e., that satisfy k-anonymity only for k = 1) can be as high as 95%.[19]

It has also been shown that k-anonymity can skew the results of a data set if it disproportionately suppresses and generalizes data points with unrepresentative characteristics.[20] The suppression and generalization algorithms used to k-anonymize datasets can be altered, however, so that they do not have such a skewing effect.[21]

Hash-based k-Anonymity

Hash-based k-Anonymity has been largely developed by Junade Ali, initially for compromised credential checking[22][23][24] and later for real-time anonymization of MAC addresses.[25]

This approach works by taking a cryptographic hash of one-dimensional data and truncating the hash such that there are at least k hash collisions. This approach allows for efficient anonymized searching of large datasets, such as breached passwords.[26] This approach can further be used to provide a formally demonstrable level of anonymity to privacy-sensitive data, allowing a precise trade-off to be made between information leakage and functionality (such as for MAC address anonymization).[27][28]
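A minimal sketch of the truncation step, using the convention of the Have I Been Pwned range API mentioned above (SHA-1, 5-hex-character prefix); the function names are illustrative, not from any cited implementation:

```python
import hashlib

def split_for_range_query(secret, prefix_len=5):
    """Hash the secret and split the digest into a short prefix and the
    remainder. Only the prefix is ever transmitted, so the query maps to
    the entire set of secrets whose hashes share that prefix (the
    k-anonymity set grows as the prefix shrinks)."""
    digest = hashlib.sha1(secret.encode("utf-8")).hexdigest().upper()
    return digest[:prefix_len], digest[prefix_len:]

prefix, suffix = split_for_range_query("password123")
# The client sends only `prefix`; the server returns every stored hash
# suffix sharing that prefix, and the client checks for `suffix` locally,
# so the server never learns which password was queried.
```

Truncating to a shorter prefix increases the number of collisions (stronger anonymity) at the cost of a larger server response, which is the information-leakage/functionality trade-off described above.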

References

  1. ^ Samarati, Pierangela; Sweeney, Latanya (1998). "Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression" (PDF). Harvard Data Privacy Lab. Retrieved April 12, 2017.
  2. ^ P. Samarati. Protecting Respondents' Identities in Microdata Release. IEEE Transactions on Knowledge and Data Engineering archive Volume 13 Issue 6, November 2001.
  3. ^ L. Sweeney. "Database Security: k-anonymity". Retrieved 19 January 2014.
  4. ^ L. Sweeney. k-anonymity: a model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10 (5), 2002; 557–570.
  5. ^ "Find out if your password has been pwned—without sending it to a server". Ars Technica. Retrieved 2018-05-24.
  6. ^ "1Password bolts on a 'pwned password' check – TechCrunch". techcrunch.com. Retrieved 2018-05-24.
  7. ^ "1Password Integrates With 'Pwned Passwords' to Check if Your Passwords Have Been Leaked Online". Retrieved 2018-05-24.
  8. ^ Conger, Kate. "1Password Helps You Find Out if Your Password Is Pwned". Gizmodo. Retrieved 2018-05-24.
  9. ^ Condon, Stephanie. "Okta offers free multi-factor authentication with new product, One App | ZDNet". ZDNet. Retrieved 2018-05-24.
  10. ^ Coren, Michael J. "The world's biggest database of hacked passwords is now a Chrome extension that checks yours automatically". Quartz. Retrieved 2018-05-24.
  11. ^ Wagenseil, Paul (5 February 2019). "Google's New Chrome Extension Finds Your Hacked Passwords". www.laptopmag.com.
  12. ^ "Google Launches Password Checkup Extension to Alert Users of Data Breaches". BleepingComputer.
  13. ^ Dsouza, Melisha (6 February 2019). "Google's new Chrome extension 'Password CheckUp' checks if your username or password has been exposed to a third party breach". Packt Hub.
  14. ^ Narayanan, Arvind; Shmatikov, Vitaly. "Robust De-anonymization of Large Sparse Datasets" (PDF).
  15. ^ Roberto J. Bayardo; Rakesh Agrawal (2005). Data Privacy through Optimal k-anonymization (PDF). ICDE '05 Proceedings of the 21st International Conference on Data Engineering. pp. 217–28. doi:10.1109/ICDE.2005.42. ISBN 978-0-7695-2285-2. ISSN 1084-4627. S2CID 17044848. Data de-identification reconciles the demand for release of data for research purposes and the demand for privacy from individuals. This paper proposes and evaluates an optimization algorithm for the powerful de-identification procedure known as k-anonymization. A k-anonymized dataset has the property that each record is indistinguishable from at least k - 1 others. Even simple restrictions of optimized k-anonymity are NP-hard, leading to significant computational challenges. We present a new approach to exploring the space of possible anonymizations that tames the combinatorics of the problem, and develop data-management strategies to reduce reliance on expensive operations such as sorting. Through experiments on real census data, we show the resulting algorithm can find optimal k-anonymizations under two representative cost measures and a wide range of k. We also show that the algorithm can produce good anonymizations in circumstances where the input data or input parameters preclude finding an optimal solution in reasonable time. Finally, we use the algorithm to explore the effects of different coding approaches and problem variations on anonymization quality and performance. To our knowledge, this is the first result demonstrating optimal k-anonymization of a nontrivial dataset under a general model of the problem.
  16. ^ Adam Meyerson; Ryan Williams (2004). On the Complexity of Optimal K-Anonymity (PDF). PODS '04 Proceedings of the Twenty-Third ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems. New York, NY: ACM. pp. 223–8. doi:10.1145/1055558.1055591. ISBN 978-1581138580. S2CID 6798963. The technique of k-anonymization has been proposed in the literature as an alternative way to release public information, while ensuring both data privacy and data integrity. We prove that two general versions of optimal k-anonymization of relations are NP-hard, including the suppression version which amounts to choosing a minimum number of entries to delete from the relation. We also present a polynomial time algorithm for optimal k-anonymity that achieves an approximation ratio independent of the size of the database, when k is constant. In particular, it is a O(k log k)-approximation where the constant in the big-O is no more than 4. However, the runtime of the algorithm is exponential in k. A slightly more clever algorithm removes this condition, but is a O(k logm)-approximation, where m is the degree of the relation. We believe this algorithm could potentially be quite fast in practice.
  17. ^ Kenig, Batya; Tassa, Tamir (2012). "A practical approximation algorithm for optimal k-anonymity". Data Mining and Knowledge Discovery. 25: 134–168. doi:10.1007/s10618-011-0235-9. S2CID 14158546.
  18. ^ Aggarwal, Charu C. (2005). "On k-Anonymity and the Curse of Dimensionality". VLDB '05 – Proceedings of the 31st International Conference on Very Large Data Bases. Trondheim, Norway. ISBN 1-59593-154-6.
  19. ^ de Montjoye, Yves-Alexandre; César A. Hidalgo; Michel Verleysen; Vincent D. Blondel (March 25, 2013). "Unique in the Crowd: The privacy bounds of human mobility" (PDF). Scientific Reports. 3: 1376. Bibcode:2013NatSR...3E1376D. doi:10.1038/srep01376. PMC 3607247. PMID 23524645.
  20. ^ Angiuli, Olivia; Joe Blitzstein; Jim Waldo. "How to De-Identify Your Data". ACM Queue. ACM.
  21. ^ Angiuli, Olivia; Jim Waldo (June 2016). "Statistical Tradeoffs between Generalization and Suppression in the De-Identification of Large-Scale Data Sets". IEEE Computer Society Intl Conference on Computers, Software, and Applications: 589–593. doi:10.1109/COMPSAC.2016.198. ISBN 978-1-4673-8845-0. S2CID 17716908.
  22. ^ Li, Lucy; Pal, Bijeeta; Ali, Junade; Sullivan, Nick; Chatterjee, Rahul; Ristenpart, Thomas (4 September 2019). "Protocols for Checking Compromised Credentials". arXiv:1905.13737 [cs.CR].
  23. ^ "Find out if your password has been pwned—without sending it to a server". Ars Technica. Retrieved 2018-05-24.
  24. ^ "1Password bolts on a 'pwned password' check – TechCrunch". techcrunch.com. Retrieved 2018-05-24.
  25. ^ Ali, Junade; Dyo, Vladimir (2020). "Practical Hash-based Anonymity for MAC Addresses". 17th International Conference on Security and Cryptography (SECRYPT 2020): 572–579. arXiv:2005.06580. doi:10.5220/0009825105720579. ISBN 978-989-758-446-6. S2CID 218629946.
  26. ^ Thomas, Kurt; Pullman, Jennifer; Yeo, Kevin; Raghunathan, Ananth; Kelley, Patrick Gage; Invernizzi, Luca; Benko, Borbala; Pietraszek, Tadek; Patel, Sarvar; Boneh, Dan; Bursztein, Elie (2019). Protecting accounts from credential stuffing with password breach alerting. pp. 1556–1571. ISBN 9781939133069. Retrieved 22 May 2020.
  27. ^ Ali, Junade; Dyo, Vladimir (2020). "Practical Hash-based Anonymity for MAC Addresses". 17th International Conference on Security and Cryptography (SECRYPT 2020): 572–579. arXiv:2005.06580. doi:10.5220/0009825105720579. ISBN 978-989-758-446-6. S2CID 218629946.
  28. ^ Demir, Levent; Kumar, Amrit; Cunche, Mathieu; Lauradoux, Cédric (2018). "The Pitfalls of Hashing for Privacy". Communications Surveys and Tutorials, IEEE Communications Society. 20 (1): 551. doi:10.1109/COMST.2017.2747598. S2CID 3571244. Retrieved 22 May 2020.