De-identification
De-identification is the process used to prevent someone's personal identity from being revealed. For example, data produced during human subject research might be de-identified to preserve the privacy of research participants.
When applied to metadata or general data about identification, the process is also known as data anonymization. Common strategies include deleting or masking personal identifiers, such as personal name, and suppressing or generalizing quasi-identifiers, such as date of birth. The reverse process of using de-identified data to identify individuals is known as data re-identification. Successful re-identifications[1][2][3][4] cast doubt on de-identification's effectiveness. A systematic review of fourteen distinct re-identification attacks found "a high re-identification rate […] dominated by small-scale studies on data that was not de-identified according to existing standards."[5]
De-identification is one of the main approaches to data privacy protection. It is commonly used in communications, multimedia, biometrics, big data, cloud computing, data mining, the internet, social networks, and audio–video surveillance.[6]
Examples
In designing surveys
Surveys, such as censuses, are conducted to collect information about a group of people. To encourage participation and to protect respondents' privacy, researchers attempt to design the survey so that no participant's individual responses can be matched with any data that is published.
Before using information
When an online shopping website wants to learn its users' preferences and shopping habits, it may retrieve customer data from its database for analysis. The personal data include identifiers that were collected directly when customers created their accounts. Before analyzing the records, the website must apply de-identification techniques so as not to violate its customers' privacy.
Anonymization
Anonymization refers to irreversibly severing a data set from the identity of the data contributor in a study to prevent any future re-identification, even by the study organizers under any condition.[7][8] De-identification may also preserve identifying information that can be re-linked only by a trusted party in certain situations.[7][8][9] There is a debate in the technology community over whether data that can be re-linked, even by a trusted party, should ever be considered de-identified.
Techniques
Common de-identification strategies are masking personal identifiers and generalizing quasi-identifiers. Pseudonymization is the main technique for masking personal identifiers in data records, and k-anonymization is usually adopted for generalizing quasi-identifiers.
Pseudonymization
Pseudonymization is performed by replacing real names with temporary IDs; it deletes or masks personal identifiers so that individuals can no longer be directly identified. The method makes it possible to track an individual's record over time even as the record is updated. However, it cannot prevent the individual from being identified if specific combinations of attributes in the record indirectly identify them.[10]
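As an illustration, the following is a minimal Python sketch of pseudonymization, assuming a table whose only direct identifier is a name field; the keyed HMAC, the SECRET_KEY, and the field names are illustrative assumptions, not a prescribed scheme. Because the mapping is deterministic under the key, the same person receives the same pseudonym across visits, preserving longitudinal tracking, and only a trusted party holding the key could re-link pseudonyms to names.

```python
import hmac
import hashlib

# Secret key held by a trusted party; without it, pseudonyms
# cannot be re-linked to the original names. (Illustrative only.)
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(name: str) -> str:
    """Map a real name to a stable pseudonym using a keyed HMAC."""
    digest = hmac.new(SECRET_KEY, name.encode("utf-8"), hashlib.sha256)
    return "ID-" + digest.hexdigest()[:12]

records = [
    {"name": "Alice Smith", "zip": "02139", "diagnosis": "asthma"},
    {"name": "Bob Jones",   "zip": "02139", "diagnosis": "flu"},
    {"name": "Alice Smith", "zip": "02139", "diagnosis": "asthma"},  # follow-up visit
]

# Replace the direct identifier; the same person keeps the same
# pseudonym, so their record can still be tracked over time.
for r in records:
    r["pid"] = pseudonymize(r.pop("name"))

print(records)
```

Note that the quasi-identifier zip survives unchanged, which is exactly the residual risk described above: combinations of remaining attributes may still identify someone.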
k-anonymization
k-anonymization designates attributes that indirectly point to an individual's identity as quasi-identifiers (QIs) and transforms the data so that at least k individuals share the same combination of QI values.[10] QI values are handled following specific standards. For example, k-anonymization replaces some of the original values in the records with ranges and keeps others unchanged. The new combinations of QI values prevent individuals from being identified while keeping the records usable.
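Below is a minimal Python sketch of this idea, assuming age and ZIP code are the quasi-identifiers; the 10-year age buckets and three-digit ZIP truncation are illustrative generalization choices, not a fixed standard.

```python
from collections import Counter

def generalize(record):
    """Generalize quasi-identifiers: bucket age into a 10-year
    range and truncate the ZIP code to three digits."""
    lo = (record["age"] // 10) * 10
    return {
        "age": f"{lo}-{lo + 9}",
        "zip": record["zip"][:3] + "**",
        "diagnosis": record["diagnosis"],  # sensitive value, kept unchanged
    }

def is_k_anonymous(records, k, qi=("age", "zip")):
    """True if every combination of QI values occurs at least k times."""
    counts = Counter(tuple(r[q] for q in qi) for r in records)
    return all(c >= k for c in counts.values())

raw = [
    {"age": 23, "zip": "02139", "diagnosis": "asthma"},
    {"age": 27, "zip": "02144", "diagnosis": "flu"},
    {"age": 25, "zip": "02141", "diagnosis": "diabetes"},
]

anonymized = [generalize(r) for r in raw]
print(anonymized)
print(is_k_anonymous(anonymized, k=3))  # True
```

Here all three generalized records share the QI combination ("20-29", "021**"), so the table is 3-anonymous; the sensitive diagnosis values are left intact.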
Applications
Research into de-identification is driven mostly by the need to protect health information.[11] Some libraries have adopted methods used in the healthcare industry to preserve their readers' privacy.[11]
In big data, de-identification is widely adopted by individuals and organizations.[6] With the development of social media, e-commerce, and big data, de-identification is sometimes required, and often used, to protect data privacy when users' personal data are collected by companies or third-party organizations that analyze it for their own use.
In smart cities, de-identification may be required to protect the privacy of residents, workers and visitors. Without strict regulation, de-identification may be difficult because sensors can still collect information without consent.[12]
Limits
Whenever a person participates in genetics research, the donation of a biological specimen often results in the creation of a large amount of personalized data. Such data is uniquely difficult to de-identify.[13]
Anonymization of genetic data is particularly difficult because of the huge amount of genotypic information in biospecimens,[13] the ties that specimens often have to medical history,[14] and the advent of modern bioinformatics tools for data mining.[14] There have been demonstrations that data for individuals in aggregate collections of genotypic data sets can be tied to the identities of the specimen donors.[15]
Some researchers have suggested that it is not reasonable to ever promise participants in genetics research that they can retain their anonymity, but instead such participants should be taught the limits of using coded identifiers in a de-identification process.[8]
De-identification laws in the United States of America
In May 2014, the United States President's Council of Advisors on Science and Technology found de-identification "somewhat useful as an added safeguard" but not "a useful basis for policy" as "it is not robust against near‐term future re‐identification methods".[16]
The HIPAA Privacy Rule provides mechanisms for using and disclosing health data responsibly without the need for patient consent. These mechanisms center on two HIPAA de-identification standards – Safe Harbor and the Expert Determination Method. Safe Harbor relies on the removal of specific patient identifiers (e.g., name, phone number, email address), while the Expert Determination Method requires knowledge of and experience with generally accepted statistical and scientific principles and methods to render information not individually identifiable.[17]
Safe harbor
The Safe Harbor method uses a list-based approach to de-identification and has two requirements:
- The removal or generalization of 18 elements from the data.
- That the Covered Entity or Business Associate does not have actual knowledge that the residual information in the data could be used, alone or in combination with other information, to identify an individual.

Safe Harbor is a highly prescriptive approach to de-identification. Under this method, all dates must be generalized to the year and ZIP codes reduced to three digits. The same treatment is applied regardless of context. Even if the information is to be shared with a trusted researcher who wishes to analyze the data for seasonal variation in acute respiratory cases, and thus requires the month of hospital admission, this information cannot be provided; only the year of admission would be retained. A sketch of these two transformations follows.
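The Python sketch below applies just the two generalizations named above, assuming records with illustrative admission_date and zip fields; the full Safe Harbor rule covers 18 identifier categories (and, for example, requires certain sparsely populated three-digit ZIP prefixes to be zeroed out), none of which is modeled here.

```python
from datetime import date

def safe_harbor_generalize(record):
    """Apply two Safe Harbor generalizations: keep only the year of
    a date, and truncate the ZIP code to its first three digits.
    (The full rule removes or generalizes 18 identifier categories.)"""
    return {
        "admission_year": record["admission_date"].year,  # month and day dropped
        "zip3": record["zip"][:3],                        # five digits -> three
        "diagnosis": record["diagnosis"],                 # clinical data retained
    }

rec = {"admission_date": date(2015, 11, 3), "zip": "02139",
       "diagnosis": "acute bronchitis"}
print(safe_harbor_generalize(rec))
# {'admission_year': 2015, 'zip3': '021', 'diagnosis': 'acute bronchitis'}
```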
Expert Determination
Expert Determination takes a risk-based approach to de-identification, applying current standards and best practices from research to determine the likelihood that a person could be identified from their protected health information. This method requires that a person with appropriate knowledge of and experience with generally accepted statistical and scientific principles and methods render the information not individually identifiable; one common risk measure is sketched after the requirements below. It requires:
- That the risk is very small that the information could be used, alone or in combination with other reasonably available information, by an anticipated recipient to identify an individual who is a subject of the information;
- That the methods and results of the analysis justifying such a determination be documented.
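The following is a minimal sketch of one risk measure sometimes used in this kind of analysis, assuming the "prosecutor" model in which a record's re-identification risk is the reciprocal of the size of its equivalence class over the quasi-identifiers; the model choice, the threshold, and the field names are assumptions for illustration, and HIPAA does not mandate any particular metric.

```python
from collections import Counter

def max_reidentification_risk(records, qi):
    """Prosecutor-model risk: a record's risk is 1 / (size of its
    equivalence class over the quasi-identifiers); return the worst case."""
    counts = Counter(tuple(r[q] for q in qi) for r in records)
    return max(1 / c for c in counts.values())

data = [
    {"age": "20-29", "zip3": "021", "diagnosis": "asthma"},
    {"age": "20-29", "zip3": "021", "diagnosis": "flu"},
    {"age": "30-39", "zip3": "021", "diagnosis": "diabetes"},
]

risk = max_reidentification_risk(data, qi=("age", "zip3"))
print(f"worst-case risk: {risk:.2f}")  # 1.00: the third record is unique on its QIs

THRESHOLD = 0.09  # illustrative "very small" cutoff chosen by the expert
print("acceptable" if risk <= THRESHOLD else "needs more generalization")
```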
Research on decedents
The key law governing research on electronic health record data is the HIPAA Privacy Rule. It allows the use of the electronic health records of deceased subjects for research (HIPAA Privacy Rule, section 164.512(i)(1)(iii)).[18]
References
- ^ Sweeney, L. (2000). "Simple Demographics Often Identify People Uniquely". Data Privacy Working Paper. 3.
- ^ de Montjoye, Y.-A. (2013). "Unique in the crowd: The privacy bounds of human mobility". Scientific Reports. 3: 1376. Bibcode:2013NatSR...3E1376D. doi:10.1038/srep01376. PMC 3607247. PMID 23524645.
- ^ de Montjoye, Y.-A.; Radaelli, L.; Singh, V. K.; Pentland, A. S. (29 January 2015). "Unique in the shopping mall: On the reidentifiability of credit card metadata". Science. 347 (6221): 536–539. Bibcode:2015Sci...347..536D. doi:10.1126/science.1256297. PMID 25635097.
- ^ Narayanan, A.; Shmatikov, V. (2006). "How to Break Anonymity of the Netflix Prize Dataset". arXiv:cs/0610105.
- ^ El Emam, Khaled (2011). "A Systematic Review of Re-Identification Attacks on Health Data". PLoS ONE. 6 (12): e28071. Bibcode:2011PLoSO...628071E. doi:10.1371/journal.pone.0028071. PMC 3229505. PMID 22164229.
- ^ a b Ribaric, Slobodan; Ariyaeeinia, Aladdin; Pavesic, Nikola (September 2016). "De-identification for privacy protection in multimedia content: A survey". Signal Processing: Image Communication. 47: 131–151. doi:10.1016/j.image.2016.05.020.
- ^ a b Godard, B. A.; Schmidtke, J. R.; Cassiman, J. J.; Aymé, S. G. N. (2003). "Data storage and DNA banking for biomedical research: Informed consent, confidentiality, quality issues, ownership, return of benefits. A professional perspective". European Journal of Human Genetics. 11: S88–122. doi:10.1038/sj.ejhg.5201114. PMID 14718939.
- ^ a b c Fullerton, S. M.; Anderson, N. R.; Guzauskas, G.; Freeman, D.; Fryer-Edwards, K. (2010). "Meeting the Governance Challenges of Next-Generation Biorepository Research". Science Translational Medicine. 2 (15): 15cm3. doi:10.1126/scitranslmed.3000361. PMC 3038212. PMID 20371468.
- ^ McMurry, AJ; Gilbert, CA; Reis, BY; Chueh, HC; Kohane, IS; Mandl, KD (2007). "A self-scaling, distributed information architecture for public health, research, and clinical care". J Am Med Inform Assoc. 14 (4): 527–33. doi:10.1197/jamia.M2371. PMC 2244902. PMID 17460129.
- ^ a b Ito, Koichi; Kogure, Jun; Shimoyama, Takeshi; Tsuda, Hiroshi (2016). "De-identification and Encryption Technologies to Protect Personal Information" (PDF). Fujitsu Scientific and Technical Journal. 52 (3): 28–36.
- ^ a b Nicholson, S.; Smith, C. A. (2006). "Using lessons from health care to protect the privacy of library users: Guidelines for the de-identification of library data based on HIPAA" (PDF). Proceedings of the American Society for Information Science and Technology. 42: n/a. doi:10.1002/meet.1450420106.
- ^ Coop, Alex. "Sidewalk Labs decision to offload tough decisions on privacy to third party is wrong, says its former consultant". IT World Canada. Retrieved 27 June 2019.
- ^ a b McGuire, A. L.; Gibbs, R. A. (2006). "GENETICS: No Longer De-Identified". Science. 312 (5772): 370–371. doi:10.1126/science.1125339. PMID 16627725.
- ^ a b Thorisson, G. A.; Muilu, J.; Brookes, A. J. (2009). "Genotype–phenotype databases: Challenges and solutions for the post-genomic era". Nature Reviews Genetics. 10 (1): 9–18. doi:10.1038/nrg2483. hdl:2381/4584. PMID 19065136.
- ^ Homer, N.; Szelinger, S.; Redman, M.; Duggan, D.; Tembe, W.; Muehling, J.; Pearson, J. V.; Stephan, D. A.; Nelson, S. F.; Craig, D. W. (2008). Visscher, Peter M. (ed.). "Resolving Individuals Contributing Trace Amounts of DNA to Highly Complex Mixtures Using High-Density SNP Genotyping Microarrays". PLoS Genetics. 4 (8): e1000167. doi:10.1371/journal.pgen.1000167. PMC 2516199. PMID 18769715.
- ^ PCAST. "Report to the President - Big Data and Privacy: A technological perspective" (PDF). Retrieved 28 March 2016.
- ^ "De-Identification 201". Privacy Analytics. 2015.
- ^ 45 CFR 164.512
External links
- Simson L. Garfinkel (2015-12-16). "NISTIR 8053, De-Identification of Personal Information" (PDF). NIST. Retrieved 2016-01-03.
- A training series on United States government de-identification standards
- Guidance Regarding Methods for De-identification of Protected Health Information
- Ohm, Paul (2010). "Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization" (PDF). UCLA Law Review. 57: 1701–77.
- Padilla-López, José Ramón; Chaaraoui, Alexandros Andre; Flórez-Revuelta, Francisco (June 2015). "Visual privacy protection methods: A survey" (PDF). Expert Systems with Applications. 42 (9): 4177–4195. doi:10.1016/j.eswa.2015.01.041.
- Chaaraoui, Alexandros; Padilla-López, José; Ferrández-Pastor, Francisco; Nieto-Hidalgo, Mario; Flórez-Revuelta, Francisco (20 May 2014). "A Vision-Based System for Intelligent Monitoring: Human Behaviour Analysis and Privacy by Context". Sensors. 14 (5): 8895–8925. doi:10.3390/s140508895. PMC 4063058. PMID 24854209.