Profiling (information science)
In information science, profiling refers to the use of algorithms or other mathematical techniques to discover patterns or correlations in large quantities of data, aggregated in databases. When these patterns or correlations are used to identify or represent people, they can be called profiles. Apart from profiling technologies and population profiling, the notion of profiling in this sense concerns not just the construction of profiles, but also the application of group profiles to individuals, e.g. in the cases of credit scoring, price discrimination, or identification of security risks (Hildebrandt & Gutwirth 2008; Elmer 2004).
Profiling is not simply a matter of computerized pattern-recognition; it enables refined price-discrimination, targeted servicing, detection of fraud, and extensive social sorting. Real-time machine profiling constitutes the precondition for emerging socio-technical infrastructures envisioned by advocates of ambient intelligence, autonomic computing (Kephart & Chess 2003) and ubiquitous computing (Weiser 1991).
One of the most challenging problems of the information society is dealing with increasing data overload. With the digitizing of all sorts of content, and the improvement and falling cost of recording technologies, the amount of available information has become enormous and is increasing exponentially. It has thus become important for companies, governments, and individuals to discriminate information from noise and to detect useful or interesting data. The development of profiling technologies must be seen against this background. Their proponents present these technologies as a means of efficiently collecting and analysing data in order to find or test knowledge in the form of statistical patterns between data. This process, called Knowledge Discovery in Databases (KDD) (Fayyad, Piatetsky-Shapiro & Smyth 1996), provides the profiler with sets of correlated data usable as "profiles".
The profiling process
The technical process of profiling can be separated into several steps:
- Preliminary grounding: The profiling process starts with a specification of the applicable problem domain and the identification of the goals of analysis.
- Data collection: The target dataset or database for analysis is formed by selecting the relevant data in the light of existing domain knowledge and data understanding.
- Data preparation: The data are preprocessed to remove noise and reduce complexity, e.g. by eliminating irrelevant attributes.
- Data mining: The data are analysed with the algorithm or heuristics developed to suit the data, model and goals.
- Interpretation: The mined patterns are evaluated on their relevance and validity by specialists and/or professionals in the application domain (e.g. excluding spurious correlations).
- Application: The constructed profiles are applied, e.g. to categories of persons, to test and fine-tune the algorithms.
- Institutional decision: The institution decides what actions or policies to apply to groups or individuals whose data match a relevant profile.
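The steps above can be sketched in code. The following is a hypothetical illustration on a toy dataset of customer records; all attribute names, values, and thresholds are invented for the example, not taken from any real system.

```python
# Step 2 - data collection: a target dataset selected for the problem domain.
records = [
    {"age": 34, "visits": 12, "spend": 410.0},
    {"age": 51, "visits": 3, "spend": 95.0},
    {"age": 29, "visits": 15, "spend": 520.0},
    {"age": 47, "visits": 2, "spend": None},  # noisy entry
]

# Step 3 - data preparation: remove noise and eliminate unused attributes.
clean = [{"visits": r["visits"], "spend": r["spend"]}
         for r in records if r["spend"] is not None]

# Step 4 - data mining: a crude heuristic splits customers into groups.
profiles = {"frequent": [r for r in clean if r["visits"] >= 10],
            "occasional": [r for r in clean if r["visits"] < 10]}

# Step 5 - interpretation: a domain expert would inspect, e.g., average spend
# per group and rule out spurious correlations.
avg_spend = {k: sum(r["spend"] for r in v) / len(v)
             for k, v in profiles.items() if v}

# Step 6 - application: a new person's data are matched against the profiles.
new_customer = {"visits": 11, "spend": 300.0}
label = "frequent" if new_customer["visits"] >= 10 else "occasional"
print(label, avg_spend)
```

The feedback loop described below would feed the outcomes of step 6 back into earlier steps, e.g. to adjust the `visits` threshold.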
Data collection, preparation and mining all belong to the phase in which the profile is under construction. However, profiling also refers to the application of profiles, meaning the usage of profiles for the identification or categorisation of groups or individual persons. As can be seen in step six (application), the process is circular: there is a feedback loop between the construction and the application of profiles. The interpretation of profiles can lead to the iterative – possibly real-time – fine-tuning of specific previous steps in the profiling process. The application of profiles to people whose data were not used to construct the profile is based on data matching, which provides new data that allow for further adjustments. The process of profiling is thus both dynamic and adaptive. A good illustration of this dynamic and adaptive nature is the Cross-Industry Standard Process for Data Mining (CRISP-DM).
Types of profiling practices
In order to clarify the nature of profiling technologies some crucial distinctions have to be made between different types of profiling practices, apart from the distinction between the construction and the application of profiles. The main distinctions are those between bottom-up and top-down profiling (or supervised and unsupervised learning), and between individual and group profiles.
Supervised and unsupervised learning
Profiles can be classified according to the way they have been generated (Fayyad, Piatetsky-Shapiro & Smyth 1996; Zarsky 2002-3). On the one hand, profiles can be generated by testing a hypothesized correlation. This is called top-down profiling or supervised learning. It is similar to the methodology of traditional scientific research in that it starts with a hypothesis and consists of testing its validity. The result of this type of profiling is the verification or refutation of the hypothesis. One could also speak of deductive profiling. On the other hand, profiles can be generated by exploring a database, using the data mining process to detect patterns in the database that were not previously hypothesized. In a way, this is a matter of generating hypotheses: finding correlations one did not expect or even think of. Once the patterns have been mined, they will enter the loop – described above – and will be tested with the use of new data. This is called unsupervised learning.
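The two approaches can be contrasted in a small sketch. The data, the 0.8 threshold for accepting the hypothesis, and the gap-based clustering heuristic are all invented for illustration; real systems use far more sophisticated techniques.

```python
from math import sqrt

# Toy data: (hours_online, purchases) per person; values are invented.
data = [(1, 0), (2, 1), (3, 1), (8, 6), (9, 7), (10, 8)]
xs, ys = zip(*data)

# Top-down (supervised-style) profiling: test a hypothesized positive
# correlation between the two attributes via the Pearson coefficient.
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in data)
r = cov / sqrt(sum((x - mx) ** 2 for x in xs)
               * sum((y - my) ** 2 for y in ys))
hypothesis_holds = r > 0.8  # verify or refute the hypothesis

# Bottom-up (unsupervised-style) profiling: a crude 1-D clustering that
# splits on the largest gap, with no prior hypothesis about groups.
sorted_x = sorted(xs)
gaps = [(sorted_x[i + 1] - sorted_x[i], i) for i in range(len(sorted_x) - 1)]
_, cut = max(gaps)
clusters = (sorted_x[:cut + 1], sorted_x[cut + 1:])
print(hypothesis_holds, clusters)
```

The first half verifies a researcher-supplied hypothesis; the second discovers a grouping that was never hypothesized, which is the sense in which unsupervised learning "generates hypotheses".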
Two things are important with regard to this distinction. First, unsupervised learning algorithms seem to allow the construction of a new type of knowledge, based not on hypotheses developed by a researcher and not on causal or motivational relations, but exclusively on stochastic correlations. Second, unsupervised learning algorithms thus seem to allow for an inductive type of knowledge construction that does not require theoretical justification or causal explanation (Custers 2004).
Some authors claim that if the application of profiles based on computerized stochastic pattern recognition 'works', i.e. allows for reliable predictions of future behaviours, the theoretical or causal explanation of these patterns does not matter anymore (Anderson 2008). However, the idea that 'blind' algorithms provide reliable information does not imply that the information is neutral. In the process of collecting and aggregating data into a database (the first three steps of profile construction), translations are made from real-life events to machine-readable data. These data are then prepared and cleansed to allow for initial computability. Potential bias will have to be located at these points, as well as in the choice of the algorithms that are developed. It is not possible to mine a database for all possible linear and non-linear correlations, meaning that the mathematical techniques developed to search for patterns will determine which patterns can be found. In the case of machine profiling, potential bias is thus informed not by common-sense prejudice or what psychologists call stereotyping, but by the computer techniques employed in the initial steps of the process. These techniques are mostly invisible to those to whom profiles are applied (because their data match the relevant group profiles).
Individual and group profiles
Profiles must also be classified according to the kind of subject they refer to. This subject can either be an individual or a group of people. When a profile is constructed with the data of a single person, this is called individual profiling (Jaquet-Chiffelle 2008). This kind of profiling is used to discover the particular characteristics of a certain individual, to enable unique identification or the provision of personalized services. However, personalized servicing is most often also based on group profiling, which allows categorisation of a person as a certain type of person, based on the fact that her profile matches with a profile that has been constructed on the basis of massive amounts of data about massive numbers of other people. A group profile can refer to the result of data mining in data sets that refer to an existing community that considers itself as such, like a religious group, a tennis club, a university, a political party etc. In that case it can describe previously unknown patterns of behaviour or other characteristics of such a group (community). A group profile can also refer to a category of people that do not form a community, but are found to share previously unknown patterns of behaviour or other characteristics (Custers 2004). In that case the group profile describes specific behaviours or other characteristics of a category of people, like for instance women with blue eyes and red hair, or adults with relatively short arms and legs. These categories may be found to correlate with health risks, earning capacity, mortality rates, credit risks, etc.
If an individual profile is applied to the individual it was mined from, this is direct individual profiling. If a group profile is applied to an individual whose data match the profile, this is indirect individual profiling, because the profile was generated using data of other people. Similarly, if a group profile is applied to the group it was mined from, this is direct group profiling (Jaquet-Chiffelle 2008). However, insofar as the application of a group profile to a group implies the application of the group profile to individual members of the group, it makes sense to speak of indirect group profiling, especially if the group profile is non-distributive.
Distributive and non-distributive profiling
Group profiles can also be divided in terms of their distributive character (Vedder 1999). A group profile is distributive when its properties apply equally to all the members of its group: all bachelors are unmarried, or all persons with a specific gene have an 80% chance of contracting a specific disease. A profile is non-distributive when the profile does not necessarily apply to all the members of the group: the group of persons with a specific postal code have an average earning capacity of XX, or the category of persons with blue eyes has an average chance of 37% of contracting a specific disease. Note that in this case the chance of an individual having a particular earning capacity or contracting the specific disease will depend on other factors, e.g. sex, age, background of parents, previous health, education. It should be obvious that, apart from tautological profiles like that of bachelors, most group profiles generated by means of computer techniques are non-distributive. This has far-reaching implications for the accuracy of indirect individual profiling based on data matching with non-distributive group profiles. Quite apart from the fact that the application of accurate profiles may be unfair or cause undue stigmatisation, most group profiles will not be accurate.
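The distributive test can be sketched as follows, using an invented group of people and invented properties. A profile is distributive only if the profiled property holds for every single member.

```python
def is_distributive(group, has_property):
    """A group profile is distributive iff the profiled property
    holds for every member of the group (Vedder 1999)."""
    return all(has_property(member) for member in group)

bachelors = [{"married": False}, {"married": False}]
postcode_group = [{"income": 20_000}, {"income": 80_000}]

# "All bachelors are unmarried" applies to every member: distributive.
print(is_distributive(bachelors, lambda p: not p["married"]))

# "Members of this postal code earn 50,000 on average" is a property of
# the group; it need not hold for any single member: non-distributive.
avg = sum(p["income"] for p in postcode_group) / len(postcode_group)
print(is_distributive(postcode_group, lambda p: p["income"] == avg))
```

Here neither member actually earns the group average, which is exactly why applying the averaged profile to an individual member can be inaccurate.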
Application domains
Profiling technologies can be applied in a variety of domains and for a variety of purposes. These profiling practices all have different effects and raise different issues.
Knowledge about the behaviour and preferences of customers is of great interest to the commercial sector. On the basis of profiling technologies, companies can predict the behaviour of different types of customers. Marketing strategies can then be tailored to the people fitting these types. Examples of profiling practices in marketing are customer loyalty cards, customer relationship management in general, and personalized advertising.
In the financial sector, institutions use profiling technologies for fraud prevention and credit scoring. Banks want to minimise the risks in giving credit to their customers. On the basis of extensive group profiling, customers are assigned a scoring value that indicates their creditworthiness. Financial institutions like banks and insurance companies also use group profiling to detect fraud or money laundering. Databases of transactions are searched with algorithms to find behaviours that deviate from the standard, indicating potentially suspicious transactions.
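Such deviation-from-the-standard searches are often illustrated with simple outlier detection. The sketch below flags transactions more than two standard deviations from the mean; the amounts and the two-sigma threshold are invented, and real anti-fraud systems are far more elaborate.

```python
import statistics

# Invented transaction amounts for one account; the last one is unusual.
amounts = [40.0, 55.0, 42.0, 60.0, 48.0, 51.0, 45.0, 2500.0]

mean = statistics.mean(amounts)
sd = statistics.stdev(amounts)

# Flag transactions deviating from the account's standard behaviour.
suspicious = [a for a in amounts if abs(a - mean) > 2 * sd]
print(suspicious)
```

A flagged transaction would then go to a human analyst rather than trigger an automatic decision, which connects to the due-process concerns discussed under risks and issues.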
In the context of employment, profiles can be of use for tracking employees by monitoring their online behaviour, for detecting fraud by them, and for the deployment of human resources by pooling and ranking their skills (Leopold & Meints 2008).
Profiling can also be used to support people at work and in learning, for example through adaptive hypermedia systems that personalise the interaction. This can be useful for supporting the management of attention (Nabeth 2008).
In forensic science, the possibility exists of linking different databases of cases and suspects and mining these for common patterns. This could be used for solving existing cases or for the purpose of establishing risk profiles of potential suspects (Geradts & Sommer 2008) (Harcourt 2006).
Risks and issues
Profiling technologies have raised a host of ethical, legal and other issues including privacy, equality, due process, security and liability. Numerous authors have warned against the affordances of a new technological infrastructure that could emerge on the basis of semi-autonomic profiling technologies (Lessig 2006)(Solove 2004)(Schwartz 2000).
Privacy is one of the principal issues raised. Profiling technologies make possible a far-reaching monitoring of an individual's behaviour and preferences. Profiles may reveal personal or private information about individuals that they might not even be aware of themselves (Hildebrandt & Gutwirth 2008).
Profiling technologies are by their very nature discriminatory tools. They allow unparalleled kinds of social sorting and segmentation which could have unfair effects. The people being profiled may have to pay higher prices, they could miss out on important offers or opportunities, and they may run increased risks because catering to their needs is less profitable (Lyon 2003). In most cases they will not be aware of this, since profiling practices are mostly invisible and the profiles themselves are often protected by intellectual property or trade secrets. This poses a threat to the equality and solidarity of citizens. On a larger scale, it might cause the segmentation of society.
One of the problems underlying potential violations of privacy and non-discrimination is that the process of profiling is more often than not invisible to those being profiled. This creates difficulties in that it becomes hard, if not impossible, to contest the application of a particular group profile. This disturbs principles of due process: if a person has no access to the information on the basis of which she is withheld benefits or attributed certain risks, she cannot contest the way she is being treated (Steinbock 2005).
Profiles can be used against people when they fall into the hands of parties who are not entitled to access or use them. An important issue related to these breaches of security is identity theft.
When the application of profiles causes harm, it has to be determined who is to be held accountable: the software programmer, the profiling service provider, or the user of the profile? This issue of liability is especially complex where the application of profiles and the decisions based on them have themselves become automated, as in autonomic computing or ambient intelligence.
- Anderson, Chris (2008). "The End of Theory: The Data Deluge Makes the Scientific Method Obsolete". Wired Magazine 16 (7).
- Custers, B.H.M. (2004). "The Power of Knowledge". Tilburg: Wolf Legal Publishers.
- Elmer, G. (2004). "Profiling Machines. Mapping the Personal Information Economy". MIT Press.
- Fayyad, U.M.; Piatetsky-Shapiro, G.; Smyth, P. (1996). "From Data Mining to Knowledge Discovery in Databases". AI Magazine 17 (3): 37–54.
- Geradts, Zeno; Sommer, Peter (2008). "D6.7c: Forensic Profiling". FIDIS Deliverables 6 (7c).
- Harcourt, B. E. (2006). "Against Prediction. Profiling, Policing, and Punishing in an Actuarial Age". The University of Chicago Press, Chicago and London.
- Hildebrandt, Mireille; Gutwirth, Serge (2008). Profiling the European Citizen. Cross Disciplinary Perspectives. Springer, Dordrecht. doi:10.1007/978-1-4020-6914-7. ISBN 978-1-4020-6913-0.
- Jaquet-Chiffelle, David-Olivier (2008). "Reply: Direct and Indirect Profiling in the Light of Virtual Persons. To: Defining Profiling: A New Type of Knowledge?". In Hildebrandt, Mireille; Gutwirth, Serge. Profiling the European Citizen. Springer Netherlands. pp. 17–45. doi:10.1007/978-1-4020-6914-7_2.
- Kephart, J. O.; Chess, D. M. (2003). "The Vision of Autonomic Computing". Computer 36 (1 January): 96–104. doi:10.1109/MC.2003.1160055.
- Leopold, N.; Meints, M. (2008). "Profiling in Employment Situations (Fraud)". In Hildebrandt, Mireille; Gutwirth, Serge. Profiling the European Citizen. Springer Netherlands. pp. 217–237. doi:10.1007/978-1-4020-6914-7_12.
- Lessig, L. (2006). "Code 2.0". Basic Books, New York.
- Lyon, D. (2003). "Surveillance as Social Sorting: Privacy, Risk, and Digital Discrimination". Routledge.
- Nabeth, Thierry (2008). "User Profiling for Attention Support for School and Work". In Hildebrandt, Mireille; Gutwirth, Serge. Profiling the European Citizen. Springer Netherlands. pp. 185–200. doi:10.1007/978-1-4020-6914-7_10.
- Schwartz, P. (2000). "Beyond Lessig's Code for the Internet Privacy: Cyberspace Filters, Privacy-Control and Fair Information Practices". Wis. Law Review 743: 743–788.
- Solove, D.J. (2004). The Digital Person. Technology and Privacy in the Information Age. New York, New York University Press.
- Steinbock, D. (2005). "Data Matching, Data Mining, and Due Process". Georgia Law Review 40 (1): 1–84.
- Vedder, A. (1999). "KDD: The Challenge to Individualism". Ethics and Information Technology 1 (4): 275–281. doi:10.1023/A:1010016102284.
- Weiser, M. (1991). "The Computer for the Twenty-First Century". Scientific American 265 (3): 94–104. doi:10.1038/scientificamerican0991-94.
- Zarsky, T. (2002-3). ""Mine Your Own Business!": Making the Case for the Implications of the Data Mining of Personal Information in the Forum of Public Opinion". Yale Journal of Law and Technology 5 (4): 17–47.