# Representativeness heuristic


The representativeness heuristic is used when making judgments about the probability of an event under uncertainty.[1] It is one of a group of heuristics (simple rules governing judgment or decision making) proposed by psychologists Amos Tversky and Daniel Kahneman in the early 1970s. They defined representativeness as "the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated".[2][3] When people rely on representativeness to make judgments, they are likely to judge wrongly, because the fact that something is more representative does not make it more likely.[4] The heuristic is used because it is an easy computation.[4] The problem is that people overestimate its ability to accurately predict the likelihood of an event.[5] It can therefore result in neglect of relevant base rates and other cognitive biases.[6][7]

## Determinants of representativeness

### Similarity

When judging the representativeness of a new stimulus or event, people usually pay attention to the degree of similarity between the stimulus or event and a standard or process (Kahneman & Tversky, 1972). Nilsson, Juslin and Olsson (2008) found this to be influenced by the exemplar account of memory (concrete examples of a category are stored in memory), so that new instances were classed as representative if they were highly similar to a category as well as if they were frequently encountered.[8]

### Randomness

Irregularity and local representativeness affect judgments of randomness. Things that do not appear to have any logical sequence are regarded as representative of randomness and thus more likely to occur. For example, THTHTH as a series of coin tosses would not be considered representative of randomly generated coin tosses, as it is too well ordered (Kahneman & Tversky, 1972).

Local representativeness is an assumption wherein people rely on the law of small numbers, whereby small samples are perceived to represent their population to the same extent as large samples (Tversky and Kahneman, 1971). A small sample which appears randomly distributed would reinforce the belief, under the assumption of local representativeness, that the population is randomly distributed. Conversely, a small sample with a skewed distribution would weaken this belief. If a coin toss is repeated several times and the majority of the results consists of 'heads', the assumption of local representativeness will cause the observer to believe the coin is biased toward 'heads'.
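The law of small numbers can be illustrated with a simple simulation (a sketch of my own, not from the original studies): runs of a fair coin are far more likely to look "biased" when the sample is small than when it is large, even though the coin never changes.

```python
import random

random.seed(42)

def extreme_run_rate(n_tosses, n_trials=10_000):
    """Fraction of simulated runs of n_tosses fair-coin flips whose
    proportion of heads deviates from 50% by more than 10 points."""
    extreme = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(n_tosses))
        if abs(heads / n_tosses - 0.5) > 0.10:
            extreme += 1
    return extreme / n_trials

for n in (10, 100, 1000):
    print(f"{n:>4} tosses: {extreme_run_rate(n):.1%} of runs look 'biased'")
```

Roughly a third of 10-toss runs deviate from 50% heads by more than ten points, while almost no 1000-toss runs do; an observer who expects small samples to mirror the population will wrongly read the short runs as evidence of bias.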

## Examples

### Tom W.

In a study done in 1973, Kahneman and Tversky gave their subjects the following information:

"Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to feel little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense."

The subjects were then divided into three groups who were given different decision tasks:

• One group of subjects was asked how similar Tom W. was to a student in one of nine types of college graduate majors (business administration, computer science, engineering, humanities/education, law, library science, medicine, physical/life sciences, or social science/social work). Most subjects associated Tom W. with an engineering student, and thought he was least like a student of social science/social work.
• A second group of subjects was asked instead to estimate the probability that Tom W. was a grad student in each of the nine majors. The probabilities were in line with the judgments from the previous group.
• A third group of subjects was asked to estimate the proportion of first-year grad students there were in each of the nine majors.

The second group's probabilities tracked how representative they judged Tom W. to be of each major, and depended far less on the base rate probability of being that kind of student in the first place (the third group's estimates). Had the subjects based their answers on the base rates, their estimated probability that Tom W. was an engineer would have been much lower, as there were few engineering grad students at the time.

### The taxicab problem

In another study done by Tversky and Kahneman, subjects were given the following problem:

"A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue.
A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time.
What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?"

Most subjects gave probabilities over 50%, and some gave answers over 80%. The correct answer, found using Bayes' theorem, is lower than these estimates:

• There is a 12% chance (15% times 80%) of the witness correctly identifying a blue cab.
• There is a 17% chance (85% times 20%) of the witness incorrectly identifying a green cab as blue.
• There is therefore a 29% chance (12% plus 17%) the witness will identify the cab as blue.
• This results in a 41% chance (12% divided by 29%) that the cab identified as blue is actually blue.
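The calculation above can be written out directly with Bayes' theorem; the following sketch reproduces the steps and numbers from the list:

```python
# Bayes' theorem applied to the taxicab problem above.
p_blue, p_green = 0.15, 0.85        # base rates of cab colours
p_correct = 0.80                    # witness reliability

p_says_blue_given_blue = p_correct          # 0.80
p_says_blue_given_green = 1 - p_correct     # 0.20

# Total probability the witness says "Blue":
p_says_blue = (p_says_blue_given_blue * p_blue +
               p_says_blue_given_green * p_green)   # 0.12 + 0.17 = 0.29

# Posterior probability the cab really was Blue:
p_blue_given_says_blue = p_says_blue_given_blue * p_blue / p_says_blue

print(f"P(witness says Blue)        = {p_says_blue:.2f}")          # 0.29
print(f"P(Blue | witness says Blue) = {p_blue_given_says_blue:.2f}")  # 0.41
```

Subjects who answer "80%" are reporting the witness's reliability, P(says Blue | Blue), in place of the posterior P(Blue | says Blue); the low base rate of Blue cabs pulls the correct answer down to about 41%.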

Representativeness is cited in the similar effect of the gambler's fallacy, the regression fallacy and the conjunction fallacy.

## Consequences of rule violation

The representativeness heuristic violates one of the fundamental properties of probability: extensionality. For example, participants were provided with a description of Linda, who resembles a feminist. Then participants were asked to evaluate the probability of her being a feminist, the probability of her being a bank teller, and the probability of her being both a bank teller and a feminist. Probability theory dictates that the probability of being both a bank teller and a feminist (the conjunction of two sets) must be less than or equal to the probability of being a bank teller alone, and likewise less than or equal to the probability of being a feminist alone. However, participants judged the conjunction (bank teller and feminist) as being more probable than being a bank teller alone.[9]
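The conjunction rule can be made concrete with invented head-counts (the numbers below are purely illustrative; only the subset relationship matters):

```python
# Hypothetical head-counts in a population of 1,000 people; the numbers
# are made up for illustration -- only the subset relationship matters.
n = 1000
bank_tellers = 50
feminist_bank_tellers = 10   # necessarily a subset of the bank tellers

p_teller = bank_tellers / n                        # 0.05
p_teller_and_feminist = feminist_bank_tellers / n  # 0.01

# Conjunction rule: the joint event can never be more probable
# than either of its component events.
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)
```

Because every feminist bank teller is also a bank teller, no way of filling in the counts can make the conjunction more probable than "bank teller" alone, which is exactly what the participants' judgments implied.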

The use of the representativeness heuristic will likely lead to violations of Bayes' Theorem. Bayes' Theorem states:

$P(H|D) = \frac{P(D | H)\, P(H)}{P(D)}.$

However, judgments by representativeness only look at the resemblance between the hypothesis and the data, thus inverse probabilities are equated:

$P(H|D)=P(D|H)$

As can be seen, the base rate P(H) is ignored in this equation, leading to the base rate fallacy. This was explicitly tested by Dawes, Mirels, Gold and Donahue (1993),[10] who had people judge both the base rate of people who had a particular personality trait and the probability that a person who had a given personality trait had another one. For example, participants were asked how many people out of 100 answered true to the question "I am a conscientious person" and also, given that a person answered true to this question, how many would answer true to a different personality question. They found that participants equated inverse probabilities (e.g., $P(\text{conscientious} \mid \text{neurotic}) = P(\text{neurotic} \mid \text{conscientious})$) even when it was obvious that they were not the same (the two questions were answered immediately after each other).
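Why the inverse probabilities differ is easy to see from a joint-frequency table. The toy table below (invented numbers, not Dawes et al.'s data) has unequal base rates, so the two conditional probabilities come apart:

```python
# Toy 2x2 joint table over 100 people (invented for illustration):
#                      neurotic    not neurotic
# conscientious            10            60
# not conscientious        20            10
both = 10                 # conscientious AND neurotic
conscientious = 10 + 60   # base rate: 70 of 100
neurotic = 10 + 20        # base rate: 30 of 100

p_neurotic_given_consc = both / conscientious   # 10/70, about 0.14
p_consc_given_neurotic = both / neurotic        # 10/30, about 0.33

# The inverse probabilities coincide only when the base rates coincide;
# here the base rates (70 vs 30) differ, so the probabilities differ too.
print(f"P(neurotic | conscientious) = {p_neurotic_given_consc:.2f}")
print(f"P(conscientious | neurotic) = {p_consc_given_neurotic:.2f}")
```

Judging by representativeness amounts to reading the similarity P(D|H) and reporting it as P(H|D), which is only harmless in the special case where the base rates P(H) and P(D) happen to be equal.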

## Disjunction fallacy

In addition to extensionality violation, base-rate neglect, and the conjunction fallacy, the use of representativeness heuristic may lead to a disjunction fallacy. From probability theory the disjunction of two events is at least as likely as either of the events individually. For example, the probability of being either a physics or biology major is at least as likely as being a physics major, if not more likely. However, when a personality description (data) seems to be very representative of a physics major (e.g., pocket protector) over a biology major, people judge that it is more likely for this person to be a physics major than a natural sciences major (which is a superset of physics).

Further evidence that the representativeness heuristic may cause the disjunction fallacy comes from Bar-Hillel and Neter (1986).[11] They found that people judge a person who is highly representative of being a statistics major (e.g., highly intelligent, does math competitions) as being more likely to be a statistics major than a social sciences major (superset of statistics), but they do not think that he is more likely to be a Hebrew language major than a humanities major (superset of Hebrew language). Thus, only when the person seems highly representative of a category is that category judged as more probable than its superordinate category. These incorrect appraisals remained even in the face of losing real money in bets on probabilities.

## References

1. ^ Kahneman & Tversky, 1972
2. ^ Kahneman, Daniel; Tversky, Amos (1972). "Subjective probability: A judgment of representativeness". In Kahneman, Slovic, Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
3. ^ Kahneman, D (1972). Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.
4. ^ a b Tversky & Kahneman, 1982
5. ^ Fortune & Goodie, 2011
6. ^ Judgment under Uncertainty: Heuristics and Biases. Science, New Series, Vol. 185, No. 4157, pp. 1124-1131
7. ^ Human Inference: Strategies and Shortcomings of Social Judgment. Prentice Hall, Englewood Cliffs NJ, pp.115-118
8. ^ Nilsson, H.; Juslin, P.; Olsson, H. (2008). "Exemplars in the mist: The cognitive substrate of the representativeness heuristic". Scandinavian Journal of Psychology, 49, 201–212. doi:10.1111/j.1467-9450.2008.00646.x.
9. ^ Tversky, A., Kahneman, D. (1983). "Extensional versus intuitive reasoning: The conjunction fallacy in probability judgments". Psychological Review, 90, 293-315.
10. ^ Dawes, Mirels, Gold, Donahue (1993). "Equating inverse probabilities in implicit personality judgments". Psychological Science, 4(6), 396-400.
11. ^ Bar-Hillel, M., Neter, E. (1986). "How alike is it? versus how likely is it?: A disjunction fallacy in probability judgments". Journal of Personality and Social Psychology, 65, 1119-1131.
• Baron, J. (2000). Thinking and Deciding (3d ed.). Cambridge University Press.
• Plous, S. (1993). The Psychology of Judgment and Decision Making. New York: McGraw-Hill.
• Kahneman, D., & Tversky, A. (1973). On the Psychology of Prediction. Psychological Review, 80, 237-251.
• Tversky, A., & Kahneman, D. (1982). Evidential Impact of Base Rates. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under Uncertainty: Heuristics and Biases. Cambridge: Cambridge University Press.