Categorical perception

From Wikipedia, the free encyclopedia

Categorical perception is the experience of percept invariances in sensory phenomena that can be varied along a continuum.

Multiple views of a face, for example, are mapped onto a common identity, visually distinct objects such as cars are mapped into the same category and variable speech sounds are perceived as discrete phonemes. Within a particular part of the continuum, the percepts are perceived as the same, with a sharp change of perception at the position of the continuum where there is identity change. Categorical perception is opposed to continuous perception, the perception of different sensory phenomena as being located on a smooth continuum.

How neural systems in the brain carry out this many-to-one mapping is a major issue in cognitive neuroscience. Categorical perception (CP) can be inborn or can be induced by learning. Initially it was taken to be peculiar to speech and color perception. However, CP turns out to be general, and related to how neural networks in our brains detect the features that allow us to sort the things in the world into separate categories: by "warping" perceived similarities and differences, they compress some things into the same category and separate others into different ones.

An area in the left prefrontal cortex has been localized as the place in the brain responsible for phonetic and possibly other types of categorical perception.[1]


A category,[2] or kind, is a set of things. Membership in the category may be (1) all-or-none, as with "bird": Something either is a bird or it isn't a bird; a penguin is 100% bird, a dog is 100% not-bird. In this case we would call the category "categorical." Or membership might be (2) a matter of degree, as with "big": Some things are more big and some things are less big. In this case the category is "continuous" (or rather, degree of membership corresponds to some point along a continuum). There are range or context effects as well: elephants are relatively big in the context of animals, relatively small in the context of bodies in general, if we include planets.

Many categories, however, particularly concrete sensori-motor categories (things we can see and touch), are a mixture of the two: categorical at an everyday level of magnification, but continuous at a more microscopic level. An example of this is color categories: Central reds are clearly reds, and not shades of yellow. But in the orange region of the spectral continuum, red/yellow is a matter of degree; context and contrast effects can also move these regions around somewhat. Perhaps even with "bird," an artist or genetic-engineer could design intermediate cases in which their "birdness" was only a matter of degree.

Resolving the "blooming, buzzing confusion"

Categories are important because they determine how we see and act upon the world. As William James noted, we do not see a continuum of "blooming, buzzing confusion" but an orderly world of discrete objects. Some of these categories are "prepared" in advance by evolution: The frog's brain is born already able to detect "flies"; it needs only normal exposure rather than any special learning in order to recognize and catch them. Humans have such innate category-detectors too: The human face itself is probably an example. So too are our basic color categories, although one implication of the Sapir–Whorf hypothesis (Whorf 1956; also called the "linguistic relativity" hypothesis) might be that colors are determined by how culture and language happen to subdivide the spectrum.

But if one opens up a dictionary at random and picks out a content word, chances are that it names a category we have learned to detect, rather than one that our brains were innately prepared in advance by evolution to detect. The generic human face may be an innate category for us, perhaps even the various basic emotions it can express, but surely all the specific people we know and can name are not. "Red" and "yellow" may be inborn, but "scarlet" and "crimson"?

The motor theory of speech perception

And what about the very building blocks of the language we use to name categories: are our speech sounds (/ba/, /da/, /ga/) innate or learned? The first question we must answer about them is whether they are perceived categorically at all, or are merely arbitrary points along a continuum. It turns out that if one analyzes the sound spectrogram of ba and pa, for example, both are found to lie along an acoustic continuum called "voice-onset-time." With a technique similar to the one used in "morphing" visual images continuously into one another, it is possible to "morph" a /ba/ gradually into a /pa/ and beyond by gradually increasing the voicing parameter.

Alvin Liberman and colleagues[3] reported that when people listen to sounds that vary along such a continuum, they hear only /ba/s and /pa/s, nothing in between (although that particular paper did not discuss voice onset time). This effect, in which a perceived quality jumps abruptly from one category to another at a certain point along a continuum instead of changing gradually, he dubbed "categorical perception" (CP). He suggested that CP was unique to speech, that CP made speech special, and, in what came to be called "the motor theory of speech perception," he suggested that CP's explanation lay in the anatomy of speech production.

According to the (now abandoned) motor theory of speech perception, the reason people perceive an abrupt change between /ba/ and /pa/ is that the way we hear speech sounds is influenced by how we produce them when we speak. What varies along this continuum is voice-onset-time: the "b" in /ba/ is voiced and the "p" in /pa/ is not. But unlike the synthetic "morphing" apparatus, the natural vocal apparatus is not capable of producing anything in between ba and pa. So when listeners hear a sound from the voicing continuum, their brains perceive it by trying to match it with what they would have had to do to produce it. Since the only things they can produce are /ba/ and /pa/, they perceive any of the synthetic stimuli along the continuum as either /ba/ or /pa/, whichever is closer. A similar CP effect is found with ba/da; these too lie along a continuum acoustically, but vocally, /ba/ is formed with the two lips, /da/ with the tip of the tongue and the alveolar ridge, and our anatomy does not allow any intermediates.
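The motor-matching account above can be sketched as a toy nearest-prototype rule. The VOT prototype values below are illustrative, not measured data, and the rule itself is a simplification of the theory, not an implementation from any cited paper:

```python
# Toy sketch of the motor theory's core claim: a listener maps any
# point on the voicing continuum onto the nearest sound the vocal
# apparatus could actually produce. Prototype VOT values (in ms)
# are hypothetical, chosen only for illustration.
PROTOTYPES = {"ba": 0.0, "pa": 60.0}

def perceive(vot_ms: float) -> str:
    """Return the category whose producible prototype is closest."""
    return min(PROTOTYPES, key=lambda cat: abs(PROTOTYPES[cat] - vot_ms))

# Every intermediate stimulus snaps to one category or the other;
# nothing in between is ever perceived.
labels = [perceive(v) for v in range(0, 61, 10)]
```

The point of the sketch is that a continuous input space paired with a discrete production repertoire yields discrete percepts, with a sharp boundary midway between the producible prototypes.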

The motor theory of speech perception explained how speech was special and why speech-sounds are perceived categorically: sensory perception is mediated by motor production. Wherever production is categorical, perception will be categorical; where production is continuous, perception will be continuous. And indeed vowel categories like a/u were found to be much less categorical than ba/pa or ba/da.

Acquired distinctiveness

If motor production mediates sensory perception, then one assumes that this CP effect is a result of learning to produce speech. Eimas et al. (1971), however, found that infants already have speech CP before they begin to speak. Perhaps, then, it is an innate effect, evolved to "prepare" us to learn to speak.[4] But Kuhl (1987) found that chinchillas also have "speech CP" even though they never learn to speak, and presumably did not evolve to do so.[5] Lane (1965) went on to show that CP effects can be induced by learning alone, with a purely sensory (visual) continuum in which there is no motor production discontinuity to mediate the perceptual discontinuity.[6] He concluded that speech CP is not special after all, but merely a special case of Lawrence's classic demonstration that stimuli to which you learn to make a different response become more distinctive and stimuli to which you learn to make the same response become more similar.

It also became clear that CP was not quite the all-or-none effect Liberman had originally thought it was: It is not that all /pa/s are indistinguishable and all /ba/s are indistinguishable: We can hear the differences, just as we can see the differences between different shades of red. It is just that the within-category differences (pa1/pa2 or red1/red2) sound/look much smaller than the between-category differences (pa2/ba1 or red2/yellow1), even when the size of the underlying physical differences (voicing, wavelength) are actually the same.

The modern definition

This evolved into the contemporary definition of CP, which is no longer peculiar to speech or dependent on the motor theory: CP occurs whenever perceived within-category differences are compressed and/or between-category differences are separated, relative to some baseline of comparison. The baseline might be the actual size of the physical differences involved, or, in the case of learned CP, it might be the perceived similarity or discriminability within and between categories before the categories were learned, compared to after.

The typical learned CP experiment is as follows: a set of stimuli is first tested (usually in pairs) for similarity or discriminability. For similarity, multidimensional scaling might be used to scale the rated pairwise similarity of the stimuli; for discriminability, same/different judgments and signal detection analysis might be used to estimate their pairwise discriminability. Then the same subjects, or a different set, are trained by trial and error with corrective feedback to sort the stimuli into two or more categories. After the categorization has been learned, similarity or discriminability is tested again and compared against the untrained data. If there is significant within-category compression and/or between-category separation, this is operationally defined as CP.[7]
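The before/after comparison that operationally defines learned CP can be sketched numerically. The stimuli, labels, and perceived positions below are invented for illustration; the index itself is just the compression/separation contrast described above, not a statistic from any cited study:

```python
# Sketch of the operational test for learned CP: compare mean
# between-category vs. within-category pairwise dissimilarity,
# before and after category training. All numbers are illustrative.
from itertools import combinations

def cp_index(percepts, labels):
    """Mean between-category dissimilarity minus mean within-category
    dissimilarity, over all stimulus pairs (1-D perceived positions)."""
    within, between = [], []
    for (p1, l1), (p2, l2) in combinations(zip(percepts, labels), 2):
        (within if l1 == l2 else between).append(abs(p1 - p2))
    return sum(between) / len(between) - sum(within) / len(within)

labels = ["A", "A", "A", "B", "B", "B"]
before = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]        # evenly spaced: no warping
after  = [1.5, 2.0, 2.5, 4.5, 5.0, 5.5]        # compressed within, separated between

# Learned CP is operationally defined as a significant increase in
# this index after category training.
```

On these toy numbers the index rises after "training", which is exactly the compression/separation signature the experiment looks for.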

Identification and discrimination tasks

The study of categorical perception often uses discrimination and identification tasks to probe participants' perception of sounds. Voice onset time (VOT) varies along a continuum rather than being binary. The English bilabial stops /b/ and /p/ are voiced and voiceless counterparts with the same place and manner of articulation, yet native speakers distinguish the sounds primarily by where they fall on the VOT continuum. Participants in these experiments establish clear phoneme boundaries on the continuum; two sounds with different VOTs will be perceived as the same phoneme if they fall on the same side of the boundary.[8] Participants take longer to discriminate between two sounds falling in the same VOT category than between two on opposite sides of the phoneme boundary, even if the difference in VOT is greater between the two in the same category.[9]


In a categorical perception identification task, participants often must identify stimuli, such as speech sounds. An experimenter testing the perception of the VOT boundary between /p/ and /b/ may play several sounds falling on various parts of the VOT continuum and ask volunteers whether they hear each sound as /p/ or /b/.[10] In such experiments, sounds on one side of the boundary are heard almost universally as /p/ and on the other as /b/. Stimuli on or near the boundary take longer to identify and are reported differently by different volunteers, but are perceived as either /b/ or /p/, rather than as a sound somewhere in the middle.[8]
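The identification data such an experiment yields are typically summarized by a steep, sigmoid-shaped labeling curve. A minimal sketch follows; the boundary location and slope are hypothetical parameters, not measured values:

```python
# Sketch of an identification function: the probability of reporting
# /p/ rises steeply around the category boundary rather than changing
# linearly along the VOT continuum. BOUNDARY_MS and SLOPE are assumed
# illustrative values, not data from the cited experiments.
import math

BOUNDARY_MS = 25.0   # hypothetical English /b/-/p/ VOT boundary
SLOPE = 0.5          # steepness of the category transition

def p_identified_as_p(vot_ms: float) -> float:
    return 1.0 / (1.0 + math.exp(-SLOPE * (vot_ms - BOUNDARY_MS)))

# Far from the boundary, identification is near-unanimous; at the
# boundary itself, responses split across volunteers.
```

The near-vertical middle of this curve is what the text means by stimuli "on or near the boundary" being reported differently by different volunteers, while stimuli well away from it are labeled almost unanimously.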


A simple AB discrimination task presents participants with two stimuli, and participants must decide whether the two are identical.[10] Predictions for a discrimination task in an experiment are often based on the preceding identification task. An ideal discrimination experiment validating categorical perception of stop consonants would find volunteers correctly discriminating stimuli that fall on opposite sides of the boundary far more often, while discriminating at chance level between stimuli on the same side of the boundary.[9]

In an ABX discrimination task, volunteers are presented with three stimuli. A and B must be distinct stimuli and volunteers decide which of the two the third stimulus X matches. This discrimination task is much more common than a simple AB task.[10][9]
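One classic way to link the two tasks is to predict discrimination from identification: if listeners covertly label A, B, and X and simply guess whenever the labels agree, the predicted ABX accuracy depends only on the two stimuli's identification probabilities. The formula below is a simplified sketch of that covert-labeling model, not code taken from the cited papers:

```python
# Covert-labeling prediction for ABX discrimination: with p1 and p2
# the probabilities that stimuli A and B are identified as (say) /p/,
# pure categorical perception predicts
#     P(correct) = 0.5 + (p1 - p2)**2 / 2
# i.e., chance performance whenever the two stimuli are labeled alike.

def predicted_abx_accuracy(p1: float, p2: float) -> float:
    return 0.5 + (p1 - p2) ** 2 / 2

# Two stimuli on the same side of the boundary (p1 close to p2) are
# predicted to be discriminated at roughly chance level; a
# cross-boundary pair is not.
same_side  = predicted_abx_accuracy(0.95, 0.90)
cross_pair = predicted_abx_accuracy(0.90, 0.10)
```

This captures the finding quoted above: within-category pairs hover near chance even when their physical difference is large, while cross-boundary pairs are discriminated well.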

The Whorf hypothesis

According to the Sapir–Whorf hypothesis (of which Lawrence's acquired similarity/distinctiveness effects would simply be a special case), language affects the way that people perceive the world. For example, colors are perceived categorically only because they happen to be named categorically: our subdivisions of the spectrum are arbitrary, learned, and vary across cultures and languages. But Berlin & Kay (1969) suggested that this was not so: not only do most cultures and languages subdivide and name the color spectrum the same way, but even for those who don't, the regions of compression and separation are the same.[11] We all see blues as more alike and greens as more alike, with a fuzzy boundary in between, whether or not we have named the difference. This view has been challenged in a review article by Regier and Kay (2009), who distinguish between the questions "1. Do color terms affect color perception?" and "2. Are color categories determined by largely arbitrary linguistic convention?". They report evidence that linguistic categories, stored in the left hemisphere of the brain for most people, do affect categorical perception, but primarily in the right visual field, and that this effect is eliminated by a concurrent verbal interference task.[12]

Universalism, in contrast to the Sapir-Whorf hypothesis, posits that perceptual categories are innate and unaffected by the language that one speaks.[13]


Support for the Sapir-Whorf hypothesis comes from instances in which speakers of one language demonstrate categorical perception differently from speakers of another language. Examples of such evidence are provided below:

Regier and Kay (2009) reported evidence that linguistic categories affect categorical perception primarily in the right visual field.[14] The right visual field projects to the left hemisphere of the brain, which also controls language faculties. Davidoff (2001) presented evidence that in color discrimination tasks, native English speakers discriminated more easily between color stimuli across a given blue-green boundary than within the same side, but did not show CP when given the same task with the Berinmo categories "nol" and "wor"; Berinmo speakers showed the opposite pattern.[15]

A popular theory in current research is "weak Whorfianism": although there is a strong universal component to perception, cultural differences still have an impact. For example, a 1998 study found that while there was evidence of universal color perception between speakers of Setswana and English, there were also marked differences between the two language groups.[16]


Critics of the Sapir-Whorf hypothesis point to evidence that categorically perceived objects are defined in the same ways in different languages, which would indicate that language does not affect speakers' perception. Those who hold this position can be considered Universalists, who believe that perception of categories is innate. Examples of research supporting the Universalist view are given below:

According to Berlin & Kay (1969), most cultures and languages subdivide and name the color spectrum the same way, and the regions of compression and separation are the same even for those that don't.[17] Blues are seen as more alike and greens as more alike, with a fuzzy boundary in between, regardless of the color terms in a speaker's native language.

Experimental research has explored whether children demonstrate categorical perception differently from adults. Eimas et al. (1971) investigated infants' ability to discriminate between two sounds after being habituated to one.[18] They found that the infants reacted to the difference between two sounds (/p/ and /b/) faster when the sounds belonged to different categories than when they belonged to the same category; adult speakers of English would classify the same-category sounds as the same phoneme. The main result is that infants demonstrate the same category boundaries as adults, even though they do not yet produce the sounds, which suggests that category boundaries are innate.

Other research has more recently been done comparing speakers of different languages. Franklin et al. (2004) found that children's knowledge about terms for color does not affect their perception of colors.[19] This study found that both English and Himba speaking toddlers showed no difference in ability to categorize colors, regardless of the language that they were speaking and how well they knew the color terms of that language.

Evolved and Learned CP

Evolved CP

First, back to vowels. The signature of CP is within-category compression and/or between-category separation. The size of the CP effect is merely a scaling factor; it is this compression/separation "accordion effect" that is CP's distinctive feature. In this respect, the "weaker" CP effect for vowels, whose motor production is continuous rather than categorical but whose perception is by this criterion categorical, is every bit as much a CP effect as the ba/pa and ba/da effects. But, as with colors, the effect looks innate: our sensory category detectors for both color and speech sounds are born already "biased" by evolution, so that our perceived color and speech-sound spectra are already "warped" with these compressions/separations.

Learned CP

The Lane/Lawrence demonstrations, lately replicated and extended by Goldstone (1994), showed that CP can be induced by learning alone.[20] There are also the countless categories cataloged in our dictionaries that are unlikely to be inborn. Nativist theorists such as Fodor [1983] have sometimes seemed to suggest that all of our categories are inborn.[21] There are recent demonstrations that, although the primary color and speech categories may be inborn, their boundaries can be modified or even lost as a result of learning, and weaker secondary boundaries can be generated by learning alone.[22]

In the case of innate CP, our categorically biased sensory detectors pick out their prepared color and speech-sound categories far more readily and reliably than if our perception had been continuous.

Learning is a cognitive process that results in a relatively permanent change in behavior, and it can influence perceptual processing:[23] prior experience or knowledge alters the way in which an individual perceives a given stimulus. In other words, the way something is perceived is changed by how it was seen, observed, or experienced before. The effects of learning on categorical perception can be studied by looking at the processes involved.[24]

Learned categorical perception can be divided into different processes according to the kind of comparison involved: between-category comparisons and within-category comparisons.[25] Between-category comparisons are made between two separate sets of objects; within-category comparisons are made within one set of objects. Between-category comparisons lead to a categorical expansion effect, in which the classifications and boundaries of the category become broader, encompassing a larger set of objects; in other words, the "edge lines" defining the category become wider. Within-category comparisons lead to a categorical compression effect, the narrowing of category boundaries to include a smaller set of objects (the "edge lines" move closer together).[25] Between-category comparisons therefore lead to less rigid category definitions, whereas within-category comparisons lead to more rigid ones.

Another method of comparison is to look at both supervised and unsupervised group comparisons. Supervised groups are those for which categories have been provided, meaning that the category has been defined previously or given a label; unsupervised groups are groups for which categories are created, meaning that the categories will be defined as needed and are not labeled.[26]

In studying learned categorical perception, themes are important: their presence influences category learning and improves its quality, especially when the existing themes are opposites.[26] In learned categorical perception, themes serve as cues for different categories; they designate what to look for when placing objects into their categories. For example, when perceiving shapes, angles are a theme: the number of angles and their size provide information about the shape and cue different categories. Three angles would cue a triangle, whereas four might cue a rectangle or a square. Opposite to the theme of angles is the theme of circularity; the stark contrast between the sharp contour of an angle and the round curvature of a circle makes the categories easier to learn.

Similar to themes, labels are also important to learned categorical perception.[25] Labels are "noun-like" titles that can encourage categorical processing with a focus on similarities.[25] The strength of a label can be determined by three factors: its affective (or emotional) strength, the permeability of its boundaries (how easily they can be broken through), and a judgment of its discreteness (a measure of rigidity).[25] Sources of labels differ: as with unsupervised/supervised categories, labels are either created or already exist.[25][26] Labels affect perception regardless of their source; peers, individuals, experts, cultures, and communities can all create them, and the mere presence of a label matters more than where it came from. There is a positive correlation between the strength of a label (the combination of the three factors) and the degree to which it affects perception: the stronger the label, the more it affects perception.[25]

Cues used in learned categorical perception can foster easier recall and access of prior knowledge in the process of learning and using categories.[26] An item in a category can be easier to recall if the category has a cue for the memory. As discussed, labels and themes both function as cues for categories, and, therefore, aid in the memory of these categories and the features of the objects belonging to them.

There are several brain structures at work that promote learned categorical perception. The areas and structures involved include neurons generally, the prefrontal cortex, and the inferotemporal cortex.[24][27] Neurons are involved in all processes in the brain; they carry messages between brain areas and facilitate the visual and linguistic processing of the category. The prefrontal cortex is involved in "forming strong categorical representations."[24] The inferotemporal cortex has cells that code for different object categories and are tuned along diagnostic category dimensions, the dimensions that distinguish category boundaries.[24]

The learning of categories and categorical perception can be improved through adding verbal labels, making themes relevant to the self, making more separate categories, and by targeting similar features that make it easier to form and define categories.

Learned categorical perception occurs not only in humans but has been demonstrated in other animal species as well. Studies have targeted categorical perception in humans, monkeys, rodents, birds, and frogs.[27][28] These studies have led to numerous discoveries. They focus primarily on learning the boundaries of categories, where inclusion begins and ends, and they support the hypothesis that categorical perception has a learned component.

Computational and neural models

Computational modeling (Tijsseling & Harnad 1997; Damper & Harnad 2000) has shown that many types of category-learning mechanisms (e.g. both back-propagation and competitive networks) display CP-like effects.[29][30] In back-propagation nets, the hidden-unit activation patterns that "represent" an input build up within-category compression and between-category separation as they learn; other kinds of nets display similar effects. CP seems to be a means to an end: Inputs that differ among themselves are "compressed" onto similar internal representations if they must all generate the same output; and they become more separate if they must generate different outputs. The network's "bias" is what filters inputs onto their correct output category. The nets accomplish this by selectively detecting (after much trial and error, guided by error-correcting feedback) the invariant features that are shared by the members of the same category and that reliably distinguish them from members of different categories; the nets learn to ignore all other variation as irrelevant to the categorization.
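The compression/separation effect these simulations report can be illustrated with a far smaller model than the cited networks: a single sigmoid unit trained by error-correcting feedback on a 1-D continuum. Everything below (stimuli, learning rate, epoch count) is illustrative, and the model is a minimal sketch of the warping idea, not a reimplementation of the cited work:

```python
# Train one sigmoid unit by gradient descent (error-correcting
# feedback) to separate two halves of a 1-D continuum, then compare
# representation distances for a within-category pair vs. a
# between-category pair. All parameters are illustrative.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

stimuli = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]   # 1-D continuum
targets = [0, 0, 0, 0, 1, 1, 1, 1]                   # two categories

w, b = 1.0, 0.0
for _ in range(5000):                                # plain per-sample SGD
    for x, t in zip(stimuli, targets):
        y = sigmoid(w * x + b)
        grad = (y - t) * y * (1 - y)                 # squared-error gradient
        w -= 2.0 * grad * x
        b -= 2.0 * grad

rep = [sigmoid(w * x + b) for x in stimuli]
within  = abs(rep[1] - rep[2])   # same category, input gap 0.1
between = abs(rep[3] - rep[4])   # across the boundary, input gap 0.2
# After training, the between-category gap in the representation
# exceeds the within-category gap by far more than the 2:1 ratio in
# the raw inputs: within-category compression, between-category
# separation.
```

The sigmoid's steep middle lands on the category boundary, so inputs that must produce different outputs get pulled apart in the representation while inputs sharing an output get squeezed together, which is the "warping" the text describes.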

Brain basis

Neural data provide correlates of CP and of learning.[31] Differences between event-related potentials recorded from the brain have been found to be correlated with differences in the perceived category of the stimulus viewed by the subject. Neural imaging studies have shown that these effects are localized and even lateralized to certain brain regions in subjects who have successfully learned the category, and are absent in subjects who have not.[32][33]

Categorical perception of speech units has been identified with the left prefrontal cortex, which shows such perception, whereas posterior areas earlier in the processing stream, such as areas in the left superior temporal gyrus, do not.[1]


Both innate and learned CP are sensorimotor effects: The compression/separation biases are sensorimotor biases, and presumably had sensorimotor origins, whether during the sensorimotor life-history of the organism, in the case of learned CP, or the sensorimotor life-history of the species, in the case of innate CP. The neural net I/O models are also compatible with this fact: Their I/O biases derive from their I/O history. But when we look at our repertoire of categories in a dictionary, it is highly unlikely that many of them had a direct sensorimotor history during our lifetimes, and even less likely in our ancestors' lifetimes. How many of us have seen a unicorn in real life? We have seen pictures of them, but what had those who first drew those pictures seen? And what about categories I cannot draw or see (or taste or touch): What about the most abstract categories, such as goodness and truth?

Some of our categories must originate from another source than direct sensorimotor experience, and here we return to language and the Whorf Hypothesis: Can categories, and their accompanying CP, be acquired through language alone? Again, there are some neural net simulation results suggesting that once a set of category names has been "grounded" through direct sensorimotor experience, they can be combined into Boolean combinations (man = male & human) and into still higher-order combinations (bachelor = unmarried & man) which not only pick out the more abstract, higher-order categories much the way the direct sensorimotor detectors do, but also inherit their CP effects, as well as generating some of their own. Bachelor inherits the compression/separation of unmarried and man, and adds a layer of separation/compression of its own.[34][35]
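The grounding-and-combination idea can be sketched in code. The feature predicates below are hypothetical stand-ins for learned sensorimotor detectors (nothing here is taken from the cited simulations); the point is only that Boolean combinations of grounded detectors yield detectors for higher-order categories that were never directly experienced:

```python
# Hypothetical "grounded" detectors, standing in for category
# detectors learned through direct sensorimotor experience.
grounded = {
    "male":      lambda thing: thing.get("sex") == "m",
    "human":     lambda thing: thing.get("species") == "homo",
    "unmarried": lambda thing: not thing.get("married", False),
}

# Higher-order categories defined purely by verbal/Boolean
# combination of already-grounded ones: man = male & human,
# bachelor = unmarried & man.
def man(thing):
    return grounded["male"](thing) and grounded["human"](thing)

def bachelor(thing):
    return grounded["unmarried"](thing) and man(thing)

alice = {"sex": "f", "species": "homo", "married": False}
bob   = {"sex": "m", "species": "homo", "married": False}
```

A composed detector like `bachelor` picks out its category exactly as a directly grounded detector would, which is the sense in which, on this account, it can also inherit the compression/separation biases of its constituents.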

These language-induced CP effects remain to be directly demonstrated in human subjects; so far only learned and innate sensorimotor CP have been demonstrated.[36][37] The latter shows the Whorfian power of naming and categorization in warping our perception of the world. That is enough to rehabilitate the Whorf Hypothesis from its apparent failure on color terms (and perhaps also from its apparent failure on Eskimo snow terms[38]), but to show that it is a full-blown language effect, and not merely a vocabulary effect, it will have to be shown that our perception of the world can also be warped, not just by how things are named, but by what we are told about them.


Emotions are an important characteristic of the human species. An emotion is an abstract concept that is most easily observed by looking at facial expressions. Emotions and their relation to categorical perception are often studied using facial expressions.[39][40][41][42][43] Faces contain a large amount of valuable information.[41]

Emotions are divided into categories because they are discrete from one another: each emotion entails a separate and distinct set of reactions, consequences, and expressions. Feeling and expressing emotions is a natural occurrence and, for some emotions, a universal one. Six basic emotions are considered universal to the human species across age, gender, race, country, and culture, and are considered to be categorically distinct: happiness, disgust, sadness, surprise, anger, and fear.[42] According to the discrete emotions approach, people experience one emotion and not others, rather than a blend.[42] Categorical perception of emotional facial expressions does not require lexical categories.[42] Of these six emotions, happiness is the most easily identified.

The perception of emotions from facial expressions reveals slight gender differences[39] based on the definitions and boundaries of the categories (essentially, the "edge line" where one emotion ends and the next begins). Anger is perceived more easily and quickly when displayed by males, while the same effect is seen for happiness when portrayed by females.[39] These effects are observed because the categories of the two emotions (anger and happiness) are more closely associated with other features of those genders.

Although a verbal label can be attached to an emotion, labels are not required to perceive emotions categorically: even before they acquire language, infants can distinguish emotional expressions. The categorical perception of emotions thus appears to rest on a "hardwired mechanism".[42] There is also evidence from cultures that lack a verbal label for a specific emotion but whose members still perceive it categorically as its own emotion, discrete and isolated from others.[42] The categorical perception of emotions has also been studied by tracking eye movements, which reveal an implicit response with no verbal requirement, since the eye-movement response involves only the movement itself and no subsequent verbal report.[40]

The categorical perception of emotions is sometimes the result of joint processing, with other factors involved: emotional expression and invariable features (features that remain relatively consistent) often work together.[41] Race is one invariable feature that contributes to categorical perception in conjunction with expression; race can also be considered a social category.[41] Emotional categorical perception can also be seen as a mix of categorical and dimensional perception, where dimensional perception involves visual imagery; categorical perception occurs even when processing is dimensional.[43]

References


  1. ^ a b Myers, EB; Blumstein, SE; Walsh, E; Eliassen, J.; Batton, D; Kirk, JS (2009). "Inferior frontal regions underlie the perception of phonetic category invariance". Psychol Sci. 20 (7): 895–903. PMC 2851201. PMID 19515116. doi:10.1111/j.1467-9280.2009.02380.x. 
  2. ^ Harnad, Stevan (2005). "To Cognize is to Categorize: Cognition is Categorization". In C Lefebvre; H. Cohen. Handbook of Categorization in Cognitive Science. New York: Elsevier Press. 
  3. ^ Liberman, A. M., Harris, K. S., Hoffman, H. S. & Griffith, B. C. (1957). "The discrimination of speech sounds within and across phoneme boundaries". Journal of Experimental Psychology. 54 (5): 358–368. PMID 13481283. doi:10.1037/h0044417. 
  4. ^ Eimas, P.D.; Siqueland, E.R.; Jusczyk, P.W. & Vigorito, J. (1971). "Speech perception in infants". Science. 171 (3968): 303–306. PMID 5538846. doi:10.1126/science.171.3968.303. 
  5. ^ Kuhl, P. K. (1987). "The Special-Mechanisms Debate in Speech Perception: Nonhuman Species and Nonspeech Signals". In S. Harnad. Categorical perception: The groundwork of Cognition. New York: Cambridge University Press. 
  6. ^ Lane, H. (1965). "The motor theory of speech perception: A critical review". Psychological Review. 72 (4): 275–309. PMID 14348425. doi:10.1037/h0021986. 
  7. ^ Harnad, S. (ed.) (1987). Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press. 
  8. ^ a b Fernández, Eva; Cairns, Helen (2011). Fundamentals of Psycholinguistics. West Sussex, United Kingdom: Wiley-Blackwell. pp. 175–179. ISBN 978-1-4051-9147-0. 
  9. ^ a b c Repp, Bruno (1984). "Categorical Perception: Issues, Methods, Findings" (PDF). Speech and Language: Advances in Basic Research and Practice. 10: 243–335. 
  10. ^ a b c Brandt, Jason; Rosen, Jeffrey (1980). "Auditory Phonemic Perception in Dyslexia: Categorical Identification and Discrimination of Stop Consonants" (PDF). Brain and Language. 9: 324–337. 
  11. ^ Berlin, B.; Kay, P. (1969). Basic color terms: Their universality and evolution. Berkeley: University of California Press. ISBN 1-57586-162-3. 
  12. ^ Regier, T.; Kay, P. (2009). "Language, thought, and color: Whorf was half right.". Trends in Cognitive Sciences. 13 (10): 439–447. PMID 19716754. doi:10.1016/j.tics.2009.07.001. 
  13. ^ Penn, Julia M. (1972). Linguistic relativity versus innate ideas: The origins of the Sapir-Whorf hypothesis in German thought. Walter de Gruyter. p. 11. 
  14. ^ Regier, T.; Kay, P. (2009). "Language, thought, and color: Whorf was half right.". Trends in Cognitive Sciences. 13 (10): 439–447. PMID 19716754. doi:10.1016/j.tics.2009.07.001. 
  15. ^ Davidoff, Jules (September 2001). "Language and perceptual categorisation" (PDF). Trends in Cognitive Sciences. 5: 382–387. 
  16. ^ Davies, I.R.L.; Sowden, P.T.; Jerrett, D.T.; Jerrett, T.; Corbett, G.G. (1998). "A cross-cultural study of English and Setswana speakers on a colour triads task: A test of the Sapir-Whorf hypothesis". British Journal of Psychology. 89: 1–15. 
  17. ^ Berlin, B.; Kay, P. (1969). Basic color terms: Their universality and evolution. Berkeley: University of California Press. ISBN 1-57586-162-3. 
  18. ^ Eimas, P. D.; Siqueland, E.R.; Jusczyk, P.; Vigorito, J. (Jan 1971). "Speech Perception in Infants". Science. 171: 303–306. 
  19. ^ Franklin, A.; Clifford, A.; Williamson, I.D. (2004). "Color term knowledge does not affect categorical perception of color in toddlers". J. Experimental Child Psychology. 90: 114–141. 
  20. ^ Goldstone, R. L. (1994). "Influences of categorization on perceptual discrimination". Journal of Experimental Psychology. General. 123 (2): 178–200. PMID 8014612. doi:10.1037/0096-3445.123.2.178. 
  21. ^ Fodor, J. (1983). The modularity of mind. MIT Press. ISBN 0-262-06084-1. 
  22. ^ Roberson, D., Davies, I. & Davidoff, J. (2000). "Color categories are not universal: Replications and new evidence from a stone-age culture". Journal of Experimental Psychology. General. 129 (3): 369–398. PMID 11006906. doi:10.1037/0096-3445.129.3.369. 
  23. ^ Notman, Leslie; Paul Sowden; Emre Ozgen (2005). "The Nature of Learned Categorical Perception Effects: A Psychophysical Approach". Cognition. 95: B1–B14. PMID 15694641. doi:10.1016/j.cognition.2004.07.002. 
  24. ^ a b c d Casey, Matthew; Paul Sowden (2012). "Modeling learned categorical perception in human vision". Neural Networks. 33: 114–126. doi:10.1016/j.neunet.2012.05.001. 
  25. ^ a b c d e f g Foroni, Francesco; Myron Rothbart (2011). "Category Boundaries and Category Labels: When Does a Category Name Influence the Perceived Similarity of Category Members". Social Cognition. 29 (5): 547–576. doi:10.1521/soco.2011.29.5.547. 
  26. ^ a b c d Clapper, John (2012). "The Effects of Prior Knowledge on Incidental Category Learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 38: 1558–1577. doi:10.1037/a0028457. 
  27. ^ a b Prather, Jonathan; Stephen Nowicki; Rindy Anderson; Susan Peters; Richard Mooney (2009). "Neural Correlates of Categorical Perception in Learned Vocal Communication". Nature Neuroscience. 12 (2): 221–228. PMC 2822723. PMID 19136972. doi:10.1038/nn.2246. 
  28. ^ Eriksson, Jan L.; Villa, Alessandro E.P. (2006). "Learning of auditory equivalence classes for vowels by rats". Behavioural Processes. 73 (3): 348–359. PMID 16997507. doi:10.1016/j.beproc.2006.08.005. 
  29. ^ Damper, R.I.; Harnad, S. (2000). "Neural Network Modeling of Categorical Perception". Perception and Psychophysics. 62 (4): 843–867. PMID 10883589. doi:10.3758/BF03206927. 
  30. ^ Tijsseling, A.; Harnad, S. (1997). "Warping Similarity Space in Category Learning by Backprop Nets". In Ramscar, M.; Hahn, U.; Cambouropoulos, E.; Pain, H. Proceedings of SimCat 1997: Interdisciplinary Workshop on Similarity and Categorization. Department of Artificial Intelligence, Edinburgh University. pp. 263–269. 
  31. ^ Sharma, A.; Dorman, M.F. (1999). "Cortical auditory evoked potential correlates of categorical perception of voice-onset time". Journal of the Acoustical Society of America. 106 (2): 1078–1083. PMID 10462812. doi:10.1121/1.428048. 
  32. ^ Seger, Carol A.; Poldrack, Russell A.; Prabhakaran, Vivek; Zhao, Margaret; Glover, Gary H.; Gabrieli, John D. E. (2000). "Hemispheric asymmetries and individual differences in visual concept learning as measured by functional MRI". Neuropsychologia. 38 (9): 1316–1324. PMID 10865107. doi:10.1016/S0028-3932(00)00014-2. 
  33. ^ Raizada, R.D.S.; Poldrack, R.A. (2007). "Selective Amplification of Stimulus Differences during Categorical Processing of Speech". Neuron. 56 (4): 726–740. PMID 18031688. doi:10.1016/j.neuron.2007.11.001. 
  34. ^ Cangelosi, A.; Harnad, S. (2001). "The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories.". Evolution of Communication. 4 (1): 117–142. doi:10.1075/eoc.4.1.07can. 
  35. ^ Cangelosi A.; Greco A.; Harnad S. (2000). "From robotic toil to symbolic theft: Grounding transfer from entry-level to higher-level categories". Connection Science. 12 (2): 143–162. doi:10.1080/09540090050129763. 
  36. ^ Pevtzow, R.; Harnad, S. (1997). "Warping Similarity Space in Category Learning by Human Subjects: The Role of Task Difficulty". In Ramscar, M.; Hahn, U.; Cambouropolos, E.; Pain, H. Proceedings of SimCat 1997: Interdisciplinary Workshop on Similarity and Categorization. Department of Artificial Intelligence, Edinburgh University. pp. 189–195. 
  37. ^ Livingston, K. Andrews; Harnad, S. (1998). "Categorical Perception Effects Induced by Category Learning". Journal of Experimental Psychology: Learning, Memory, and Cognition. 24 (3): 732–753. doi:10.1037/0278-7393.24.3.732. 
  38. ^ Pullum, G. K. (1989). "The great eskimo vocabulary hoax". Natural Language and Linguistic Theory. 7: 275–281. doi:10.1007/bf00138079. 
  39. ^ a b c Hess, Ursula; Reginald Adams; Robert Kleck (2009). "The Categorical Perception of Emotions and Traits". Social Cognition. 27 (2): 320–326. doi:10.1521/soco.2009.27.2.320. 
  40. ^ a b Cheal, Jenna; M. D. Rutherford (2012). "Mapping Emotion Category Boundaries Using a Visual Expectation Paradigm". Perception. 39: 1514–1525. doi:10.1068/p6683. 
  41. ^ a b c d Otten, Marte; Mahzarin Banaji (2012). "Social Categories Shape the Neural Representation of Emotion: Evidence From a Visual Face Adaptation Task". Frontiers in Integrative Neuroscience. 6. doi:10.3389/fnint.2012.00009. 
  42. ^ a b c d e f Sauter, Disa; Oliver LeGuen; Daniel Haun (2011). "Categorical Perception of Emotional Facial Expressions Does Not Require Lexical Categories". Emotion. 11 (6): 1479–1483. doi:10.1037/a0025336. 
  43. ^ a b Fujimura, Tomomi; Yoshi-Taka Matsuda; Kentaro Katahira; Masato Okada; Kazuo Okanoya (2012). "Categorical and Dimensional Perceptions in Decoding Emotional Facial Expressions". Cognition & Emotion. 26 (4): 587–601. doi:10.1080/02699931.2011.595391. 


Further reading

  • This article is based on material from the article Categorical Perception in the Encyclopedia of Cognitive Science, used here with permission of the author, S. Harnad.
  • Burns, E. M.; Campbell, S. L. (1994). "Frequency and frequency-ratio resolution by possessors of absolute and relative pitch: Examples of categorical perception?". Journal of the Acoustical Society of America. 96 (5 Pt 1): 2704–2719. PMID 7983276. doi:10.1121/1.411447. 
  • Belpaeme, Tony (2002). "Factors influencing the origins of colour categories". Artificial Intelligence Lab, Vrije Universiteit Brussel. Archived from the original on 2006-07-21. 
  • Bimler, D; Kirkland, J. (2001). "Categorical perception of facial expressions of emotion: Evidence from multidimensional scaling.". Cognition & Emotion. 15 (5): 633–658. doi:10.1080/02699930143000077. 
  • Calder, A.J., Young, A.W., Perrett, D.I., Etcoff, N.L. & Rowland, D. (1996). "Categorical perception of morphed facial expressions". Visual Cognition. 3 (2): 81–117. doi:10.1080/713756735. 
  • Campanella, S., Quinet, O., Bruyer, R., Crommelinck, M. & Guerit, J.M. (2002). "Categorical perception of happiness and fear facial expressions: an ERP study". Journal of Cognitive Neuroscience. 14 (2): 210–227. PMID 11970787. doi:10.1162/089892902317236858. 
  • Goldstone, R. L; Lippa, Y. & Shiffrin, R. M. (2001). "Altering object representations through category learning". Cognition. 78 (1): 27–43. PMID 11062321. doi:10.1016/S0010-0277(00)00099-8. 
  • Goldstone, R. L. (1999). "Similarity". In Robert Andrew Wilson; Frank C. Keil. The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press. pp. 763–765. ISBN 978-0-262-73144-7. 
  • Guest, S.; Van Laar, D. (2000). "The structure of colour naming space". Vision Research. 40 (7): 723–734. PMID 10683451. doi:10.1016/S0042-6989(99)00221-7. 
  • Harnad, S. (1990). "The Symbol Grounding Problem". Physica D. 42 (1–3): 335–346. doi:10.1016/0167-2789(90)90087-6. Archived from the original on June 11, 2002. 
  • Kotsoni, E; de Haan, M; Johnson, MH. (2001). "Categorical perception of facial expressions by 7-month-old infants". Perception. 30 (9): 1115–1125. PMID 11694087. doi:10.1068/p3155. 
  • Lawrence, D. H. (1950). "Acquired distinctiveness of cues: II. Selective association in a constant stimulus situation". Journal of Experimental Psychology. 40 (2): 175–188. PMID 15415514. doi:10.1037/h0063217. 
  • Rossion, B., Schiltz, C., Robaye, L., Pirenne, D. & Crommelinck, M. (2001). "How does the brain discriminate familiar and unfamiliar faces? A PET study of face categorical perception". Journal of Cognitive Neuroscience. 13 (7): 1019–1034. PMID 11595103. doi:10.1162/089892901753165917. 
  • Schyns, P. G.; Goldstone, R. L & Thibaut, J. (1998). "Development of features in object concepts". Behavioral and Brain Sciences. 21 (1): 1–54. doi:10.1017/S0140525X98000107. 
  • Steels, L. (2001). "Language games for autonomous robots". IEEE Intelligent Systems. 16 (5): 16–22. doi:10.1109/5254.956077. 
  • Steels, L.; Kaplan, F. (1999). "Bootstrapping Grounded Word Semantics". In Briscoe, T. Linguistic evolution through language acquisition: formal and computational models. Cambridge UK: Cambridge University Press. 
  • Whorf, B. L. (1964). Language, thought and reality. Cambridge, MA: MIT Press. ISBN 0-262-23003-8.