Uncanny valley

From Wikipedia, the free encyclopedia

Image caption: In an experiment involving the human lookalike robot Repliee Q2, the uncovered robotic structure underneath Repliee, and the actual human who was the model for Repliee, the human lookalike triggered the highest level of mirror neuron activity.[1]

The uncanny valley is a hypothesis in the field of aesthetics which holds that when features look and move almost, but not exactly, like those of natural beings, they elicit a response of revulsion among some observers. The "valley" refers to the dip in a graph of observers' comfort level plotted as a function of a subject's human likeness: comfort rises as the subject becomes more lifelike, drops sharply just short of a healthy, natural likeness, and then recovers. Examples can be found in the fields of robotics[2] and 3D computer animation,[3][4] among others.

Etymology

The concept was identified by the robotics professor Masahiro Mori as Bukimi no Tani Genshō (不気味の谷現象) in 1970.[5][6] The term "uncanny valley" first appeared in the 1978 book Robots: Fact, Fiction, and Prediction, written by Jasia Reichardt.[7] The hypothesis has been linked to Ernst Jentsch's concept of the "uncanny" identified in a 1906 essay "On the Psychology of the Uncanny".[8][9][10] Jentsch's conception was elaborated by Sigmund Freud in a 1919 essay entitled "The Uncanny" ("Das Unheimliche").[11]

Hypothesis

Mori's original hypothesis states that as the appearance of a robot is made more human, some observers' emotional response to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong revulsion. However, as the robot's appearance continues to become less distinguishable from that of a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.[12]

This area of repulsive response aroused by a robot with appearance and motion between a "barely human" and "fully human" entity is called the uncanny valley. The name captures the idea that an almost human-looking robot will seem overly "strange" to some human beings, will produce a feeling of uncanniness, and will thus fail to evoke the empathic response required for productive human-robot interaction.[12]
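
The shape of this hypothesized curve can be pictured with a short, purely illustrative Python sketch. The functional form below (a rising baseline minus a Gaussian dip placed just short of full human likeness) and every parameter value in it are arbitrary assumptions chosen only to reproduce the qualitative shape Mori describes; it is not a model fitted to data.

    # Purely illustrative: a schematic of the hypothesized curve, not empirical data.
    import numpy as np
    import matplotlib.pyplot as plt

    likeness = np.linspace(0.0, 1.0, 500)  # 0 = clearly artificial, 1 = fully human

    def affinity(x, valley_center=0.85, valley_width=0.07, valley_depth=1.5):
        # Rising baseline with a sharp dip just short of full human likeness.
        # All parameter values are arbitrary choices made for illustration.
        baseline = x ** 2
        dip = valley_depth * np.exp(-((x - valley_center) ** 2) / (2 * valley_width ** 2))
        return baseline - dip

    plt.plot(likeness, affinity(likeness))
    plt.axhline(0, color="gray", linewidth=0.5)
    plt.xlabel("Human likeness (schematic)")
    plt.ylabel("Affinity (schematic)")
    plt.title("Schematic of the hypothesized uncanny valley")
    plt.show()

Plotting the function shows affinity rising with human likeness, falling into a pronounced dip near the right-hand end of the axis, and recovering to its highest level as the likeness becomes fully human.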

Theoretical basis

Image caption: Hypothesized emotional response of subjects is plotted against the anthropomorphism of a robot, following Mori's statements. The uncanny valley is the region of negative emotional response towards robots that seem "almost human". Movement amplifies the emotional response.[13]

A number of theories have been proposed to explain the cognitive mechanism underlying the phenomenon:

  • Mortality salience. Viewing an "uncanny" robot elicits an innate fear of death and culturally-supported defenses for coping with death’s inevitability.... "[P]artially disassembled androids...play on subconscious fears of reduction, replacement, and annihilation: (1) A mechanism with a human facade and a mechanical interior plays on our subconscious fear that we are all just soulless machines. (2) Androids in various states of mutilation, decapitation, or disassembly are reminiscent of a battlefield after a conflict and, as such, serve as a reminder of our mortality. (3) Since most androids are copies of actual people, they are doppelgängers and may elicit a fear of being replaced, on the job, in a relationship, and so on. (4) The jerkiness of an android’s movements could be unsettling because it elicits a fear of losing bodily control."[16]
  • Pathogen avoidance. Uncanny stimuli may activate a cognitive mechanism that originally evolved to motivate the avoidance of potential sources of pathogens by eliciting a disgust response. "The more human an organism looks, the stronger the aversion to its defects, because (1) defects indicate disease, (2) more human-looking organisms are more closely related to human beings genetically, and (3) the probability of contracting disease-causing bacteria, viruses, and other parasites increases with genetic similarity."[15][17] Thus, the visual anomalies of android robots and animated human characters have the same effect as those of corpses and visibly diseased individuals: the elicitation of alarm and revulsion.[citation needed]
  • Sorites paradoxes. Stimuli with human and nonhuman traits undermine our sense of human identity by linking qualitatively different categories, human and nonhuman, by a quantitative metric, degree of human likeness.[18]
  • Violation of human norms. The uncanny valley may "be symptomatic of entities that elicit a model of a human other but do not measure up to it".[19] If an entity looks sufficiently nonhuman, its human characteristics will be noticeable, generating empathy. However, if the entity looks almost human, it will elicit our model of a human other and its detailed normative expectations. The nonhuman characteristics will be noticeable, giving the human viewer a sense of strangeness. In other words, a robot stuck inside the uncanny valley is no longer being judged by the standards of a robot doing a passable job at pretending to be human, but is instead being judged by the standards of a human doing a terrible job at acting like a normal person. This has been linked to perceptual uncertainty and the theory of predictive coding.[20][21]
  • Religious definition of human identity. The existence of artificial but humanlike entities is viewed by some as a threat to the concept of human identity.[22] An example can be found in the theoretical framework of psychiatrist Irvin Yalom. Yalom explains that humans construct psychological defenses in order to avoid existential anxiety stemming from death. One of these defenses is "specialness", the irrational belief that aging and death as central premises of life apply to all others but oneself.[23] The experience of the very humanlike "living" robot can be so rich and compelling that it challenges humans' notions of "specialness" and existential defenses, eliciting existential anxiety. The creation of human-like, but soulless, beings is considered unwise; the golem in Judaism is a well-known example. Like anthropomorphic robots, a golem may be created with good intentions, but its absence of human empathy and spirit can lead to disaster.[citation needed]
  • Conflicting perceptual cues. The negative affect associated with uncanny stimuli is produced by the activation of conflicting cognitive representations. Perceptual tension occurs when an individual perceives conflicting cues to category membership, such as when a humanoid figure moves like a robot, or has other visible robot features. This cognitive conflict is experienced as psychological discomfort (i.e., "eeriness"), much like the discomfort that is experienced with cognitive dissonance.[24][25] Several studies support this possibility. Burleigh and colleagues demonstrated that faces at the midpoint between human and non-human stimuli produced a level of reported eeriness that diverged from an otherwise linear model relating human-likeness to affect.[26] Yamada et al. found that cognitive difficulty was associated with negative affect at the mid-point of a morphed continuum (e.g., a series of stimuli morphing between a cartoon dog and a real dog).[27] Ferrey et al. demonstrated that the mid-point between images on a continuum anchored by two stimulus categories produced a maximum of negative affect, and found this with both human and non-human entities.[24] Schoenherr and Burleigh provide examples from history and culture that evidence an aversion to hybrid entities, such as the aversion to genetically modified organisms ("Frankenfoods") and transgender individuals.[28] Finally, Moore developed a Bayesian mathematical model that provides a quantitative account of perceptual conflict.[29] There has been some debate as to the precise mechanisms that are responsible. It has been argued that the effect is driven by categorization difficulty,[26][27] perceptual mismatch,[30] frequency-based sensitization,[31] and inhibitory devaluation.[24]
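
As a concrete, deliberately simplified illustration of the midpoint-conflict idea described in the last item above, the following Python sketch treats categorization along a morph continuum as Bayesian inference between two assumed Gaussian likelihoods and uses the entropy of the resulting posterior as a stand-in for categorization difficulty. It is a toy example rather than Moore's published model; the likelihood shapes, their parameters, and the choice of entropy as a "conflict" measure are all assumptions made for illustration.

    # Toy illustration, not Moore's (2012) model: category uncertainty along a
    # morph continuum peaks at the midpoint between the two anchor categories.
    import numpy as np

    morph = np.linspace(0.0, 1.0, 11)  # 0 = pure category A, 1 = pure category B

    def gaussian(x, mean, sd):
        return np.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi))

    # Assumed likelihoods: each category best explains one end of the continuum.
    like_a = gaussian(morph, mean=0.0, sd=0.35)
    like_b = gaussian(morph, mean=1.0, sd=0.35)

    post_a = like_a / (like_a + like_b)  # posterior for category A under equal priors
    post_b = 1.0 - post_a

    # Shannon entropy of the posterior: a simple proxy for categorization conflict.
    conflict = -(post_a * np.log2(post_a) + post_b * np.log2(post_b))

    for m, c in zip(morph, conflict):
        print(f"morph position {m:.1f}: category uncertainty {c:.3f} bits")

Under these assumptions the printed uncertainty is near zero at the two ends of the continuum and reaches its maximum of one bit at the midpoint, mirroring the midpoint peak in reported eeriness found in the studies above.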

Research

One study conducted in 2009 examined the evolutionary mechanism behind the aversion associated with the uncanny valley. A group of five monkeys was shown three images: two different 3D monkey faces (one realistic, one unrealistic) and a real photograph of a monkey's face. The monkeys' eye gaze was used as a proxy for preference or aversion. Because the realistic 3D monkey face was looked at less than either the real photograph or the unrealistic 3D monkey face, this was interpreted as an indication that the monkeys found the realistic 3D face aversive, or otherwise preferred the other two images. As one would expect with the uncanny valley, greater realism led to less positive reactions, and the study suggests that neither human-specific cognitive processes nor human culture is needed to explain the effect. In other words, the aversive reaction to near-realistic stimuli may be evolutionary in origin.[32]

As of 2011, researchers at the University of California, San Diego, and the California Institute for Telecommunications and Information Technology are measuring human brain activations related to the uncanny valley.[33][34] In one study using fMRI, a group of cognitive scientists and roboticists found the biggest differences in brain responses for uncanny robots in the parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain’s visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons. The researchers say they saw, in essence, evidence of mismatch or perceptual conflict.[20] The brain "lit up" when the human-like appearance of the android and its robotic motion "didn’t compute". Ayşe Pınar Saygın, an assistant professor at UCSD, says, "The brain doesn’t seem selectively tuned to either biological appearance or biological motion per se. What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."[35][36][37]

Tinwell et al. (2011) are investigating viewer perception of facial expression and speech, and the uncanny valley, in realistic, human-like characters intended for video games and film.[38] Tinwell et al. (2010) also consider how the uncanny may be exaggerated for antipathetic characters in survival horror games.[39] Building on the body of work already undertaken in android science, this research intends to build a conceptual framework of the uncanny valley using 3D characters generated in a real-time game engine. The goal is to analyze how cross-modal factors of facial expression and speech can exaggerate the uncanny. Tinwell et al. (2011)[40] have also introduced the notion of an unscalable "uncanny wall", the suggestion that a viewer’s discernment in detecting imperfections in realism will keep pace with new technologies for simulating realism. A summary of Angela Tinwell's research on the uncanny valley, the psychological reasons behind it, and how designers may overcome the uncanny in human-like virtual characters is provided in her book The Uncanny Valley in Games and Animation, published by CRC Press.[41]

In computer animation

A number of films that use computer-generated imagery to show characters have been described by reviewers as giving a feeling of revulsion or "creepiness" as a result of the characters looking too realistic.  Examples include:

  • According to roboticist Dario Floreano, the animated baby in Pixar's groundbreaking 1988 short film Tin Toy provoked negative audience reactions, which first led the film industry to take the concept of the uncanny valley seriously.[42][43]
  • Several reviewers of the 2004 animated film The Polar Express called its animation eerie.  CNN.com reviewer Paul Clinton wrote, "Those human characters in the film come across as downright... well, creepy.  So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying."[44] The term "eerie" was used by reviewers Kurt Loder[45] and Manohla Dargis,[46] among others. Newsday reviewer John Anderson called the film's characters "creepy" and "dead-eyed", and wrote that "The Polar Express is a zombie train."[47] Animation director Ward Jenkins wrote an online analysis describing how changes to the Polar Express characters' appearance, especially to their eyes and eyebrows, could have avoided what he considered a feeling of deadness in their faces.[48]
  • In a review of the 2007 animated film Beowulf, New York Times technology writer David Gallagher wrote that the film failed the uncanny valley test, stating that the film's villain, the monster Grendel, was "only slightly scarier" than the "closeups of our hero Beowulf’s face... allowing viewers to admire every hair in his 3-D digital stubble."[4]
  • In the 2010 film The Last Airbender, the character Appa, the flying bison, has been called "uncanny".  Geekosystem's Susana Polo found the character "really quite creepy", noting "that prey animals (like bison) have eyes on the sides of their heads, and so moving them to the front without changing the rest of the facial structure tips us right into the uncanny valley".[49]

By contrast, at least one film, The Adventures of Tintin: The Secret of the Unicorn (2011), was praised by reviewers for avoiding the uncanny valley despite its animated characters' realism. Critic Dana Stevens wrote, "With the possible exception of the title character, the animated cast of Tintin narrowly escapes entrapment in the so-called 'uncanny valley.'"[50] Wired Magazine editor Kevin Kelly wrote of the film, "we have passed beyond the uncanny valley into the plains of hyperreality."[51]

Design principles

A number of design principles have been proposed for avoiding the uncanny valley:

  • Design elements should match in human realism. A robot may look uncanny when human and nonhuman elements are mixed.[52] For example, both a robot with a synthetic voice and a human being with a human voice have been found to be less eerie than a robot with a human voice or a human being with a synthetic voice.[9] For a robot to give a more positive impression, its degree of human realism in appearance should also match its degree of human realism in behavior.[53] If an animated character looks more human than its movement, this gives a negative impression.[54] Human neuroimaging studies also indicate that matching appearance and motion kinematics is important.[20][55][56] (A toy scoring heuristic illustrating this matching principle is sketched after this list.)
  • Reducing conflict and uncertainty by matching appearance, behavior, and ability. In terms of performance, if a robot looks too appliance-like, people will expect little from it; if it looks too human, people will expect too much from it.[53] A highly human-like appearance leads to an expectation that certain behaviors will be present, such as humanlike motion dynamics. This likely operates at a sub-conscious level and may have a biological basis. Neuroscientists have noted that "when the brain's expectations are not met, the brain...generates a 'prediction error'. As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners. Or perhaps, we will decide 'it is not a good idea to make [robots] so clearly in our image after all.'"[20][56][57]
  • Human facial proportions and photorealistic texture should only be used together. A photorealistic human texture demands human facial proportions, or the computer generated character can fall into the uncanny valley. Abnormal facial proportions, including those typically used by artists to enhance attractiveness (e.g., larger eyes), can look eerie with a photorealistic human texture. Avoiding a photorealistic texture can permit more leeway.[58]
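
The matching principle in the first item of this list can be made concrete with a toy scoring heuristic. The Python sketch below is hypothetical rather than a published metric: the choice of dimensions, the 0-to-1 realism scale, and the equal weighting of pairwise differences are all assumptions made for illustration.

    # Hypothetical heuristic, not a published metric: penalize mismatches between
    # the human realism of a character's appearance, motion, and voice.
    from itertools import combinations

    def mismatch_penalty(realism):
        # Sum of pairwise differences between realism scores (each in [0, 1]).
        # Larger values flag designs whose elements do not match in human realism.
        return sum(abs(realism[a] - realism[b]) for a, b in combinations(realism, 2))

    # Example: a photorealistic appearance paired with robotic motion and a
    # synthetic voice scores worse than a uniformly stylized design.
    mismatched = {"appearance": 0.95, "motion": 0.40, "voice": 0.30}
    uniform = {"appearance": 0.40, "motion": 0.40, "voice": 0.35}

    print("mismatched design penalty:", round(mismatch_penalty(mismatched), 2))
    print("uniform design penalty:", round(mismatch_penalty(uniform), 2))

Running the example prints a much larger penalty for the mismatched design than for the uniformly stylized one, in line with the guidance above.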

Criticism

A number of criticisms have been raised concerning whether the uncanny valley exists as a unified phenomenon amenable to scientific scrutiny:

  • Good design can lift human-looking entities out of the valley. David Hanson has criticized Mori's hypothesis that entities approaching human appearance will necessarily be evaluated negatively.[59] He has shown that the uncanny valley that Karl MacDorman and Hiroshi Ishiguro[60] generated – by having participants rate photographs that morphed from humanoid robots to android robots to human beings – could be flattened out by adding neotenous, cartoonish features to the entities that had formerly fallen into the valley.[59]
  • The uncanny appears at any degree of human likeness. Hanson has also pointed out that uncanny entities may appear anywhere in a spectrum ranging from the abstract (e.g., MIT's robot Lazlo) to the perfectly human (e.g., cosmetically atypical people).[59] Capgras syndrome is a relatively rare condition in which the sufferer believes that people (or, in some cases, things) have been replaced with duplicates. These duplicates are rationally accepted to be identical in physical properties, but the irrational belief is held that the "true" entity has been replaced with something else. Some sufferers of Capgras syndrome claim that the duplicate is a robot. Ellis and Lewis argue that the syndrome arises from an intact system for overt recognition coupled with a damaged system for covert recognition, which leads to conflict over an individual being identifiable but not familiar in any emotional sense.[61] This supports the view that the uncanny valley could arise due to issues of categorical perception that are particular to the manner in which the brain processes information.[56][62]
  • The uncanny valley is a heterogeneous group of phenomena. Phenomena labeled as being in the uncanny valley can be diverse, involve different sense modalities, and have multiple, possibly overlapping causes, which can range from evolved or learned circuits for early face perception[58][63] to culturally-shared psychological constructs.[64] People's cultural backgrounds may have a considerable influence on how androids are perceived with respect to the uncanny valley.[65]
  • The uncanny valley may be generational. Younger generations, more used to CGI, robots, and such, may be less likely to be affected by this hypothesized issue.[66]

Similar effects

An effect similar to the uncanny valley was noted by Charles Darwin in 1839:

The expression of this [Trigonocephalus] snake’s face was hideous and fierce; the pupil consisted of a vertical slit in a mottled and coppery iris; the jaws were broad at the base, and the nose terminated in a triangular projection. I do not think I ever saw anything more ugly, excepting, perhaps, some of the vampire bats. I imagine this repulsive aspect originates from the features being placed in positions, with respect to each other, somewhat proportional to the human face; and thus we obtain a scale of hideousness.

— Charles Darwin, The Voyage of the Beagle[67]

A similar "uncanny valley" effect could, according to the ethical-futurist writer Jamais Cascio, show up when humans begin modifying themselves with transhuman enhancements (cf. body modification), which aim to improve the abilities of the human body beyond what would normally be possible, be it eyesight, muscle strength, or cognition.[68] So long as these enhancements remain within a perceived norm of human behavior, a negative reaction is unlikely, but once individuals supplant normal human variety, revulsion can be expected. However, according to this theory, once such technologies gain further distance from human norms, "transhuman" individuals would cease to be judged on human levels and instead be regarded as separate entities altogether (this point is what has been dubbed "posthuman"), and it is here that acceptance would rise once again out of the uncanny valley.[68] Another example comes from "pageant retouching" photos, especially of children, which some find disturbingly doll-like.[69]

Use in the media

In the 2008 30 Rock episode "Succession", Frank Rossitano explains the uncanny valley concept, using a graph and Star Wars examples, to try to convince Tracy Jordan that his dream of creating a pornographic video game is impossible. He also references the computer-animated film The Polar Express.[70]

The 1977 Doctor Who serial "The Robots of Death" describes a mental illness called "Grimwade's Syndrome" or "robophobia": a condition where the lack of body language from humanoid robots provokes in certain people the feeling that they are "surrounded by walking, talking dead men."

See also

Notes

  1. ^ Tinwell, Angela (2014-12-04). The Uncanny Valley in Games and Animation. CRC Press. pp. 165–. ISBN 9781466586956. Retrieved 13 January 2015.
  2. ^ "The Truth About Robotic's Uncanny Valley - Human-Like Robots and the Uncanny Valley". Popular Mechanics. 2010-01-20. Retrieved 2011-03-20.
  3. ^ When fantasy is just too close for comfort - The Age, June 10, 2007
  4. ^ a b Digital Actors in ‘Beowulf’ Are Just Uncanny - New York Times, November 14, 2007
  5. ^ Kawaguchi, Judit (10 March 2011). "Robocon founder Dr. Masahiro Mori". Words To Live By. Japan Times. p. 11. Archived from the original on 2011-03-13. Retrieved 2014-08-14. Mori's influence on the world of robotics is immeasurable. His classic hypothesis, "The Uncanny Valley," published in 1970, is still a key work defining robotic design.
  6. ^ "The Uncanny Valley". IEEE Spectrum. 12 June 2012. Retrieved 1 April 2015.
  7. ^ "An Uncanny Mind: Masahiro Mori on the Uncanny Valley and Beyond". IEEE Spectrum. 12 June 2012. Retrieved 1 April 2015.
  8. ^ Jentsch, E. (25 Aug. 1906). Zur Psychologie des Unheimlichen, Psychiatrisch-Neurologische Wochenschrift 8(22), 195-198.
  9. ^ a b Mitchell et al., 2011.
  10. ^ Misselhorn, 2009
  11. ^ Freud, S. (1919/2003). The uncanny [das unheimliche] (D. McLintock, Trans.). New York: Penguin.
  12. ^ a b Mori, M. (1970/2012). The uncanny valley (K. F. MacDorman & N. Kageki, Trans.). IEEE Robotics & Automation Magazine, 19(2), 98–100. doi:10.1109/MRA.2012.2192811
  13. ^ MacDorman, 2005.
  14. ^ Green, MacDorman, Ho, Koch, 2008.
  15. ^ a b Rhodes, G. & Zebrowitz, L. A. (eds) (2002). Facial Attractiveness: Evolutionary, Cognitive, and Social Perspectives, Ablex Publishing.
  16. ^ MacDorman & Ishiguro, 2006, p. 313.
  17. ^ MacDorman, Green, Ho, & Koch, 2009, p. 696.
  18. ^ Ramey, 2005.
  19. ^ MacDorman & Ishiguro, 2006, p. 303.
  20. ^ a b c d Saygin, A.P. (2011). "The Thing That Should Not Be: Predictive Coding and the Uncanny Valley in Perceiving Human and Humanoid Robot Actions". Social Cognitive Affective Neuroscience. 7: 413–22. doi:10.1093/scan/nsr025.
  21. ^ UCSD News. "Your Brain on Androids".
  22. ^ MacDorman, K. F., Vasudevan, S. K., & Ho, C.-C., 2009.
  23. ^ Yalom, Irvin D. (1980) "Existential Psychotherapy", Basic Books, Inc., Publishers, New York
  24. ^ a b c Ferrey, A. E.; Burleigh, T. J.; Fenske, M. J. (2015). "Stimulus-category competition, inhibition, and affective devaluation: a novel account of the uncanny valley". Frontiers in Psychology. 6: 249. doi:10.3389/fpsyg.2015.00249.
  25. ^ Elliot, A. J.; Devine, P. G. (1994). "On the motivational nature of cognitive dissonance: Dissonance as psychological discomfort". Journal of personality and social psychology. 67 (3): 382.
  26. ^ a b Burleigh, T. J.; Schoenherr, J. R.; Lacroix, G. L. (2013). "Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces" (PDF). Computers in Human Behavior. 29: 3. doi:10.1016/j.chb.2012.11.021.
  27. ^ a b Yamada, Y.; Kawabe, T.; Ihaya, K. (2013). "Categorization difficulty is associated with negative evaluation in the "uncanny valley" phenomenon". Japanese Psychological Research. 55 (1): 20–32. doi:10.1111/j.1468-5884.2012.00538.x.
  28. ^ Schoenherr, J. R.; Burleigh, T. J. (2014). "Uncanny sociocultural categories". Frontiers in Psychology. 5: 1456. doi:10.3389/fpsyg.2014.01456.
  29. ^ Moore, R. K. (2012). A Bayesian explanation of the ‘Uncanny Valley’ effect and related psychological phenomena. Nature Scientific Reports, 2, doi:10.1038/srep00864
  30. ^ Kätsyri, J., Förger, K., Mäkäräinen, M., & Takala, T. (2015). A review of empirical evidence on different uncanny valley hypotheses: support for perceptual mismatch as one road to the valley of eeriness. Frontiers in Psychology. doi:10.3389/fpsyg.2015.00390
  31. ^ Burleigh, T. J.; Schoenherr, J. R. (2015). "A reappraisal of the uncanny valley: categorical perception or frequency-based sensitization?". Frontiers in Psychology. 5: 1488. doi:10.3389/fpsyg.2014.01488.
  32. ^ Kitta MacPherson (2009-10-13). "Monkey visual behavior falls into the uncanny valley". Princeton University. Retrieved 2011-03-20.
  33. ^ "Science Exploring the uncanny valley of how brains react to humanoids".
  34. ^ Ramsey, Doug (2010-05-13). "Nineteen Projects Awarded Inaugural Calit2 Strategic Research Opportunities Grants". UCSD. Retrieved 2011-03-20.
  35. ^ Kiderra, Inga. "YOUR BRAIN ON ANDROIDS". UCSD.
  36. ^ Robbins, Gary. "UCSD exploring why robots creep people out". San Diego Union Tribune.
  37. ^ Palmer, Chris. "Exploring "The thing that should not be"". Calit2.
  38. ^ Tinwell, A.; et al. (2011). "Facial expression of emotion and perception of the Uncanny Valley in virtual characters". Computers in Human Behavior. 27: 741–749. doi:10.1016/j.chb.2010.10.018.
  39. ^ Tinwell, A.; et al. (2010). "Uncanny Behaviour in Survival Horror Games". Journal of Gaming and Virtual Worlds.
  40. ^ Tinwell, A.; et al. (2011). "The Uncanny Wall". International Journal of Arts and Technology.
  41. ^ Tinwell, Angela (2014). The Uncanny Valley in Games and Animation.
  42. ^ Dario Floreano. "Bio-Mimetic Robotics".
  43. ^ EPFL. http://moodle.epfl.ch/mod/resource/view.php?inpopup=true&id=41121
  44. ^ "Polar Express a creepy ride". CNN.com. Nov 10, 2004. Retrieved Nov 21, 2011.
  45. ^ Loder, Kurt (November 10, 2004). "'The Polar Express' Is All Too Human". MTV.
  46. ^ Dargis, Manohla (November 10, 2004). "Do You Hear Sleigh Bells? Nah, Just Tom Hanks and Some Train". The New York Times.
  47. ^ Anderson, John (November 10, 2004). "'Polar Express' derails in zombie land". Newsday.
  48. ^ The Polar Express: A Virtual Train Wreck (conclusion), Ward Jenkins, Ward-O-Matic blog, December 18, 2004
  49. ^ Polo, Susana (June 20, 2010). "New Airbender TV Spot: Appa's Creepy Face". Geekosystem. Retrieved December 11, 2012.
  50. ^ Stevens, Dana. "Tintin, So So". Slate. Retrieved 25 March 2012.
  51. ^ Kelly, Kevin. "Beyond the Uncanny Valley". The Technium. Retrieved 25 March 2012.
  52. ^ Ho, MacDorman, Pramono, 2008.
  53. ^ a b Goetz, Kiesler, & Powers, 2003.
  54. ^ Vinayagamoorthy, Steed, & Slater, 2005.
  55. ^ Saygin, A.P., Chaminade, T., Ishiguro, H. (2010) The Perception of Humans and Robots: Uncanny Hills in Parietal Cortex. Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 2716-2720).
  56. ^ a b c Saygin et al., 2011.
  57. ^ Gaylord, Chris. "Uncanny Valley: Will we ever learn to live with artificial humans?". Christian Science Monitor.
  58. ^ a b MacDorman, Green, Ho, & Koch, 2009.
  59. ^ a b c David Hanson, Andrew Olney, Ismar A. Pereira & Marge Zielke (2005). Upending the Uncanny Valley. Proceedings of the National Conference on Artificial Intelligence, 20, pp. 1728–1729.
  60. ^ MacDorman & Ishiguro, 2006, p. 305.
  61. ^ Ellis, H., & Lewis, M. (2001). Capgras delusion: A window on face recognition. Trends in Cognitive Science, 5(4), 149-156.
  62. ^ Pollick, F. In Search of the Uncanny Valley. Analog communication: Evolution, brain mechanisms, dynamics, simulation. Cambridge, MA: MIT Press: The Vienna Series in Theoretical Biology (2009)
  63. ^ MacDorman & Ishiguro, 2006
  64. ^ MacDorman, Vasudevan & Ho, 2008.
  65. ^ Bartneck Kanda, Ishiguro, & Hagita, 2007.
  66. ^ "Is the "uncanny valley" a myth?". Io9.com. Retrieved 2013-09-04.
  67. ^ Charles Darwin. The Voyage of the Beagle. New York: Modern Library. 2001. p. 87.
  68. ^ a b Jamais Cascio, The Second Uncanny Valley
  69. ^ viz. "Pageant retouching". University of Texas. Retrieved 2011-03-20.
  70. ^ Michael Neal (April 25, 2008). "Succession". Yahoo! TV.

References

Bartneck, C., Kanda, T., Ishiguro, H., & Hagita, N. (2007). Is the Uncanny Valley an Uncanny Cliff? Proceedings of the 16th IEEE, RO-MAN 2007, Jeju, Korea, pp. 368–373. doi:10.1109/ROMAN.2007.4415111
Burleigh, T. J., & Schoenherr, J. R. (2015). A reappraisal of the uncanny valley: categorical perception or frequency-based sensitization? Frontiers in Psychology, 5:1488, doi:10.3389/fpsyg.2014.01488.
Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Computers in Human Behavior, 29(3), 759-771, doi:10.1016/j.chb.2012.11.021.
Chaminade, T., Hodgins, J. & Kawato, M. (2007). Anthropomorphism influences perception of computer-animated characters' actions. Social Cognitive and Affective Neuroscience, 2(3), 206-216.
Cheetham, M., Suter, P., & Jancke, L. (2011). The human likeness dimension of the "uncanny valley hypothesis": behavioral and functional MRI findings. Front Hum Neurosci 5, 126.
Ferrey, A., Burleigh, T. J., & Fenske, M. (2015). Stimulus-category competition, inhibition and affective devaluation: A novel account of the Uncanny Valley. Frontiers in Psychology, 6:249, doi:10.3389/fpsyg.2015.00249.
Goetz, J., Kiesler, S., & Powers, A. (2003). Matching robot appearance and behavior to tasks to improve human-robot cooperation. Proceedings of the Twelfth IEEE International Workshop on Robot and Human Interactive Communication. Lisbon, Portugal.
Green, R. D., MacDorman, K. F., Ho, C.-C., & Vasudevan, S. K. (2008). Sensitivity to the proportions of faces that vary in human likeness. Computers in Human Behavior, 24(5), 2456–2474.
Ho, C.-C., MacDorman, K. F., & Pramono, Z. A. D. (2008). Human emotion and the uncanny valley: A GLM, MDS, and ISOMAP analysis of robot video ratings. Proceedings of the Third ACM/IEEE International Conference on Human-Robot Interaction. March 11–14. Amsterdam.
Ishiguro, H. (2005). Android science: Toward a new cross-disciplinary framework. CogSci-2005 Workshop: Toward Social Mechanisms of Android Science, 2005, pp. 1–6.
Kätsyri, J. & Förger, K. & Mäkäräinen, M. & Takala, T. 2015. A review of empirical evidence on different uncanny valley hypotheses: support for perceptual mismatch as one road to the valley of eeriness. Frontiers in Psychology.
MacDorman, K. F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? CogSci-2005 Workshop: Toward Social Mechanisms of Android Science, 106-118. (An English translation of Mori's "The Uncanny Valley" made by Karl MacDorman and Takashi Minato appears in Appendix B of the paper.)
MacDorman, K. F. (2006). Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science. July 26, 2006. Vancouver, Canada.
MacDorman, K. F. & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive science research. Interaction Studies, 7(3), 297-337.
MacDorman, K. F., Vasudevan, S. K., & Ho, C.-C. (2009). Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI & Society, 23(4), 485-510.
MacDorman, K. F., Green, R. D., Ho, C.-C., & Koch, C. (2009). Too real for comfort: Uncanny responses to computer generated faces. Computers in Human Behavior, 25, 695-710.
Misselhorn, C. (2009). Empathy with inanimate objects and the uncanny valley. Minds and Machines, 19(3), 345-359.
Mitchell, W. J., Szerszen, Sr., K. A., Lu, A. S., Schermerhorn, P. W., Scheutz, M., & MacDorman, K. F. (2011). A mismatch in the human realism of face and voice produces an uncanny valley. i-Perception, 2(1), 10–12.
Moore, R. K. (2012). A Bayesian explanation of the ‘Uncanny Valley’ effect and related psychological phenomena. Nature Scientific Reports, 2, doi:10.1038/srep00864.
Mori, M. (1970/2012). The uncanny valley (K. F. MacDorman & N. Kageki, Trans.). IEEE Robotics & Automation Magazine, 19(2), 98–100. doi:10.1109/MRA.2012.2192811 See also http://spectrum.ieee.org/automaton/robotics/humanoids/an-uncanny-mind-masahiro-mori-on-the-uncanny-valley
Mori, M. (1970). Bukimi no tani. Energy, 7(4), 33–35. (Originally in Japanese)
Mori, M. (2005). On the Uncanny Valley. Proceedings of the Humanoids-2005 workshop: Views of the Uncanny Valley. 5 December 2005, Tsukuba, Japan.
Pollick, F. E. (forthcoming). In search of the uncanny valley. In Grammer, K. & Juette, A. (Eds.), Analog communication: Evolution, brain mechanisms, dynamics, simulation. The Vienna Series in Theoretical Biology. Cambridge, Mass.: The MIT Press.
Ramey, C.H. (2005). The uncanny valley of similarities concerning abortion, baldness, heaps of sand, and humanlike robots. In Proceedings of the Views of the Uncanny Valley Workshop, IEEE-RAS International Conference on Humanoid Robots.
Saygin, A.P., Chaminade, T., Ishiguro, H., Driver, J. & Frith, C. (2011) The thing that should not be: Predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social Cognitive Affective Neuroscience, 6(4).
Saygin, A.P., Chaminade, T., Ishiguro, H. (2010) The Perception of Humans and Robots: Uncanny Hills in Parietal Cortex. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 2716–2720). Austin, TX: Cognitive Science Society.
Schoenherr, J. R. & Burleigh, T. J. (2014). Uncanny sociocultural categories. Frontiers in Psychology, 5:1456, doi:10.3389/fpsyg.2014.01456.
Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4), 337-351.
Tinwell, A., Grimshaw, M., Abdel Nabi, D., & Williams, A. (2011) Facial expression of emotion and perception of the Uncanny Valley in virtual characters. Computers in Human Behavior, 27(2), pp. 741-749.
Tinwell, A., Grimshaw, M., & Williams, A. (2010) Uncanny Behaviour in Survival Horror Games. Journal of Gaming and Virtual Worlds, 2(1), pp. 3-25.
Tinwell, A., Grimshaw, M., & Williams, A. (2011) The Uncanny Wall. International Journal of Arts and Technology, 4(3), pp. 326-341.
Vinayagamoorthy, V. Steed, A. & Slater, M. (2005). Building Characters: Lessons Drawn from Virtual Environments. Toward Social Mechanisms of Android Science: A CogSci 2005 Workshop. July 25–26, Stresa, Italy, pp. 119–126.
Yamada, Y., Kawabe, T., & Ihaya, K. (2013). Categorization difficulty is associated with negative evaluation in the “uncanny valley” phenomenon. Japanese Psychological Research, 55(1), 20-32.

External links

  • The Uncanny Valley in Games and Animation - Dr A.Tinwell
  • Zysk, W., Filkov, R., Feldmann, S. (2013). Bridging the uncanny valley – From 3D humanoid Characters to Virtual Tutors. The Second International Conference on E-Learning and E-Technologies in Education, ICEEE2013, Lodz University of Technology, Sept. 23-25, 2013. ISBN 978-1-4673-5093-8
  • Your Brain on Androids UCSD news release about human brain and the uncanny valley.
  • Massimo Negrotti Study on the reality of artificial objects.
  • Humanoids-2005 Workshop: Views of the Uncanny Valley, held in Tsukuba, Japan, near Tokyo, on December 5, 2005