Affective computing

From Wikipedia, the free encyclopedia

Affective computing is the study and development of systems and devices that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science.[1] While the origins of the field may be traced as far back as early philosophical inquiries into emotion,[2] the more modern branch of computer science originated with Rosalind Picard's 1995 paper[3] on affective computing.[4][5] A motivation for the research is the ability to simulate empathy. The machine should interpret the emotional state of humans and adapt its behavior to them, giving an appropriate response to those emotions.

The difference between sentiment analysis and affective analysis is that the latter detects distinct emotions rather than identifying only the polarity of a phrase.

Areas

Detecting and recognizing emotional information

Detecting emotional information begins with passive sensors which capture data about the user's physical state or behavior without interpreting the input. The data gathered is analogous to the cues humans use to perceive emotions in others. For example, a video camera might capture facial expressions, body posture, and gestures, while a microphone might capture speech. Other sensors detect emotional cues by directly measuring physiological data, such as skin temperature and galvanic resistance.[6]

Recognizing emotional information requires the extraction of meaningful patterns from the gathered data. This is done using machine learning techniques that process different modalities, such as speech recognition, natural language processing, or facial expression detection, and produce either labels (e.g., 'confused') or coordinates in a valence-arousal space.
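
Both output representations are simple to express in code. The sketch below is a minimal illustration, not taken from the cited literature: the class, field names, and quadrant labels are assumptions chosen for clarity, and the quadrant mapping is only one crude way to relate valence-arousal coordinates to a discrete label.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AffectEstimate:
        """Output of an affect recognizer: a discrete label, continuous
        valence-arousal coordinates, or both."""
        label: Optional[str] = None      # e.g. 'confused'
        valence: Optional[float] = None  # unpleasant (-1) ... pleasant (+1)
        arousal: Optional[float] = None  # calm (-1) ... excited (+1)

    def coarse_label(valence: float, arousal: float) -> str:
        """Map valence-arousal coordinates to an illustrative quadrant label."""
        if valence >= 0:
            return "excited/happy" if arousal >= 0 else "content/relaxed"
        return "angry/afraid" if arousal >= 0 else "sad/bored"

    print(coarse_label(valence=-0.7, arousal=0.8))  # -> 'angry/afraid'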

Emotion in machines

Another area within affective computing is the design of computational devices that either exhibit innate emotional capabilities or are capable of convincingly simulating emotions. A more practical approach, based on current technological capabilities, is the simulation of emotions in conversational agents in order to enrich and facilitate interactivity between human and machine.[7] While human emotions are often associated with surges in hormones and other neuropeptides, emotions in machines might be associated with abstract states associated with progress (or lack of progress) in autonomous learning systems[citation needed]. In this view, affective emotional states correspond to time-derivatives (perturbations) in the learning curve of an arbitrary learning system.[citation needed]

Marvin Minsky, one of the pioneering computer scientists in artificial intelligence, relates emotions to the broader issues of machine intelligence stating in The Emotion Machine that emotion is "not especially different from the processes that we call 'thinking.'"[8]

Technologies

In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify emotion: the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprised face are perceived as either happy or surprised, but not something in between.

These approaches have one major flaw in common: they can detect only one emotion per image, generally by a winner-take-all method. Yet, every day we can perceive more than one emotional category in a single image. Because neither the categorical nor the continuous model can identify multiple emotions, one alternative is to model additional categories as the overlap of a small set of basic categories. A detailed study related to this topic is given in "A model of the perception of facial expressions of emotion by humans: research overview and perspectives".[9]
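
The contrast between a winner-take-all decision and an overlap of categories can be sketched in a few lines of Python. The scores and the threshold below are illustrative assumptions, standing in for the outputs of C per-category classifiers.

    # Hypothetical per-category scores from C emotion classifiers for one image.
    scores = {"happy": 0.62, "surprised": 0.55, "sad": 0.04, "angry": 0.02}

    # Winner-take-all: the standard categorical decision, one emotion per image.
    winner = max(scores, key=scores.get)

    # Overlap variant: keep every category whose score clears a threshold, so a
    # "happily surprised" face can map to two basic categories at once.
    threshold = 0.5
    overlap = [emotion for emotion, score in scores.items() if score >= threshold]

    print(winner)   # 'happy'
    print(overlap)  # ['happy', 'surprised']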

The following sections consider the possible features which can be used for the task of emotion recognition.

Emotional speech

Various changes in the autonomic nervous system can indirectly alter a person's speech, and affective technologies can leverage this information to recognize emotion. For example, speech produced in a state of fear, anger, or joy becomes fast, loud, and precisely enunciated, with a higher and wider range in pitch, whereas emotions such as tiredness, boredom, or sadness tend to generate slow, low-pitched, and slurred speech.[10] Some emotions have been found to be more easily computationally identified, such as anger[11] or approval.[12]

Emotional speech processing technologies recognize the user's emotional state using computational analysis of speech features. Vocal parameters and prosodic features such as pitch variables and speech rate can be analyzed through pattern recognition techniques.[11][13]

Speech analysis is an effective method of identifying affective state, having an average reported accuracy of 70 to 80% in recent research.[14][15] These systems tend to outperform average human accuracy (approximately 60%[11]) but are less accurate than systems which employ other modalities for emotion detection, such as physiological states or facial expressions.[16] However, since many speech characteristics are independent of semantics or culture, this technique is considered to be a promising route for further research.[17]

Algorithms

The process of speech/text affect detection requires the creation of a reliable database, knowledge base, or vector space model,[18] broad enough to fit every need for its application, as well as the selection of a successful classifier which will allow for quick and accurate emotion identification.

Currently, the most frequently used classifiers are linear discriminant classifiers (LDC), k-nearest neighbor (k-NN), Gaussian mixture model (GMM), support vector machines (SVM), artificial neural networks (ANN), decision tree algorithms and hidden Markov models (HMMs).[19] Various studies have shown that choosing the appropriate classifier can significantly enhance the overall performance of the system.[16] The list below gives a brief description of each algorithm; a short HMM-based classification sketch follows the list:

  • LDC – Classification happens based on the value obtained from the linear combination of the feature values, which are usually provided in the form of vector features.
  • k-NN – Classification happens by locating the object in the feature space, and comparing it with the k nearest neighbors (training examples). The majority vote decides on the classification.
  • GMM – is a probabilistic model used for representing the existence of subpopulations within the overall population. Each sub-population is described using the mixture distribution, which allows for classification of observations into the sub-populations.[20]
  • SVM – is a type of (usually binary) linear classifier which decides which of the two (or more) possible classes each input falls into.
  • ANN – is a mathematical model, inspired by biological neural networks, that can better grasp possible non-linearities of the feature space.
  • Decision tree algorithms – work based on following a decision tree in which leaves represent the classification outcome, and branches represent the conjunction of subsequent features that lead to the classification.
  • HMMs – a statistical Markov model in which the states and state transitions are not directly available to observation. Instead, the series of outputs dependent on the states are visible. In the case of affect recognition, the outputs represent the sequence of speech feature vectors, which allow the deduction of states' sequences through which the model progressed. The states can consist of various intermediate steps in the expression of an emotion, and each of them has a probability distribution over the possible output vectors. The states' sequences allow us to predict the affective state which we are trying to classify, and this is one of the most commonly used techniques within the area of speech affect detection.
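
The HMM approach in the last item can be sketched as follows: train one model per emotion on sequences of speech feature vectors, then classify a new utterance by the model that assigns it the highest likelihood. This is only an outline under stated assumptions: it relies on the third-party hmmlearn library, and feature extraction, data loading, and the label set are left out.

    import numpy as np
    from hmmlearn import hmm  # assumed third-party dependency

    def train_emotion_hmms(training_data, n_states=5):
        """Train one Gaussian HMM per emotion.

        training_data maps an emotion label to a list of utterances, each an
        (n_frames, n_features) array of speech feature vectors (e.g. MFCCs).
        """
        models = {}
        for emotion, utterances in training_data.items():
            X = np.vstack(utterances)               # concatenate all frames
            lengths = [len(u) for u in utterances]  # per-utterance frame counts
            model = hmm.GaussianHMM(n_components=n_states,
                                    covariance_type="diag", n_iter=25)
            model.fit(X, lengths)
            models[emotion] = model
        return models

    def classify(models, utterance):
        """Pick the emotion whose HMM gives the highest log-likelihood."""
        return max(models, key=lambda emotion: models[emotion].score(utterance))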

It has been shown that, given enough acoustic evidence, the emotional state of a person can be classified by a set of majority-voting classifiers. The proposed set is based on three main classifiers: kNN, C4.5 and SVM with an RBF kernel, and achieves better performance than each basic classifier taken separately. It has been compared with two other sets of classifiers: one-against-all (OAA) multiclass SVM with hybrid kernels, and a set consisting of the two basic classifiers C5.0 and a neural network. The proposed variant achieves better performance than the other two sets of classifiers.[21]
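
A rough scikit-learn rendition of this majority-voting idea is sketched below. The DecisionTreeClassifier is only a stand-in for C4.5 (scikit-learn implements CART, not C4.5), the hyperparameters are illustrative, and X_train/y_train are assumed to be acoustic feature vectors and emotion labels prepared elsewhere.

    from sklearn.ensemble import VotingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    # Hard voting returns the class predicted by the majority of the three models.
    ensemble = VotingClassifier(
        estimators=[
            ("knn", KNeighborsClassifier(n_neighbors=5)),
            ("tree", DecisionTreeClassifier()),           # stand-in for C4.5
            ("svm_rbf", SVC(kernel="rbf", gamma="scale")),
        ],
        voting="hard",
    )
    # ensemble.fit(X_train, y_train)
    # predicted_emotions = ensemble.predict(X_test)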

Databases

The vast majority of present systems are data-dependent. This creates one of the biggest challenges in detecting emotions based on speech, as it implicates choosing an appropriate database used to train the classifier. Most of the available data was obtained from actors and is thus a representation of archetypal emotions. Those so-called acted databases are usually based on the Basic Emotions theory (by Paul Ekman), which assumes the existence of six basic emotions (anger, fear, disgust, surprise, joy, sadness), the others simply being a mix of the former ones.[22] Nevertheless, these still offer high audio quality and balanced classes (although often with too few samples), which contribute to high success rates in recognizing emotions.

However, for real life application, naturalistic data is preferred. A naturalistic database can be produced by observation and analysis of subjects in their natural context. Ultimately, such a database should allow the system to recognize emotions based on their context as well as work out the goals and outcomes of the interaction. The nature of this type of data allows for authentic real life implementation, because it describes states that naturally occur during human-computer interaction (HCI).

Despite the numerous advantages which naturalistic data has over acted data, it is difficult to obtain and usually has low emotional intensity. Moreover, data obtained in a natural context has lower signal quality, due to surrounding noise and the distance of the subjects from the microphone. The first attempt to produce such a database was the FAU Aibo Emotion Corpus for CEICES (Combining Efforts for Improving Automatic Classification of Emotional User States), which was developed based on a realistic context of children (age 10-13) playing with Sony's Aibo robot pet.[23][24] In addition, producing one standard database for all emotion research would provide a method of evaluating and comparing different affect recognition systems.

Speech descriptors

The complexity of the affect recognition process increases with the number of classes (affects) and speech descriptors used within the classifier. It is, therefore, crucial to select only the most relevant features in order to ensure the ability of the model to successfully identify emotions, as well as to increase performance, which is particularly important for real-time detection. The range of possible choices is vast, with some studies mentioning the use of over 200 distinct features.[19] It is important to identify those that are redundant and undesirable in order to optimize the system and increase the rate of correct emotion detection. The most common speech characteristics are categorized into the following groups (a feature-extraction sketch follows the list).[23][24]

  1. Frequency characteristics
    • Accent shape – affected by the rate of change of the fundamental frequency.
    • Average pitch – description of how high/low the speaker speaks relative to normal speech.
    • Contour slope – describes the tendency of the frequency change over time; it can be rising, falling or level.
    • Final lowering – the amount by which the frequency falls at the end of an utterance.
    • Pitch range – measures the spread between the maximum and minimum frequency of an utterance.
  2. Time-related features:
    • Speech rate – describes the rate of words or syllables uttered over a unit of time
    • Stress frequency – measures the rate of occurrences of pitch accented utterances
  3. Voice quality parameters and energy descriptors:
    • Breathiness – measures the aspiration noise in speech
    • Brilliance – describes the dominance of high or low frequencies in the speech
    • Loudness – measures the amplitude of the speech waveform, translates to the energy of an utterance
    • Pause Discontinuity – describes the transitions between sound and silence
    • Pitch Discontinuity – describes the transitions of the fundamental frequency.
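
As mentioned above, a few of these frequency and energy descriptors can be computed directly from a recording. The sketch below assumes the third-party librosa library and a hypothetical input file; speech rate, stress frequency, and the voice-quality measures are omitted.

    import numpy as np
    import librosa  # assumed third-party dependency

    y, sr = librosa.load("utterance.wav", sr=None)    # hypothetical input file

    # Fundamental-frequency (pitch) contour via the pYIN estimator.
    f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                      fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[voiced_flag]                              # keep voiced frames only

    average_pitch = np.nanmean(f0)                    # "average pitch"
    pitch_range = np.nanmax(f0) - np.nanmin(f0)       # "pitch range"
    contour_slope = np.polyfit(np.arange(len(f0)), f0, 1)[0]  # rising/falling/level

    # Loudness proxy: mean root-mean-square energy of the waveform.
    loudness = librosa.feature.rms(y=y).mean()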

Facial affect detection

The detection and processing of facial expression are achieved through various methods such as optical flow, hidden Markov models, neural network processing or active appearance models. More than one modality can be combined or fused (multimodal recognition, e.g. facial expressions and speech prosody,[25] facial expressions and hand gestures,[26] or facial expressions with speech and text for multimodal data and metadata analysis) to provide a more robust estimation of the subject's emotional state.
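
A minimal form of such fusion is late fusion: each modality produces its own per-class probabilities, and the estimates are combined before picking an emotion. The class list, probabilities, and weights below are illustrative assumptions, not values from the cited studies.

    import numpy as np

    emotions = ["anger", "fear", "joy", "sadness"]

    # Hypothetical per-class probabilities from two independent recognizers.
    p_face = np.array([0.10, 0.05, 0.70, 0.15])    # facial-expression model
    p_speech = np.array([0.20, 0.10, 0.55, 0.15])  # speech-prosody model

    # Weighted average; the weights could reflect each modality's reliability.
    weights = np.array([0.6, 0.4])
    p_fused = weights[0] * p_face + weights[1] * p_speech

    print(emotions[int(np.argmax(p_fused))])       # -> 'joy'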

Facial expression databases

Creation of an emotion database is a difficult and time-consuming task. However, database creation is an essential step in the creation of a system that will recognize human emotions. Most of the publicly available emotion databases include posed facial expressions only. In posed expression databases, the participants are asked to display different basic emotional expressions, while in spontaneous expression databases, the expressions are natural. Spontaneous emotion elicitation requires significant effort in the selection of proper stimuli which can lead to a rich display of intended emotions. Second, the process involves manual tagging of emotions by trained individuals, which makes the databases highly reliable. Since perception of expressions and their intensity is subjective in nature, the annotation by experts is essential for the purpose of validation.

Researchers work with three types of databases: databases of peak expression images only, databases of image sequences portraying an emotion from neutral to its peak, and video clips with emotional annotations. Many facial expression databases have been created and made public for expression recognition purposes. Two of the widely used databases are CK+ and JAFFE.

Emotion classification

By doing cross-cultural research in Papua New Guinea on the Fore tribesmen at the end of the 1960s, Paul Ekman proposed the idea that facial expressions of emotion are not culturally determined, but universal. Thus, he suggested that they are biological in origin and can, therefore, be safely and correctly categorized.[22] In 1972, he officially put forth six basic emotions:[27]

  • Anger
  • Disgust
  • Fear
  • Happiness
  • Sadness
  • Surprise

However, in the 1990s Ekman expanded his list of basic emotions, including a range of positive and negative emotions, not all of which are encoded in facial muscles.[28] The newly included emotions are:

  1. Amusement
  2. Contempt
  3. Contentment
  4. Embarrassment
  5. Excitement
  6. Guilt
  7. Pride in achievement
  8. Relief
  9. Satisfaction
  10. Sensory pleasure
  11. Shame

Facial Action Coding System

A system has been conceived to formally categorize the physical expression of emotions by defining expressions in terms of muscle actions. The central concept of the Facial Action Coding System (FACS), created by Paul Ekman and Wallace V. Friesen in 1978,[29] is the action unit (AU): basically, a contraction or relaxation of one or more muscles. However simple this concept may seem, it is enough to form the basis of a complex emotional identification system that is devoid of interpretation.

By identifying different facial cues, scientists are able to map them to their corresponding action unit code. Consequently, they have proposed the following classification of the six basic emotions, according to their action units ("+" here means "and"); a simple lookup sketch follows the table:

Emotion – Action units
Happiness – 6+12
Sadness – 1+4+15
Surprise – 1+2+5B+26
Fear – 1+2+4+5+20+26
Anger – 4+5+7+23
Disgust – 9+15+16
Contempt – R12A+R14A
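
Read as data, the table is a lookup from sets of action units to prototypical emotions. The sketch below assumes an upstream AU detector supplies the set of detected AU numbers, and it deliberately simplifies the intensity code 5B and the asymmetric codes R12A/R14A to plain AU numbers.

    from typing import Optional, Set

    # Prototypical AU combinations from the table above (simplified).
    EMOTION_AUS = {
        "Happiness": {6, 12},
        "Sadness": {1, 4, 15},
        "Surprise": {1, 2, 5, 26},
        "Fear": {1, 2, 4, 5, 20, 26},
        "Anger": {4, 5, 7, 23},
        "Disgust": {9, 15, 16},
        "Contempt": {12, 14},
    }

    def match_emotion(detected_aus: Set[int]) -> Optional[str]:
        """Return the first emotion whose prototypical AUs are all present."""
        for emotion, aus in EMOTION_AUS.items():
            if aus <= detected_aus:     # subset test: all required AUs detected
                return emotion
        return None

    print(match_emotion({1, 4, 15, 17}))  # -> 'Sadness'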

Challenges in facial detection

As with every computational practice, in affect detection by facial processing some obstacles need to be overcome in order to fully unlock the potential of the overall algorithm or method employed. The accuracy of modeling and tracking has been an issue, especially in the incipient stages of affective computing. As hardware evolves, as new discoveries are made and new practices introduced, this lack of accuracy fades, leaving behind noise issues. However, methods for noise removal exist, including neighborhood averaging, linear Gaussian smoothing, median filtering,[30] or newer methods such as the Bacterial Foraging Optimization Algorithm.[31][32][33]

It is generally known that the degree of accuracy in facial recognition (not affective state recognition) has not been brought to a level high enough to permit its widespread efficient use across the world (there have been many attempts, especially by law enforcement, which failed at successfully identifying criminals). Without improving the accuracy of hardware and software used to scan faces, progress is very much slowed down.

Other challenges include

  • The fact that posed expressions, as used by most subjects of the various studies, are not natural, and therefore not 100% accurate.
  • The lack of rotational movement freedom. Affect detection works very well with frontal use, but upon rotating the head more than 20 degrees, "there've been problems".[34]

Body gesture

Gestures could be efficiently used as a means of detecting a particular emotional state of the user, especially when used in conjunction with speech and face recognition. Depending on the specific action, gestures could be simple reflexive responses, like lifting your shoulders when you don't know the answer to a question, or they could be complex and meaningful as when communicating with sign language. Without making use of any object or surrounding environment, we can wave our hands, clap or beckon. On the other hand, when using objects, we can point at them, move, touch or handle these. A computer should be able to recognize these, analyze the context and respond in a meaningful way, in order to be efficiently used for Human-Computer Interaction.

There are many proposed methods[35] to detect body gestures. Some literature differentiates two approaches to gesture recognition: 3D-model-based and appearance-based.[36] The former makes use of 3D information about key elements of the body parts in order to obtain several important parameters, like palm position or joint angles, whereas appearance-based systems use images or videos for direct interpretation. Hand gestures have been a common focus of body gesture detection; appearance-based methods[36] and 3-D modeling methods are traditionally used.

Physiological monitoring

Physiological monitoring can be used to detect a user's emotional state by analyzing their physiological signs. These signs range from pulse and heart rate to the minute contractions of the facial muscles. This area of research is still in relative infancy, as there seems to be more of a drive towards affect recognition through facial inputs. Nevertheless, the area is gaining momentum and we are now seeing real products which implement the techniques. The three main physiological signs that can be analyzed are blood volume pulse, galvanic skin response, and facial electromyography.

Blood volume pulse

Overview

A subject's blood volume pulse (BVP) can be measured by a process called photoplethysmography, which produces a graph indicating blood flow through the extremities.[37] The peaks of the waves indicate a cardiac cycle where the heart has pumped blood to the extremities. If the subject experiences fear or is startled, their heart usually 'jumps' and beats quickly for some time, causing the amplitude of the cardiac cycle to increase. This can clearly be seen on a photoplethysmograph when the distance between the trough and the peak of the wave has decreased. As the subject calms down, and as the body's inner core expands, allowing more blood to flow back to the extremities, the cycle will return to normal.

Methodology

Infra-red light is shone on the skin by special sensor hardware, and the amount of light reflected is measured. The amount of reflected and transmitted light correlates to the BVP as light is absorbed by hemoglobin which is found richly in the blood stream.
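
One quantity that can be recovered from such a reflected-light (PPG) signal is heart rate, by detecting the peak of each cardiac cycle and measuring the interval between peaks. The sketch below assumes SciPy, a known sampling rate, and a clean signal; the distance and prominence settings, and the synthetic test waveform, are illustrative.

    import numpy as np
    from scipy.signal import find_peaks

    def heart_rate_bpm(ppg, fs):
        """Estimate heart rate (beats per minute) from a PPG signal sampled at fs Hz."""
        # Each peak marks one cardiac cycle; require peaks at least 0.4 s apart
        # (i.e. assume the heart rate stays below 150 bpm).
        peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=np.std(ppg))
        intervals = np.diff(peaks) / fs      # seconds between successive beats
        return 60.0 / intervals.mean()

    # Synthetic example: a 1.2 Hz (72 bpm) pulse-like waveform.
    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) ** 21  # sharp peak once per cycle
    print(round(heart_rate_bpm(ppg, fs)))    # ~72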

Disadvantages

It can be cumbersome to ensure that the sensor shining an infra-red light and monitoring the reflected light is always pointing at the same extremity, especially seeing as subjects often stretch and readjust their position whilst using a computer. There are other factors which can affect one's blood volume pulse. As it is a measure of blood flow through the extremities, if the subject feels hot, or particularly cold, then their body may allow more, or less, blood to flow to the extremities, all of this regardless of the subject's emotional state.

The corrugator supercilii muscle and the zygomaticus major muscle are the two main muscles whose electrical activity is measured in facial electromyography.

Facial electromyography

Facial electromyography is a technique used to measure the electrical activity of the facial muscles by amplifying the tiny electrical impulses that are generated by muscle fibers when they contract.[38] The face expresses a great deal of emotion; however, there are two main facial muscle groups that are usually studied to detect emotion: the corrugator supercilii muscle, also known as the 'frowning' muscle, which draws the brow down into a frown and is therefore the best test for a negative, unpleasant emotional response, and the zygomaticus major muscle, which is responsible for pulling the corners of the mouth back when you smile and is therefore the muscle used to test for a positive emotional response.

A plot of skin resistance measured using GSR against time, recorded while a subject played a video game, shows several clear peaks, which suggests that GSR is a good method of differentiating between an aroused and a non-aroused state. For example, at the start of the game, where there is usually not much exciting gameplay, a high level of resistance is recorded, suggesting a low level of conductivity and therefore less arousal. This is in clear contrast with the sudden trough where the player is killed, since one is usually very stressed and tense as their character is killed in the game.

Galvanic skin response

Galvanic skin response (GSR) is a measure of skin conductivity, which is dependent on how moist the skin is. As the sweat glands produce this moisture and the glands are controlled by the body's nervous system, there is a correlation between GSR and the arousal state of the body. The more aroused a subject is, the greater the skin conductivity and GSR reading.[37]

It can be measured using two small silver chloride electrodes placed somewhere on the skin, applying a small voltage between them and measuring the conductance with a sensor. To maximize comfort and reduce irritation, the electrodes can be placed on the feet, which leaves the hands fully free to interface with the keyboard and mouse.
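
The measurement principle is Ohm's law: with a small fixed voltage across the electrodes, the measured current gives the skin conductance (G = I / V), usually reported in microsiemens. The voltage, current readings, and arousal threshold below are illustrative assumptions.

    def skin_conductance_us(applied_voltage_v, measured_current_a):
        """Skin conductance in microsiemens from Ohm's law (G = I / V)."""
        return (measured_current_a / applied_voltage_v) * 1e6

    # Illustrative readings with 0.5 V applied across the electrodes.
    baseline = skin_conductance_us(0.5, 1.0e-6)  # 2.0 uS, calm subject
    aroused = skin_conductance_us(0.5, 2.5e-6)   # 5.0 uS, higher conductance

    # A simple (illustrative) arousal flag: conductance well above baseline.
    is_aroused = aroused > 1.5 * baseline
    print(baseline, aroused, is_aroused)         # 2.0 5.0 True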

Visual aesthetics

Aesthetics, in the world of art and photography, refers to the principles of the nature and appreciation of beauty. Judging beauty and other aesthetic qualities is a highly subjective task. Computer scientists at Penn State treat the challenge of automatically inferring the aesthetic quality of pictures from their visual content as a machine learning problem, using a peer-rated online photo-sharing website as a data source.[39] They extract certain visual features based on the intuition that these features can discriminate between aesthetically pleasing and displeasing images.

Potential applications

In e-learning applications, affective computing can be used to adjust the presentation style of a computerized tutor when a learner is bored, interested, frustrated, or pleased.[40][41] Psychological health services, i.e. counseling, benefit from affective computing applications when determining a client's emotional state.[citation needed]

Robotic systems capable of processing affective information exhibit higher flexibility when working in uncertain or complex environments. Companion devices, such as digital pets, use affective computing abilities to enhance realism and provide a higher degree of autonomy.[citation needed]

Other potential applications are centered around social monitoring. For example, a car can monitor the emotion of all occupants and engage in additional safety measures, such as alerting other vehicles if it detects the driver to be angry.[citation needed] Affective computing has potential applications in human-computer interaction, such as affective mirrors allowing the user to see how he or she performs; emotion monitoring agents sending a warning before one sends an angry email; or even music players selecting tracks based on mood.[42]

One idea put forth by the Romanian researcher Dr. Nicu Sebe in an interview is the analysis of a person's face while they are using a certain product (he mentioned ice cream as an example).[43] Companies would then be able to use such analysis to infer whether their product will or will not be well received by the respective market.

One could also use affective state recognition in order to judge the impact of a TV advertisement through a real-time video recording of the viewer and the subsequent study of his or her facial expression. Averaging the results obtained on a large group of subjects, one can tell whether that commercial (or movie) has the desired effect and which elements interest the viewer most.

Affective computing is also being applied to the development of communicative technologies for use by people with autism.[44]

Video games

Affective video games can access their players' emotional states through biofeedback devices.[45] A particularly simple form of biofeedback is available through gamepads that measure the pressure with which a button is pressed: this has been shown to correlate strongly with the players' level of arousal;[46] at the other end of the scale are brain–computer interfaces.[47][48] Affective games have been used in medical research to support the emotional development of autistic children.[49]

Critical perspectives

Mainstream affective computing, as it has been characterized above, is critically discussed, e.g., within the field of human-computer interaction.

When Rosalind Picard coined the term 'affective computing', she outlined a cognitivist research program whose goal it is to " ... give computers the ability to recognize, express, and in some cases, 'have' emotions".[50] A range of researchers have criticized this research program and outlined a post-cognitivist, "interactional" perspective which, as Kirsten Boehner and collaborators suggest, " ... take[s] emotion as a social and cultural product experienced through our interactions".[51][52][53] They criticize the Picardian approach for its cognitivist notion of emotion that they also describe as an "information model" of emotion:

Both cognition and emotion are construed here as inherently private and information-based: biopsychological events that occur entirely within the body. Like cognition, emotion can be modeled as a form of information processing, and another set of inputs to cognitive processing. This information account of emotion talks about it as a form of internal signaling, providing a context for cognitive action.[52]: 278 

The information model treats emotion as "objective, internal, private, and mechanistic". It reduces emotion to a discrete psychological signal that is assumed to be formalizable and measurable in rather unproblematic ways.[52]: 280  Critics of the Picardian approach to affective computing hold that such an understanding of emotion undercuts the complexity of emotional experience.

The post-cognitive, interactional approach to affective computing departs from the Picardian research program in three ways: First, it adopts a notion of emotion as constituted in social interaction. This is not to deny that emotion has biophysical aspects, but it is to underline that emotion is "culturally grounded, dynamically experienced, and to some degree constructed in action and interaction".[52]: 276  Second, the interactional approach does not seek to enhance the affect-processing capacities of computer systems. Rather, it seeks to help " ... people to understand and experience their own emotions".[52] Third, the interactional approach accordingly adopts different design and evaluation strategies than those described by the Picardian research program. Interactional affective design supports open-ended, (inter-)individual processes of affect interpretation. It recognizes the context-sensitive, subjective, changing and possibly ambiguous character of affect interpretation. And it takes into account that these sense-making efforts and affect itself may resist a computational formalization.[52]: 284 

To summarize, Picard and her adherents pursue a cognitivist measuring approach to users' affect, while the interactional perspective endorses a pragmatist approach that views (emotional) experience as inherently referring to social interaction.[54] While the Picardian approach, thus, focuses on human-machine relations, interactional affective computing focuses primarily on computer-mediated interpersonal communication. And while the Picardian approach is concerned with the measurement and modeling of physiological variables, interactional affective computing is concerned with emotions as complex subjective interpretations of affect, arguing that emotions, not affect, are at stake from the point of view of technology users.

References

  1. ^ Tao, Jianhua; Tieniu Tan (2005). "Affective Computing: A Review". Affective Computing and Intelligent Interaction. Vol. LNCS 3784. Springer. pp. 981–995. doi:10.1007/11573548.
  2. ^ James, William (1884). "What is Emotion". Mind. 9: 188–205. doi:10.1093/mind/os-IX.34.188. Cited by Tao and Tan.
  3. ^ "Affective Computing" MIT Technical Report #321 (Abstract), 1995
  4. ^ Kleine-Cosack, Christian (October 2006). "Recognition and Simulation of Emotions" (PDF). Archived from the original (PDF) on May 28, 2008. Retrieved May 13, 2008. The introduction of emotion to computer science was done by Pickard (sic) who created the field of affective computing.
  5. ^ Diamond, David (December 2003). "The Love Machine; Building computers that care". Wired. Archived from the original on 18 May 2008. Retrieved May 13, 2008. Rosalind Picard, a genial MIT professor, is the field's godmother; her 1997 book, Affective Computing, triggered an explosion of interest in the emotional side of computers and their users.
  6. ^ Garay, Nestor; Idoia Cearreta; Juan Miguel López; Inmaculada Fajardo (April 2006). "Assistive Technology and Affective Mediation" (PDF). Human Technology. 2 (1): 55–83. Archived from the original (PDF) on 28 May 2008. Retrieved 2008-05-12.
  7. ^ Heise, David (2004). "Enculturating agents with expressive role behavior". In Sabine Payr; Trappl, Robert (eds.). Agent Culture: Human-Agent Interaction in a Mutlicultural World. Lawrence Erlbaum Associates. pp. 127–142.
  8. ^ Restak, Richard (2006-12-17). "Mind Over Matter". The Washington Post. Retrieved 2008-05-13.
  9. ^ Martinez, Aleix; Du, Shichuan (2012). "A model of the perception of facial expressions of emotion by humans: Research overview and perspectives". The Journal of Machine Learning Research. 13 (1): 1589–1608.
  10. ^ Breazeal, C. and Aryananda, L. Recognition of affective communicative intent in robot-directed speech. Autonomous Robots 12 1, 2002. pp. 83–104.
  11. ^ a b c Dellaert, F.; Polizin, T.; Waibel, A. (1996). "Recognizing Emotion in Speech". In Proc. of ICSLP 1996, Philadelphia, PA. pp. 1970–1973.
  12. ^ Roy, D.; Pentland, A. (1996-10-01). "Automatic spoken affect classification and analysis". Proceedings of the Second International Conference on Automatic Face and Gesture Recognition: 363–367. doi:10.1109/AFGR.1996.557292.
  13. ^ Lee, C.M.; Narayanan, S.; Pieraccini, R., Recognition of Negative Emotion in the Human Speech Signals, Workshop on Auto. Speech Recognition and Understanding, Dec 2001
  14. ^ Neiberg, D; Elenius, K; Laskowski, K (2006). "Emotion recognition in spontaneous speech using GMMs" (PDF). Proceedings of Interspeech.
  15. ^ Yacoub, Sherif; Simske, Steve; Lin, Xiaofan; Burns, John (2003). "Recognition of Emotions in Interactive Voice Response Systems". Proceedings of Eurospeech: 1–4.
  16. ^ a b Hudlicka 2003, p. 24
  17. ^ Hudlicka 2003, p. 25
  18. ^ Charles Osgood; William May; Murray Miron (1975). Cross-Cultural Universals of Affective Meaning. Univ. of Illinois Press. ISBN 978-94-007-5069-2.
  19. ^ a b Scherer 2010, p. 241
  20. ^ "Gaussian Mixture Model". Connexions – Sharing Knowledge and Building Communities. Retrieved 10 March 2011.
  21. ^ S.E. Khoruzhnikov; et al. (2014). "Extended speech emotion recognition and prediction". Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 14 (6): 137.
  22. ^ a b Ekman, P. & Friesen, W. V (1969). The repertoire of nonverbal behavior: Categories, origins, usage, and coding. Semiotica, 1, 49–98.
  23. ^ a b Steidl, Stefan (5 March 2011). "FAU Aibo Emotion Corpus". Pattern Recognition Lab.
  24. ^ a b Scherer 2010, p. 243
  25. ^ Caridakis, G.; Malatesta, L.; Kessous, L.; Amir, N.; Raouzaiou, A.; Karpouzis, K. (November 2–4, 2006). Modeling naturalistic affective states via facial and vocal expressions recognition. International Conference on Multimodal Interfaces (ICMI'06). Banff, Alberta, Canada.
  26. ^ Balomenos, T.; Raouzaiou, A.; Ioannou, S.; Drosopoulos, A.; Karpouzis, K.; Kollias, S. (2004). "Emotion Analysis in Man-Machine Interaction Systems". In Bengio, Samy; Bourlard, Herve (eds.). Machine Learning for Multimodal Interaction. Lecture Notes in Computer Science. Vol. 3361. Springer-Verlag. pp. 318–328.
  27. ^ Ekman, Paul (1972). Cole, J. (ed.). Universals and Cultural Differences in Facial Expression of Emotion. Nebraska Symposium on Motivation. Lincoln, Nebraska: University of Nebraska Press. pp. 207–283.
  28. ^ Ekman, Paul (1999). "Basic Emotions". In Dalgleish, T; Power, M (eds.). Handbook of Cognition and Emotion (PDF). Sussex, UK: John Wiley & Sons. Archived from the original (PDF) on 2010-12-28.
  29. ^ "Facial Action Coding System (FACS) and the FACS Manual" Archived October 19, 2013, at the Wayback Machine. A Human Face. Retrieved 21 March 2011.
  30. ^ "Spatial domain methods".
  31. ^ Clever Algorithms. "Bacterial Foraging Optimization Algorithm – Swarm Algorithms – Clever Algorithms". Clever Algorithms. Retrieved 21 March 2011.
  32. ^ "Soft Computing". Soft Computing. Retrieved 18 March 2011.
  33. ^ "Hybrid Technique for Human Face Emotion Detection" (PDF). International Journal of Advanced Computer Science and Applications. 1 (6): 91–101. 2010. doi:10.14569/IJACSA.2010.010615]. Retrieved 11 March 2011. {{cite journal}}: Unknown parameter |authors= ignored (help)
  34. ^ Williams, Mark. "Better Face-Recognition Software – Technology Review". Technology Review: The Authority on the Future of Technology. Retrieved 21 March 2011.
  35. ^ J. K. Aggarwal, Q. Cai, Human Motion Analysis: A Review, Computer Vision and Image Understanding, Vol. 73, No. 3, 1999
  36. ^ a b Pavlovic, Vladimir I.; Sharma, Rajeev; Huang, Thomas S. (1997). "Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence.
  37. ^ a b Picard, Rosalind (1998). Affective Computing. MIT.
  38. ^ Larsen JT, Norris CJ, Cacioppo JT, "Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii", (September 2003)
  39. ^ Ritendra Datta, Dhiraj Joshi, Jia Li and James Z. Wang, Studying Aesthetics in Photographic Images Using a Computational Approach, Lecture Notes in Computer Science, vol. 3953, Proceedings of the European Conference on Computer Vision, Part III, pp. 288-301, Graz, Austria, May 2006.
  40. ^ AutoTutor
  41. ^ "Estimation of behavioral user state based on eye gaze and head pose—application in an e-learning environment". Multimedia Tools and Applications. 41 (3). Springer: 469–493. 2009. {{cite journal}}: Unknown parameter |authors= ignored (help)
  42. ^ Janssen, Joris H.; van den Broek, Egon L. (July 2012). "Tune in to Your Emotions: A Robust Personalized Affective Music Player". User Modeling and User-Adapted Interaction. 22 (3): 255–279. doi:10.1007/s11257-011-9107-7. Retrieved 29 March 2016.
  43. ^ "Mona Lisa: Smiling? Computer Scientists Develop Software That Evaluates Facial Expressions". ScienceDaily. 1 August 2006. Archived from the original on 19 October 2007. {{cite web}}: Unknown parameter |deadurl= ignored (|url-status= suggested) (help)
  44. ^ Projects in Affective Computing
  45. ^ Gilleade, Kiel Mark; Dix, Alan; Allanson, Jen (2005). Affective Videogames and Modes of Affective Gaming: Assist Me, Challenge Me, Emote Me (PDF). Proc. DiGRA Conf.
  46. ^ Sykes, Jonathan; Brown, Simon (2003). Affective gaming: Measuring emotion through the gamepad. CHI '03 Extended Abstracts on Human Factors in Computing Systems. CiteSeerX 10.1.1.92.2123. doi:10.1145/765891.765957. ISBN 1581136374.
  47. ^ Nijholt, Anton; Plass-Oude Bos, Danny; Reuderink, Boris (2009). "Turning shortcomings into challenges: Brain–computer interfaces for games". Entertainment Computing. 1 (2): 85–94. doi:10.1016/j.entcom.2009.09.007.
  48. ^ Reuderink, Boris; Nijholt, Anton; Poel, Mannes (2009). Affective Pacman: A Frustrating Game for Brain-Computer Interface Experiments. Intelligent Technologies for Interactive Entertainment (INTETAIN). pp. 221–227. doi:10.1007/978-3-642-02315-6_23. ISBN 978-3-642-02314-9.
  49. ^ Khandaker, M (2009). "Designing affective video games to support the social-emotional development of teenagers with autism spectrum disorders". Studies in health technology and informatics. 144: 37–9. PMID 19592726.
  50. ^ Picard, Rosalind (1997). Affective Computing. Cambridge, MA: MIT Press. p. 1.
  51. ^ Boehner, Kirsten; DePaula, Rogerio; Dourish, Paul; Sengers, Phoebe (2005). "Affection: From Information to Interaction". Proceedings of the Aarhus Decennial Conference on Critical Computing: 59–68.
  52. ^ a b c d e f Boehner, Kirsten; DePaula, Rogerio; Dourish, Paul; Sengers, Phoebe (2007). "How emotion is made and measured". International Journal of Human-Computer Studies. 65 (4): 275–291. doi:10.1016/j.ijhcs.2006.11.016.
  53. ^ Hook, Kristina; Staahl, Anna; Sundstrom, Petra; Laaksolahti, Jarmo (2008). "Interactional empowerment" (PDF). Proc. CHI: 647–656.
  54. ^ Battarbee, Katja; Koskinen, Ilpo (2005). "Co-experience: user experience as interaction" (PDF). CoDesign. 1 (1): 5–18.

Sources

  • Hudlicka, Eva (2003). "To feel or not to feel: The role of affect in human-computer interaction". International Journal of Human-Computer Studies. 59 (1–2): 1–32. CiteSeerX 10.1.1.180.6429. doi:10.1016/s1071-5819(03)00047-8.
  • Scherer, Klaus R; Banziger, T; Roesch, Etienne B (2010). A blueprint for affective computing: a sourcebook. Oxford: Oxford University Press.