Emotion recognition

From Wikipedia, the free encyclopedia


Revision as of 09:09, 15 June 2018

Emotion recognition is the process of identifying human emotion, most typically from facial expressions as well as from verbal expressions. Humans do this automatically, and computational methodologies for the task have also been developed.

Human

Humans show universal consistency in recognising emotions, but also show a great deal of variability between individuals in their abilities. This has been a major topic of study in psychology.

Automatic

This process leverages techniques from multiple areas, such as signal processing, machine learning, and computer vision. Computers use a variety of methods to interpret emotion, such as Bayesian networks,[1] Gaussian mixture models,[2] and hidden Markov models.[3]
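As a toy illustration of the statistical idea behind Gaussian-mixture-based emotion recognition, the sketch below fits one Gaussian per emotion to a single hypothetical acoustic feature (mean pitch) and classifies by log-likelihood; all values are invented, and a real system would use many mixture components and many features:

```python
import math

# Toy per-class Gaussian model (a one-component GMM per emotion class),
# illustrating the statistical idea behind GMM-based emotion recognition.
# The "mean pitch" feature values below are invented for illustration.
train = {
    "anger":   [220.0, 240.0, 260.0],
    "sadness": [140.0, 150.0, 160.0],
}

def fit_gaussian(values):
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, var

models = {label: fit_gaussian(vals) for label, vals in train.items()}

def log_likelihood(x, mu, var):
    # Log-density of a univariate Gaussian (var > 0 for the data above)
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def classify(x):
    # Pick the emotion whose Gaussian assigns the feature the highest likelihood
    return max(models, key=lambda label: log_likelihood(x, *models[label]))

print(classify(250.0))  # high pitch is closest to the "anger" model
print(classify(145.0))  # low pitch is closest to the "sadness" model
```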

Approaches

The task of emotion recognition often involves the analysis of human expressions in multimodal forms such as texts, audio, or video.[4] Different emotion types are detected through the integration of information from facial expressions, body movement and gestures, and speech.[5] Existing approaches to classifying emotion types generally fall into three main categories: knowledge-based techniques, statistical methods, and hybrid approaches.[6]

Knowledge-based Techniques

Knowledge-based techniques (sometimes referred to as lexicon-based techniques) utilize domain knowledge and the semantic and syntactic characteristics of language to detect certain emotion types.[7] In this approach, it is common to use knowledge-based resources during the emotion classification process, such as WordNet, SenticNet,[8] ConceptNet, and EmotiNet,[9] to name a few.[10] One advantage of this approach is the accessibility and economy brought about by the wide availability of such knowledge-based resources.[6] A limitation of this technique, on the other hand, is its inability to handle concept nuances and complex linguistic rules.[6]
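A minimal sketch of the lexicon-based idea: tally emotion words against a tiny hand-made lexicon, which here stands in for real resources such as WordNet or EmotiNet; the word lists are invented for illustration:

```python
# Minimal lexicon-based sketch: count how many tokens of the input fall
# into each emotion's word list, then pick the emotion with the most hits.
LEXICON = {
    "joy":   {"happy", "delighted", "glad", "joyful"},
    "anger": {"angry", "furious", "annoyed", "outraged"},
}

def detect_emotion(text):
    tokens = text.lower().split()
    counts = {emotion: sum(tok in words for tok in tokens)
              for emotion, words in LEXICON.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else None  # None: no lexicon hit at all

print(detect_emotion("I am so happy and glad today"))  # joy
```

Note how this sketch also exhibits the stated limitation: a nuance like "not happy" would still count as a hit for "joy", because no linguistic rules are applied.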

Knowledge-based techniques can be mainly classified into two categories: dictionary-based and corpus-based approaches.[7] Dictionary-based approaches find opinion or emotion seed words in a dictionary and search for their synonyms and antonyms to expand the initial list of opinions or emotions.[11] Corpus-based approaches, on the other hand, start with a seed list of opinion or emotion words and expand the database by finding other words with context-specific characteristics in a large corpus.[11] While corpus-based approaches take context into account, their performance still varies across domains, since a word can have a different orientation in one domain than in another.[12]
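The dictionary-based expansion step can be sketched as a walk over synonym links; the toy thesaurus below is invented, whereas a real system would query a dictionary resource such as WordNet:

```python
# Dictionary-based expansion sketch: grow a seed list of emotion words by
# repeatedly following synonym links until no new words are found.
SYNONYMS = {
    "happy": {"glad", "joyful"},
    "glad":  {"pleased"},
    "sad":   {"unhappy", "sorrowful"},
}

def expand(seeds):
    found, frontier = set(seeds), set(seeds)
    while frontier:
        word = frontier.pop()
        for synonym in SYNONYMS.get(word, ()):
            if synonym not in found:      # visit each new word once
                found.add(synonym)
                frontier.add(synonym)
    return found

print(sorted(expand({"happy"})))  # ['glad', 'happy', 'joyful', 'pleased']
```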

Statistical Methods

Statistical methods commonly involve the use of different machine learning algorithms, in which a large set of annotated data is fed into the algorithms for the system to learn and predict the appropriate emotion types.[6] This approach normally involves two sets of data: the training set and the testing set, where the former is used to learn the attributes of the data, while the latter is used to validate the performance of the machine learning algorithm.[13] Machine learning algorithms generally provide more reasonable classification accuracy than other approaches, but one of the challenges in achieving good classification results is the need for a sufficiently large training set.[6][13]
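The two-set protocol described above can be sketched as follows, with invented labeled examples; real systems use far larger annotated corpora and often cross-validation:

```python
import random

# Sketch of the training/testing protocol: shuffle the annotated data,
# then hold out roughly a third of it for validation.
data = [("great", "joy"), ("awful", "anger"), ("lovely", "joy"),
        ("terrible", "anger"), ("nice", "joy"), ("horrid", "anger")]

random.seed(42)                     # reproducible split for the example
random.shuffle(data)
cut = int(len(data) * 2 / 3)
train_set, test_set = data[:cut], data[cut:]
# train_set is used to fit the model; test_set is touched only to
# measure accuracy, never during training.
```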

Some of the most commonly used machine learning algorithms include Support Vector Machines (SVM), Naive Bayes, and Maximum Entropy.[14] Deep learning, a branch of machine learning, is also widely employed in emotion recognition.[15][16][17] Well-known deep learning algorithms include different architectures of Artificial Neural Networks (ANN), such as the Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and the Extreme Learning Machine (ELM).[14] The popularity of deep learning approaches in emotion recognition may be attributed mainly to their success in related applications such as computer vision, speech recognition, and Natural Language Processing (NLP).[14]
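As a self-contained illustration of one of the classifiers named above, here is a tiny multinomial Naive Bayes over bag-of-words features with Laplace smoothing; the training sentences are invented, and a real system would train on a large annotated corpus:

```python
import math
from collections import Counter, defaultdict

# Tiny multinomial Naive Bayes: each class is modeled by its word
# frequencies, and a new text gets the class with the highest posterior.
train = [("i am so happy today", "joy"),
         ("what a wonderful happy surprise", "joy"),
         ("i am angry and furious", "anger"),
         ("this makes me so angry", "anger")]

class_words = defaultdict(list)
for text, label in train:
    class_words[label].extend(text.split())

vocab = {w for words in class_words.values() for w in words}

def predict(text):
    def score(label):
        counts, total = Counter(class_words[label]), len(class_words[label])
        s = math.log(1 / len(class_words))           # uniform class prior
        for tok in text.split():
            # Laplace smoothing so unseen words do not zero out the score
            s += math.log((counts[tok] + 1) / (total + len(vocab)))
        return s
    return max(class_words, key=score)

print(predict("happy wonderful day"))  # joy
print(predict("so angry right now"))   # anger
```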

Hybrid Approaches

Hybrid approaches in emotion recognition are essentially a combination of knowledge-based techniques and statistical methods, which exploit complementary characteristics from both techniques.[6] Some of the works that have applied an ensemble of knowledge-driven linguistic elements and statistical methods include sentic computing and iFeel, both of which have adopted the concept-level knowledge-based resource SenticNet.[18][19] The role of such knowledge-based resources in the implementation of hybrid approaches is highly important in the emotion classification process.[10] Since hybrid techniques gain the benefits of both knowledge-based and statistical approaches, they tend to achieve better classification performance than either method employed independently.[7] A downside of hybrid techniques, however, is their computational complexity during the classification process.[10]
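One simple way a hybrid system can combine the two signals is a weighted sum of a lexicon score and a statistical model's class probabilities; in the sketch below the statistical model is a stub returning uniform probabilities, standing in for a trained classifier, and the lexicon is invented:

```python
# Hybrid sketch: blend a (normalized) lexicon hit count with a statistical
# model's class probabilities, so the lexicon can break ties when the
# statistical model is unsure.
LEXICON = {"joy": {"happy", "glad"}, "anger": {"angry", "furious"}}

def lexicon_scores(text):
    tokens = text.lower().split()
    return {emo: sum(tok in words for tok in tokens)
            for emo, words in LEXICON.items()}

def statistical_scores(text):
    return {"joy": 0.5, "anger": 0.5}   # hypothetical "unsure" model stub

def hybrid_predict(text, weight=0.5):
    lex, stat = lexicon_scores(text), statistical_scores(text)
    total = sum(lex.values()) or 1      # avoid division by zero
    combined = {emo: weight * stat[emo] + (1 - weight) * lex[emo] / total
                for emo in LEXICON}
    return max(combined, key=combined.get)

print(hybrid_predict("i am happy and glad"))  # joy: the lexicon breaks the tie
```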

Datasets

Data is an integral part of the existing approaches in emotion recognition, and in most cases it is a challenge to obtain the annotated data needed to train machine learning algorithms.[11] While most publicly available data are not annotated, annotated datasets do exist for emotion recognition research.[13] For the task of classifying different emotion types from multimodal sources in the form of texts, audio, or videos, the following datasets are available:

  1. HUMAINE: provides natural clips with emotion words and context labels in multiple modalities[20]
  2. Belfast database: provides clips with a wide range of emotions from TV programs and interview recordings[21]
  3. SEMAINE: provides audiovisual recordings between a person and a virtual agent and contains emotion annotations such as angry, happy, fear, disgust, sadness, contempt, and amusement[22]
  4. IEMOCAP: provides recordings of dyadic sessions between actors and contains emotion annotations such as happiness, anger, sadness, frustration, and neutral state[23]
  5. eNTERFACE: provides audiovisual recordings of subjects from seven nationalities and contains emotion annotations such as happiness, anger, sadness, surprise, disgust, and fear[24]

Applications

Computer programmers often use Paul Ekman's Facial Action Coding System as a guide.

Emotion recognition is used for a variety of reasons. Affectiva uses it to help advertisers and content creators sell their products more effectively.[25] Affectiva also makes a Q-sensor that gauges the emotions of autistic children. Emotient was a startup company that utilized artificial intelligence to predict "attitudes and actions based on facial expressions".[26] Apple indicated its intention to buy Emotient in January 2016.[26] nViso provides real-time emotion recognition for web and mobile applications through a real-time API.[27] Visage Technologies AB offers emotion estimation as part of its Visage SDK for marketing, scientific research, and similar purposes.[28] Eyeris is an emotion recognition company that works with embedded system manufacturers, including car makers and social robotics companies, on integrating its face analytics and emotion recognition software, as well as with video content creators to help them measure the perceived effectiveness of their short- and long-form video creative.[29][30] Emotion recognition and emotion analysis are being studied by companies and universities around the world.

See also

References

  1. ^ Miyakoshi, Yoshihiro, and Shohei Kato. "Facial Emotion Detection Considering Partial Occlusion Of Face Using Baysian Network". Computers and Informatics (2011): 96–101.
  2. ^ Hari Krishna Vydana, P. Phani Kumar, K. Sri Rama Krishna and Anil Kumar Vuppala. "Improved emotion recognition using GMM-UBMs". 2015 International Conference on Signal Processing and Communication Engineering Systems
  3. ^ B. Schuller, G. Rigoll M. Lang. "Hidden Markov model-based speech emotion recognition". ICME '03. Proceedings. 2003 International Conference on Multimedia and Expo, 2003.
  4. ^ Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003.
  5. ^ Caridakis, George; Castellano, Ginevra; Kessous, Loic; Raouzaiou, Amaryllis; Malatesta, Lori; Asteriadis, Stelios; Karpouzis, Kostas (19 September 2007). "Multimodal emotion recognition from expressive faces, body gestures and speech". IFIP The International Federation for Information Processing. Springer US: 375–388. doi:10.1007/978-0-387-74161-1_41.
  6. ^ a b c d e f Cambria, Erik (March 2016). "Affective Computing and Sentiment Analysis". IEEE Intelligent Systems. 31 (2): 102–107. doi:10.1109/MIS.2016.31.
  7. ^ a b c Rani, Meesala Shobha; S, Sumathy (26 September 2017). "Perspectives of the performance metrics in lexicon and hybrid based approaches: a review". International Journal of Engineering & Technology. 6 (4): 108. doi:10.14419/ijet.v6i4.8295.
  8. ^ Cambria, Erik; Poria, Soujanya; Bajpai, Rajiv; Schuller, Bjoern (2016). "SenticNet 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives". Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers.
  9. ^ Balahur, Alexandra; Hermida, JesúS M.; Montoyo, AndréS (1 November 2012). "Detecting implicit expressions of emotion in text: A comparative analysis". Decision Support Systems. 53 (4): 742–753. doi:10.1016/j.dss.2012.05.024. ISSN 0167-9236.
  10. ^ a b c Medhat, Walaa; Hassan, Ahmed; Korashy, Hoda (December 2014). "Sentiment analysis algorithms and applications: A survey". Ain Shams Engineering Journal. 5 (4): 1093–1113. doi:10.1016/j.asej.2014.04.011.
  11. ^ a b c Madhoushi, Zohreh; Hamdan, Abdul Razak; Zainudin, Suhaila (2015). "Sentiment analysis techniques in recent works - IEEE Conference Publication". ieeexplore.ieee.org. doi:10.1109/SAI.2015.7237157.
  12. ^ Hemmatian, Fatemeh; Sohrabi, Mohammad Karim (18 December 2017). "A survey on classification techniques for opinion mining and sentiment analysis". Artificial Intelligence Review. doi:10.1007/s10462-017-9599-6.
  13. ^ a b c Sharef, Nurfadhlina Mohd; Zin, Harnani Mat; Nadali, Samaneh (1 March 2016). "Overview and Future Opportunities of Sentiment Analysis Approaches for Big Data". Journal of Computer Science. 12 (3): 153–168. doi:10.3844/jcssp.2016.153.168.
  14. ^ a b c Sun, Shiliang; Luo, Chen; Chen, Junyu (July 2017). "A review of natural language processing techniques for opinion mining systems". Information Fusion. 36: 10–25. doi:10.1016/j.inffus.2016.10.004.
  15. ^ Majumder, Navonil; Poria, Soujanya; Gelbukh, Alexander; Cambria, Erik (March 2017). "Deep Learning-Based Document Modeling for Personality Detection from Text". IEEE Intelligent Systems. 32 (2): 74–79. doi:10.1109/MIS.2017.23.
  16. ^ Mahendhiran, P. D.; Kannimuthu, S. (May 2018). "Deep Learning Techniques for Polarity Classification in Multimodal Sentiment Analysis". International Journal of Information Technology & Decision Making. 17 (03): 883–910. doi:10.1142/S0219622018500128.
  17. ^ Yu, Hongliang; Gui, Liangke; Madaio, Michael; Ogan, Amy; Cassell, Justine; Morency, Louis-Philippe (23 October 2017). "Temporally Selective Attention Model for Social and Affective State Recognition in Multimedia Content". ACM: 1743–1751. doi:10.1145/3123266.3123413.
  18. ^ Cambria, Erik; Hussain, Amir (2015). Sentic Computing: A Common-Sense-Based Framework for Concept-Level Sentiment Analysis. Springer Publishing Company, Incorporated. ISBN 3319236539.
  19. ^ Araújo, Matheus; Gonçalves, Pollyanna; Cha, Meeyoung; Benevenuto, Fabrício (7 April 2014). "iFeel: a system that compares and combines sentiment analysis methods". ACM: 75–78. doi:10.1145/2567948.2577013.
  20. ^ Petta, Paolo; Pelachaud, Catherine; Cowie, Roddy, eds. (2011). Emotion-Oriented Systems: The Humaine Handbook. Berlin: Springer. ISBN 978-3-642-15184-2.
  21. ^ Douglas-Cowie, Ellen; Campbell, Nick; Cowie, Roddy; Roach, Peter (1 April 2003). "Emotional speech: towards a new generation of databases". Speech Communication. 40 (1–2): 33–60. doi:10.1016/S0167-6393(02)00070-5. ISSN 0167-6393.
  22. ^ McKeown, G.; Valstar, M.; Cowie, R.; Pantic, M.; Schroder, M. (January 2012). "The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent". IEEE Transactions on Affective Computing. 3 (1): 5–17. doi:10.1109/T-AFFC.2011.20.
  23. ^ Busso, Carlos; Bulut, Murtaza; Lee, Chi-Chun; Kazemzadeh, Abe; Mower, Emily; Kim, Samuel; Chang, Jeannette N.; Lee, Sungbok; Narayanan, Shrikanth S. (5 November 2008). "IEMOCAP: interactive emotional dyadic motion capture database". Language Resources and Evaluation. 42 (4): 335–359. doi:10.1007/s10579-008-9076-6. ISSN 1574-020X.
  24. ^ Martin, O.; Kotsia, I.; Macq, B.; Pitas, I. (3 April 2006). "The eNTERFACE'05 Audio-Visual Emotion Database". IEEE Computer Society: 8. doi:10.1109/ICDEW.2006.145.
  25. ^ "Affectiva".
  26. ^ a b DeMuth Jr., Chris (8 January 2016). "Apple Reads Your Mind". M&A Daily. Seeking Alpha. Retrieved 9 January 2016.
  27. ^ "nViso". nViso.ch.
  28. ^ "Visage Technologies".
  29. ^ "Feeling sad, angry? Your future car will know".
  30. ^ "Cars May Soon Warn Drivers Before They Nod Off".