
Nadine Social Robot

From Wikipedia, the free encyclopedia


Nadine
Year of creation: 2013

Nadine is a realistic female humanoid social robot designed by the Institute for Media Innovation at Nanyang Technological University and modelled on its director, Professor Nadia Magnenat Thalmann. The robot has a strong human likeness, with natural-looking skin and hair[1][2] and realistic hands.[3][4][5][6][7][8] Nadine is a socially intelligent robot who is friendly, returns greetings, makes eye contact, and remembers the conversations she has had with each user. She can answer questions in several languages and show emotions in both her gestures and her face, depending on the content of the interaction.[citation needed] Nadine can recognise people she has previously met and engage in flowing conversation,[9][10][11][12] and she remembers facts and events related to each person. She is also fitted with a personality, meaning her mood can sour depending on what is said to her. Nadine has a total of 27 degrees of freedom for facial expressions and upper-body movements. She can assist people with special needs by reading stories, showing images, setting up Skype sessions, sending emails, and communicating with family members.[13][14][15][16] She can also play the role of a personal, private coach who is available when nobody else is there.[17][18]
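The behaviour described above (recognising a returning user and recalling earlier conversations) can be illustrated with a minimal sketch. The class and method names below (PersonMemory, remember, recall, greeting) are hypothetical illustrations and are not taken from Nadine's actual software.

    # Minimal sketch only: a per-person conversation memory of the kind
    # described above. All names here are hypothetical, not Nadine's code.
    from collections import defaultdict
    from datetime import datetime

    class PersonMemory:
        """Stores utterances keyed by a recognised person's identity."""

        def __init__(self):
            # person_id -> list of (timestamp, utterance) episodes
            self._episodes = defaultdict(list)

        def remember(self, person_id: str, utterance: str) -> None:
            self._episodes[person_id].append((datetime.now(), utterance))

        def recall(self, person_id: str, n: int = 3) -> list:
            """Return the n most recent utterances from this person."""
            return [u for _, u in self._episodes[person_id][-n:]]

        def greeting(self, person_id: str) -> str:
            # A returning user gets a personalised greeting that draws on
            # the stored episodes; a new user gets a neutral one.
            if self._episodes[person_id]:
                last = self.recall(person_id, 1)[0]
                return f"Nice to see you again. Last time you said: {last}"
            return "Hello, I don't think we have met before."

For example, calling greeting("alice") before and after remember("alice", "I like robots.") would return the neutral and then the personalised greeting.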

Platform

Nadine’s platform is implemented as a classic perception-decision-action architecture. The perception layer consists of a Microsoft Kinect V2 and a microphone, and performs face recognition, gesture recognition and some understanding of social situations. The decision layer includes emotion and memory models as well as social attention. Finally, the action layer consists of a dedicated robot controller that handles emotional expressions, lip synchronization and online gaze generation.
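A minimal sketch of such a perception-decision-action loop, assuming stub sensor and actuator interfaces, might look as follows; the function names (perceive, decide, act, control_loop) are illustrative and do not correspond to Nadine's actual controller or the Kinect API.

    # Illustrative sketch of a perception-decision-action loop, with stub
    # sensor/actuator interfaces standing in for the Kinect and the robot.
    import time

    def perceive(frame, audio):
        # Perception layer: the real system runs face and gesture
        # recognition here; this stub just packages the raw inputs.
        return {"face": frame, "speech": audio}

    def decide(percepts, state):
        # Decision layer: a crude stand-in for the emotion and memory
        # models; hearing speech nudges the mood and triggers a reply.
        if percepts["speech"]:
            state["mood"] = state.get("mood", 0.0) + 0.1
            return {"say": "Hello!", "gaze": percepts["face"]}
        return {"say": None, "gaze": None}

    def act(action):
        # Action layer: would drive expressions, lip synchronization and
        # gaze generation on the robot; here it just prints the action.
        if action["say"]:
            print("robot says:", action["say"], "| gazing at:", action["gaze"])

    def control_loop(get_frame, get_audio, hz=10, steps=30):
        state = {}
        for _ in range(steps):
            act(decide(perceive(get_frame(), get_audio()), state))
            time.sleep(1.0 / hz)

Running control_loop(lambda: "face@(0,0)", lambda: "hi", steps=3) would print three greeting actions, one per control tick.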

Specifications

Nadine
Weight: 35 kg
Sitting height: 131.5 cm
Degrees of freedom: 27
Rated input voltage/frequency: AC 100-240 V
Power consumption: approx. 500 W

References

  1. ^ Y. Xiao et al., Body Movement Analysis and Recognition, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, Pp. 31-53, 2015
  2. ^ Z. Zhang, A. Beck, and N. Magnenat Thalmann, Human-Like Behavior Generation Based on Head-Arms Model for Robot Tracking External Targets and Body Parts, IEEE Transactions on Cybernetics, Vol. 45, No. 8, Pp. 1390-1400, 2015
  3. ^ H. Liang, J. Yuan, D. Thalmann and N. Magnenat Thalmann, AR in Hand: Egocentric Palm Pose Tracking and Gesture Recognition for Augmented Reality Applications, ACM Multimedia Conference 2015 (ACMMM 2015), Brisbane, Australia, 2015
  4. ^ H. Liang, J. Yuan and D. Thalmann, Egocentric Hand Pose Estimation and Distance Recovery in a Single RGB Image, IEEE International Conference on Multimedia and Expo (ICME 2015), Italy, 2015
  5. ^ H. Liang, J. Yuan and D. Thalmann, Resolving Ambiguous Hand Pose Predictions by Exploiting Part Correlations, IEEE Transactions on Circuits and Systems for Video Technology, Pp. 1, Issue 99, 2014
  6. ^ H. Liang and J. Yuan, Hand Parsing and Gesture Recognition with a Commodity Depth Camera, Computer Vision and Machine Learning with RGB-D Sensors, Springer, Pp. 239-265, 2014
  7. ^ H. Liang, J. Yuan and D. Thalmann, Model-based Hand Pose Estimation via Spatial-temporal Hand Parsing and 3D Fingertip Localization, The Visual Computer: International Journal of Computer Graphics, Vol. 29, Issue 6-8, Pp. 837-848, 2013
  8. ^ H. Liang, J. Yuan and D. Thalmann, Hand Pose Estimation by Combining Fingertip Tracking and Articulated ICP, 11th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI 2012), Singapore, 2012
  9. ^ J. Ren, X. Jiang and J. Yuan, Quantized Fuzzy LBP for Face Recognition, 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015, Brisbane, Australia, 2015
  10. ^ J. Ren, X. Jiang and J. Yuan, Learning LBP Structure by Maximizing the Conditional Mutual Information, Pattern Recognition, Vol. 48, Issue 10, Pp. 3180-3190, 2015
  11. ^ J. Ren, X. Jiang and J. Yuan, A Chi-Squared-Transformed Subspace of LBP Histogram for Visual Recognition, IEEE Transactions on Image Processing, Vol. 24, Issue 6, Pp. 1893-1904, 2015
  12. ^ J. Ren, X. Jiang, J. Yuan and G. Wang, Optimizing LBP Structure For Visual Recognition Using Binary Quadratic Programming, IEEE Signal Processing Letters, Pp. 1346-1350, 2014
  13. ^ A. Beck, Z. Zhang and N. Magnenat Thalmann, Motion Control for Social Behaviors, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, Pp. 237-256, 2015
  14. ^ Z.P. Bian, J. Hou, L.P. Chau and N. Magnenat Thalmann, Fall Detection Based on Body Part Tracking Using a Depth Camera, IEEE Journal of Biomedical and Health Informatics, Vol. 19, No. 2, Pp. 430-439, 2015
  15. ^ J. Zhang, J. Zheng and N. Magnenat Thalmann, PCMD: Personality-Characterized Mood Dynamics Model Towards Personalized Virtual Characters, Computer Animation and Virtual Worlds, Vol. 26, Issue 3-4, Pp. 237-245, 2015
  16. ^ J. Zhang, J. Zheng and N. Magnenat Thalmann, Modeling Personality, Mood, and Emotions, Context Aware Human-Robot and Human-Agent Interaction, Springer International Publishing, Pp. 211-236, 2015
  17. ^ Y. Xiao, Z. Zhang, A. Beck, J. Yuan and D. Thalmann, Human-Robot Interaction by Understanding Upper Body Gestures, MIT Press Journals - Presence: Teleoperators and Virtual Environments, Vol. 23, No. 2, Pp. 133-154, 2014
  18. ^ Z. Yumak, J. Ren, N. Magnenat Thalmann, and J. Yuan, Modelling Multi-Party Interactions among Virtual Characters, Robots, and Humans, MIT Press Journals - Presence: Teleoperators and Virtual Environments, Vol. 23, No. 2, Pp. 172-190, 2014