
Leonardo (robot)

From Wikipedia, the free encyclopedia
'''Leonardo''' is a 2.5-foot-tall [[social robot]], the first<ref>{{cite web |url=http://www.pbs.org/newshour/multimedia/medialab/2.html |title=Furry Robots, Foldable Cars and More Innovations from MIT’s Media Lab |publisher=PBS |date=2011-05-20}}</ref> created by the Personal Robots Group of the [[Massachusetts Institute of Technology]]. Its development is credited to [[Cynthia Breazeal]]. The body was built by [[Stan Winston Studio]], a leader in animatronics,<ref name="Leonardo Project Home Page">{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/overview/overview.html |title=Leonardo Project Home Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref> and was completed in 2002.<ref name="The 50 Best Robots Ever">{{cite web |url=http://www.wired.com/wired/archive/14.01/robots.html |title=The 50 Best Robots Ever |publisher=Wired Magazine |date=January 2006 |author=Robert Capps}}</ref> It was the most complex robot the studio had ever attempted as of 2001.<ref name="Leonardo Project Body Page">{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/body/body.html |title=Leonardo Project Body Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref> Other contributors to the project include NevenVision, Inc., Toyota, NASA’s [[Lyndon B. Johnson Space Center]], and the Naval Research Laboratory. Leonardo is [[anthropomorphic]]: it is covered in synthetic fur and has a vaguely humanoid body with a highly mobile face and arms, though it cannot walk.

Leonardo was created to facilitate the study of human–robot interaction and collaboration. The project has been partially funded by a [[DARPA]] Mobile Autonomous Robot Software (MARS) grant, an [[Office of Naval Research]] Young Investigators Program grant, and the ''Digital Life'' and ''Things That Think'' consortia. The [[MIT Media Lab]] Robotic Life Group, which also studied [[Robonaut]] 1, set out to create a more sophisticated social robot in Leonardo, giving it a different visual tracking system and programs based on infant psychology that they hope will make for better human–robot collaboration. One of the goals of the project was to make it possible for untrained humans to interact with and teach the robot much more quickly, with fewer repetitions. Breazeal has conducted experiments demonstrating that individuals have a deeper, more intense emotional reaction to the embodied Leonardo than to a high-resolution two-dimensional animation of it on a computer screen. Leonardo was awarded a spot on ''Wired'' magazine's 50 Best Robots Ever list in 2006.<ref name="The 50 Best Robots Ever"/>


==Construction==

Approximately sixty motors packed into the small space of the robot's body make its expressive movement possible. The Personal Robots Group developed the motor control systems used for Leonardo, with both 8-axis and 16-axis control packages. Leonardo does not resemble any real creature; instead it has the appearance of a fanciful being. Because it is a social robot, its face was designed to be expressive and communicative, and its fanciful, purposefully young look is intended to encourage humans to interact with it the way they would with a child or pet.<ref name="Leonardo Project Body Page"/>
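
The project's motor-control software is not published; the following minimal Python sketch, with invented joint names and values, illustrates only the general idea of driving many named axes from a small set of expression presets:

<syntaxhighlight lang="python">
# Hypothetical sketch: drive a many-axis animatronic rig from named
# expression presets. Joint names and values are invented; Leonardo's
# real controllers expose 8- and 16-axis control packages.

class AxisController:
    """Stand-in for one of the group's multi-axis motor control packages."""

    def __init__(self, axes):
        # Each axis holds a normalized target position in [0.0, 1.0].
        self.targets = {axis: 0.5 for axis in axes}

    def set_target(self, axis, position):
        # Clamp to the valid range before commanding the motor.
        self.targets[axis] = max(0.0, min(1.0, position))


EXPRESSIONS = {
    "neutral":  {"brow_left": 0.5, "brow_right": 0.5, "ear_left": 0.5},
    "surprise": {"brow_left": 0.9, "brow_right": 0.9, "ear_left": 0.8},
}


def command_expression(name, controller):
    """Send every joint target belonging to one expression preset."""
    for joint, position in EXPRESSIONS[name].items():
        controller.set_target(joint, position)


controller = AxisController(["brow_left", "brow_right", "ear_left"])
command_expression("surprise", controller)
print(controller.targets)
</syntaxhighlight>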

A camera mounted in the robot's right eye captures images of faces, and a facial feature tracker developed by the Neven Vision corporation isolates the faces from those captures. When a person introduces themselves via speech, a buffer of up to 200 views of their face is used to create a model of that person. Leonardo can also track objects and faces visually using a collection of visual feature detectors that include color, skin tone, shape, and motion.<ref>{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/vision/vision.html |title=Leonardo Project Vision Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref>
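
Neven Vision's tracker is proprietary, so the Python sketch below illustrates only the buffering scheme described above, with invented names: up to 200 views of a face accumulate into a per-person model when someone introduces themselves.

<syntaxhighlight lang="python">
# Hypothetical sketch of the face-model buffer described above. The real
# Neven Vision tracker interface is not public; all names are invented.

from collections import deque


class PersonModel:
    """Accumulates up to 200 views of one person's face."""

    MAX_VIEWS = 200

    def __init__(self, name):
        self.name = name
        self.views = deque(maxlen=self.MAX_VIEWS)  # oldest views drop off

    def add_view(self, face_crop):
        self.views.append(face_crop)


models = {}


def on_introduction(name, face_crop):
    """Called when a person introduces themselves via speech."""
    model = models.setdefault(name, PersonModel(name))
    model.add_view(face_crop)


on_introduction("Alice", "frame_0001_face")
print(models["Alice"].name, len(models["Alice"].views))
</syntaxhighlight>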

The group plans for Leonardo to have skin that can detect temperature, proximity, and pressure. To accomplish this, they are experimenting with [[force-sensing resistor]]s and [[quantum tunnelling composite]]s. The sensors are covered with a layer of silicone, like that used in makeup effects, to maintain the robot's aesthetics.<ref>{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/skin/skin.html |title=Leonardo Project Skin Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref>
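
As a rough illustration of how such a sensor array might be read, the following Python sketch thresholds a grid of normalized force readings; the grid layout, normalization, and threshold are invented, not taken from the project:

<syntaxhighlight lang="python">
# Hypothetical sketch: classify readings from a grid of force-sensing
# resistors as touched or untouched. The grid layout, normalization, and
# threshold value are invented for illustration.

def read_touches(samples, threshold=0.15):
    """samples maps (row, col) patch locations to normalized force in [0, 1].

    Returns the set of patch locations currently being pressed.
    """
    return {loc for loc, force in samples.items() if force > threshold}


# A light touch at patch (2, 3); patch (0, 0) is just sensor noise.
print(read_touches({(0, 0): 0.02, (2, 3): 0.40}))  # {(2, 3)}
</syntaxhighlight>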

==Purpose==

The goal of creating Leonardo was to make a social robot. Its motors, sensors, and cameras allow it to mimic human expression, track objects, and interact with objects in a limited way. This helps humans react to the robot in a more familiar way, and through this reaction humans can engage the robot in more naturally social ways. Leonardo's programming is blended with psychological theory so that it learns, interacts, and collaborates with humans more naturally.

===Learning===

Leonardo learns through spatial scaffolding. One of the ways a teacher teaches is by positioning the objects the student is expected to use near the student. This technique, spatial scaffolding, can be used with Leonardo. In one demonstration, Leonardo is taught to build a sailboat from virtual blocks using only the red and blue blocks: whenever it tries to use a green block, the teacher pulls the "forbidden" color away and moves the red and blue blocks into the robot's space. In this way, Leonardo learns to build the boat using red and blue blocks only.<ref>{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/sociallearning/sociallearning.html |title=Leonardo Project Social Learning Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref>
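
The group's learning code is unpublished; the following toy Python sketch, with an invented scoring scheme, illustrates how pulling a color away could act as a strong "forbidden" signal while moving blocks closer reinforces them:

<syntaxhighlight lang="python">
# Hypothetical sketch of the spatial-scaffolding cue: block colors the
# teacher moves into the robot's workspace are reinforced, colors pulled
# away are penalized. The scoring scheme is invented for illustration.

from collections import Counter


class ScaffoldLearner:
    def __init__(self):
        self.scores = Counter()

    def observe(self, color, moved_into_workspace):
        # A pull-away is a strong "forbidden" signal, so weight it more.
        self.scores[color] += 1 if moved_into_workspace else -2

    def allowed(self, color):
        return self.scores[color] > 0


learner = ScaffoldLearner()
learner.observe("red", True)     # teacher slides red blocks closer
learner.observe("blue", True)    # ...and blue blocks
learner.observe("green", False)  # teacher pulls the green block away
print(learner.allowed("red"), learner.allowed("green"))  # True False
</syntaxhighlight>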

Leonardo can also track what a human is looking at, which allows the robot to interact with both a human and objects in the environment. Humans naturally follow a pointing gesture or gaze and understand that whatever is being pointed at or looked at is the object the other person is concerned with and about to discuss or act on. The Personal Robots Group has used Leonardo's tracking ability to program the robot to act in a similarly human-like way, bringing its gaze to an object the human is paying attention to. Matching the human's gaze is one way Leonardo exhibits more natural behavior.<ref>{{cite conference |author1=Andrew Brooks |author2=Cynthia Breazeal |year=2006 |title=Working with Robots and Objects: Revisiting Deictic Reference for Achieving Spatial Common Ground |conference=Human Robot Interaction |location=Salt Lake City}}</ref> Sharing attention like this is one of the things that allows the robot to learn from a human. The robot's expressions, which let it give feedback on its "understanding", are also vital.
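
As an illustration of the geometry involved in gaze following, the minimal Python sketch below picks the known object lying closest to a human's line of gaze; the angular tolerance and object names are invented:

<syntaxhighlight lang="python">
# Hypothetical sketch of gaze following in the plane: pick the known
# object lying closest to the human's line of gaze. Geometry, tolerance,
# and object names are invented for illustration.

import math


def gaze_target(head_pos, gaze_dir, objects, max_angle_deg=10.0):
    """Return the name of the object nearest the gaze ray, or None."""
    best, best_angle = None, math.radians(max_angle_deg)
    gaze_heading = math.atan2(gaze_dir[1], gaze_dir[0])
    for name, (x, y) in objects.items():
        heading = math.atan2(y - head_pos[1], x - head_pos[0])
        # Wrap the difference into [-pi, pi] before comparing magnitudes.
        diff = (heading - gaze_heading + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) < best_angle:
            best, best_angle = name, abs(diff)
    return best


objects = {"ball": (2.0, 0.1), "cup": (0.0, 2.0)}
print(gaze_target((0, 0), (1, 0), objects))  # "ball"
</syntaxhighlight>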

Another way Leonardo learns is by mimicry, the same way infants learn to understand and manipulate their world. By mimicking human facial expressions and body movement, Leonardo can distinguish between self and other. This ability is as important for a social robot as it is for humans in taking each other's perspectives. Understanding that "others" do not share the knowledge it has lets the robot view its environment more accurately and make better decisions, based on its programming, about what to do in a given situation. It also allows the robot to distinguish between a human's intentions and their actual actions, since human actions are not exact. Together, these abilities would allow a human without special training to teach the robot.<ref>{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/sociallearning/sociallearning.html |title=Leonardo Project Social Learning Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref>
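
One way to read intention through imprecise action is to snap an observed end position to the nearest known goal. The Python sketch below illustrates this idea; it is an invented toy, not the project's actual method:

<syntaxhighlight lang="python">
# Hypothetical sketch of reading intention through imprecise action: a
# demonstrated end position is snapped to the nearest known goal, so a
# sloppy placement still reveals what the teacher meant. Names invented.

def infer_intended_goal(observed_pos, goals):
    """goals maps goal names to ideal (x, y) positions."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    return min(goals, key=lambda name: dist(observed_pos, goals[name]))


goals = {"stack_area": (1.0, 1.0), "discard_bin": (4.0, 0.0)}
# The teacher drops a block near, but not exactly on, the stack area.
print(infer_intended_goal((1.2, 0.8), goals))  # "stack_area"
</syntaxhighlight>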

In addition to being trained by a human, Leonardo can explore on its own, which saves time and is a key factor in the success of a personal robot. Such a robot must be able to learn quickly, using the mechanisms humans already use (such as spatial scaffolding, shared attention, mimicry, and perspective taking), without requiring an extensive amount of time. Finally, it should be a pleasure to interact with, which is why aesthetics and expression are so important. These are all important steps toward bringing the robot into a home.

===Interacting===

Shared attention and perspective taking are two mechanisms Leonardo has access to that help it interact naturally with humans. Leonardo can also achieve something like [[empathy]] by examining the data it gets from mimicking human facial expressions, body language, and speech. Humans can understand what other humans might be feeling from the same kind of data; Leonardo has been programmed according to the rules of simulation theory, allowing it to render something like empathy.<ref>{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/socialcog/socialcog.html |title=Leonardo Project Social Cognition Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref> In these ways, social interaction with Leonardo seems more human-like, making it more likely that humans will be able to work with the robot in a team.
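
A minimal Python sketch of the simulation-theory idea follows: an observed expression is classified by finding which of the robot's own expressions it most resembles. The feature vectors here are invented for illustration:

<syntaxhighlight lang="python">
# Hypothetical sketch of the simulation-theory idea: classify an observed
# expression by finding which of the robot's own expressions it most
# resembles, then attribute the matching emotion. Feature vectors are
# invented; the real system works from tracked facial features.

OWN_EXPRESSIONS = {
    # (mouth_curve, brow_raise), both normalized to [-1, 1]
    "happy":     (0.8, 0.1),
    "surprised": (0.2, 0.9),
    "sad":       (-0.6, -0.2),
}


def infer_emotion(observed):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(OWN_EXPRESSIONS, key=lambda e: dist(observed, OWN_EXPRESSIONS[e]))


print(infer_emotion((0.7, 0.0)))  # "happy"
</syntaxhighlight>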

===Collaborating===

Leonardo can work together with a human to solve a common problem, as much as its body allows. It is more effective at working shoulder-to-shoulder with a human because of the theory of mind work blended into its programming. In one task, one human wants cookies and another wants crackers, each stored in one of two locked locations, and one of the humans has switched the locations without the other's knowledge. Leonardo can watch the first human trying to get to where he thinks the cookies are, recognize his goal, and open the box that actually contains the cookies, helping him achieve it. All of Leonardo's social skills work together so it can work alongside humans. When a human asks it to do a task, it can indicate what it knows or does not know and what it can and cannot do. By communicating through expression and gesture, and by perceiving expression, gesture, and speech, the robot is able to work as part of a team.<ref>{{cite web |url=http://robotic.media.mit.edu/projects/robots/leonardo/teamwork/teamwork.html |title=Project Leonardo Teamwork Page |publisher=MIT Media Lab Personal Robots Group |accessdate=2012-02-27}}</ref>
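
The teamwork code is unpublished; the toy Python sketch below, with invented box and item names, illustrates the belief tracking such a task requires: the robot records what each person last saw, so a person's belief can diverge from the true state after an unwitnessed swap.

<syntaxhighlight lang="python">
# Hypothetical sketch of the locked-box task: the robot tracks what each
# person last saw, so a person's belief can diverge from the true state
# after a swap they did not witness. Box and item names are invented.

true_contents = {"box_a": "crackers", "box_b": "cookies"}  # after the swap

# Human 1 left before the swap, so their belief reflects the old layout.
beliefs = {"human_1": {"box_a": "cookies", "box_b": "crackers"}}


def box_holding(item, contents):
    """Return the box that (really or believedly) holds the item."""
    return next(box for box, held in contents.items() if held == item)


def help_human(human, wanted_item):
    believed = box_holding(wanted_item, beliefs[human])
    actual = box_holding(wanted_item, true_contents)
    if believed != actual:
        # The human will search the wrong box; open the right one for them.
        return "open " + actual
    return "open " + believed


print(help_human("human_1", "cookies"))  # "open box_b"
</syntaxhighlight>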

==Contributors==

* Professor [[Cynthia Breazeal]]
* [[Stan Winston]]
* Lindsay MacGowan (Artistic Lead)
* Richard Landon (Technical Lead)
* The Stan Winston Studios Team
** Jon Dawe
** Trevor Hensley
** Matt Heimlich
** Al Sousa
** Kathy Macgowan
** Michael Ornealez
** Amy Whetsel
** Joe Reader
** Grady Holder
** Rob Ramsdell
** John Cherevka
** Rodrick Khachatoorian
** Kurt Herbel
** Rich Haugen
** Keith Marbory
** Annabelle Troukins
* Fardad Faridi (Animator)
* Graduate Students
** Matt Berlin
** Andrew “Zoz” Brooks
** Jesse Gray
** Guy Hoffman
** [[Jeff Lieberman (roboticist) |Jeff Lieberman]]
** Andrea Lockerd Thomaz
** Dan Stiehl
* Alumni
** Matt Hancher
** Hans Lee


==See also==
{{Portal|Robotics}}
* [[Kismet (robot)]]
* [[Robonaut]]
* [[Social robot]]
* [[Robotics]]

==References==
{{reflist}}

==Further reading==
* [http://robotic.media.mit.edu/pdfs/conferences/StiehlBreazeal-IROS-06.pdf A “Sensitive Skin” for Robotic Companions Featuring Temperature, Force, and Electric Field Sensors]
* [http://robotic.media.mit.edu/pdfs/conferences/Stiehl_Icra04.pdf A “Somatic Alphabet” Approach to “Sensitive Skin”]
* [http://www.cc.gatech.edu/~athomaz/papers/0145.pdf Action parsing and goal inference using self as simulator]
* [http://web.media.mit.edu/~guy/publications/Gray2009IJRR.pdf An Embodied Cognition Approach to Mindreading Skills for Socially Intelligent Robots]
* [http://robotic.media.mit.edu/pdfs/conferences/Lockerd_etal_RoMan-05.pdf An Embodied Computational Model of Social Referencing]
* [http://web.media.mit.edu/~wdstiehl/Publications/Stiehl_Iros04.pdf Applying a “Somatic Alphabet” Approach to Inferring Orientation, Motion, and Direction in Clusters of Force Sensing Resistors]
* [http://web.media.mit.edu/~guy/publications/HoffmanAIAA04.pdf Collaboration in Human-Robot Teams]
* [http://www.ai.rug.nl/~gert/download/articles/breazeal04imitation.pdf Learning From and About Others: Towards Using Imitation to Bootstrap the Social Understanding of Others by Robots]
* [http://www.cc.gatech.edu/~athomaz/papers/BreazealThomaz-ICRA08-final.pdf Learning from Human Teachers with Socially Guided Exploration]
* [http://web.media.mit.edu/~alockerd/papers/PerspectiveTaking-AAAI06.pdf Perspective Taking: An Organizing Principle for Learning in Human-Robot Interaction]
* [http://robotic.media.mit.edu/pdfs/conferences/ThomazBreazeal-ICDL-07.pdf Robot Learning via Socially Guided Exploration]
* [http://www.androidscience.com/proceedings2005/ThomazCogSci2005AS.pdf Robot Science Meets Social Science: An Embodied Model of Social Referencing]
* [http://robotic.media.mit.edu/pdfs/journals/Brooks-etal-ACE-04.pdf Robot’s Play: Interactive Games With Sociable Machines]
* [https://www.aaai.org/Papers/AAAI/2008/AAAI08-201.pdf Spatial Scaffolding for Sociable Robot Learning]
* [http://jmvidal.cse.sc.edu/library/AAMAS-04/proceedings/125_LockerdA_Tasks.pdf Teaching and Working with Robots as a Collaboration]
* [http://cs.ou.edu/~fagg/classes/embedded_systems_2009/papers/SmithBreazeal.pdf The dynamic lift of developmental process]
* [http://robotic.media.mit.edu/pdfs/conferences/Lockerd-Iros04.pdf Tutelage and Socially Guided Robot Learning]
* [http://robotic.media.mit.edu/pdfs/theses/mattb-phd.pdf Understanding the Embodied Teacher: Nonverbal Cues for Sociable Robot Learning]
* [http://www.cc.gatech.edu/~athomaz/papers/Breazeal-Humanoids04.pdf Working Collaboratively with Humanoid Robots]

==External links==
* [http://robotic.media.mit.edu/projects/robots/leonardo/overview/overview.html Personal Robots Group] (Leonardo home page)
* [http://www.ted.com/talks/cynthia_breazeal_the_rise_of_personal_robots.html TED Talk: Cynthia Breazeal]
* [http://www.nytimes.com/2007/07/29/magazine/29robots-t.html?pagewanted=all The Real Transformers] (New York Times article)


[[Category:Biomorphic robots]]

{{robot-stub}}
