Robot learning

From Wikipedia, the free encyclopedia

Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques that allow a robot to acquire novel skills or adapt to its environment through learning algorithms. The robot's embodiment in a physical environment creates specific difficulties (e.g. high dimensionality, real-time constraints on collecting data and learning) but also opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives).

Examples of skills targeted by learning algorithms include sensorimotor skills such as locomotion, grasping, and active object categorization; interactive skills such as jointly manipulating an object with a human peer; and linguistic skills such as the grounded and situated meaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, as in robot learning by imitation.

Robot learning is closely related to adaptive control, reinforcement learning, and developmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. While machine learning is frequently used by computer vision algorithms employed in the context of robotics, such applications are usually not referred to as "robot learning".


Projects

Maya Cakmak, assistant professor of computer science and engineering at the University of Washington, is trying to create a robot that learns by imitation, a technique called "programming by demonstration". A researcher performs a cleaning technique in view of the robot's vision system, and the robot generalizes the cleaning motion from the human demonstration while also identifying the "state of dirt" before and after cleaning.[1]
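One common way to generalize a motion from human demonstrations is to time-normalize several demonstrated trajectories and combine them into a single reference motion. The sketch below illustrates this idea in miniature; the data and function names are invented for the example and do not reflect Cakmak's actual system.

```python
# Minimal sketch of "programming by demonstration": several human
# demonstrations of a wiping motion are resampled to a common length
# and averaged into one generalized trajectory. All data here are
# illustrative, not from a real robot.

def resample(traj, n):
    """Linearly resample a list of (x, y) waypoints to n points."""
    out = []
    for i in range(n):
        t = i * (len(traj) - 1) / (n - 1)   # fractional index into traj
        lo = int(t)
        hi = min(lo + 1, len(traj) - 1)
        a = t - lo                           # interpolation weight
        x = (1 - a) * traj[lo][0] + a * traj[hi][0]
        y = (1 - a) * traj[lo][1] + a * traj[hi][1]
        out.append((x, y))
    return out

def generalize(demos, n=5):
    """Average several demonstrations into one reference trajectory."""
    resampled = [resample(d, n) for d in demos]
    return [
        (sum(d[i][0] for d in resampled) / len(demos),
         sum(d[i][1] for d in resampled) / len(demos))
        for i in range(n)
    ]

# Two noisy demonstrations of the same straight wiping stroke.
demos = [
    [(0.0, 0.0), (0.5, 0.1), (1.0, 0.0)],
    [(0.0, 0.0), (0.5, -0.1), (1.0, 0.0)],
]
reference = generalize(demos, n=3)   # noise in the two demos cancels out
```

Averaging is only the simplest possible generalization; research systems typically fit richer models (e.g. movement primitives) so the learned motion can adapt to new start and goal positions.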

Similarly, the Baxter industrial robot can be taught how to do something by grabbing its arm and guiding it through the desired movements.[2] It can also use deep learning to teach itself to grasp an unknown object.[3][4]
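The core loop of such self-taught grasping is trial and error: attempt a grasp, record whether it succeeded, and steer future attempts toward what worked. The toy example below stands in for the actual deep-learning pipeline with a simple epsilon-greedy choice over grasp angles; the simulated success probabilities are invented for illustration.

```python
# Toy sketch of self-supervised grasp learning: the robot tries grasps,
# records success or failure per grasp angle, and gradually prefers
# angles that worked. This is a stand-in for the real deep-learning
# system, which learns from camera images rather than a fixed angle set.
import random

random.seed(0)

ANGLES = [0, 45, 90, 135]           # candidate grasp orientations (degrees)
counts = {a: 0 for a in ANGLES}     # attempts per angle
wins = {a: 0 for a in ANGLES}       # successful grasps per angle

def simulate_grasp(angle):
    """Hypothetical world: grasps near 90 degrees succeed most often."""
    p_success = 0.9 if angle == 90 else 0.2
    return random.random() < p_success

def success_rate(a):
    return wins[a] / counts[a] if counts[a] else 0.0

def choose_angle(eps=0.2):
    """Epsilon-greedy: usually exploit the best angle, sometimes explore."""
    if random.random() < eps or not any(counts.values()):
        return random.choice(ANGLES)
    return max(ANGLES, key=success_rate)

for _ in range(500):                # self-supervised trial-and-error loop
    a = choose_angle()
    counts[a] += 1
    wins[a] += simulate_grasp(a)    # True counts as 1, False as 0

best = max(ANGLES, key=success_rate)
```

After a few hundred trials the empirical success rates separate clearly, and the robot converges on the most reliable grasp without any human labeling, which is the essence of the "10 days to teach itself to grasp" result.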

Sharing learned skills and knowledge

Further information: Cloud robotics

In Tellex's "Million Object Challenge", the goal is robots that learn how to spot and handle simple items and that upload their data to the cloud, allowing other robots to analyze and use the information.[4]

RoboBrain is a knowledge engine for robots that can be freely accessed by any device wishing to carry out a task. The database gathers new information about tasks as robots perform them: by searching the Internet, by interpreting natural-language text, images, and videos, and through object recognition and interaction. The project is led by Ashutosh Saxena at Stanford University.[5][6]

RoboEarth is a project that has been described as a "World Wide Web for robots": a network and database repository where robots can share information and learn from each other, as well as a cloud for outsourcing heavy computation tasks. The project brings together researchers from five major universities in Germany, the Netherlands, and Spain, and is backed by the European Union.[7][8][9][10][11]

Google Research, DeepMind, and Google X have decided to allow their robots to share their experiences.[12][13][14]
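The common pattern behind these projects is a shared store of experience: each robot logs the outcome of its attempts, and any robot can query the pooled records before acting. The sketch below uses an in-memory dictionary as the "cloud"; a real system like RoboEarth or Google's cloud robotics work would use a networked database, and the object and strategy names here are made up for the example.

```python
# Illustrative sketch of fleet experience sharing: robots log
# object-handling outcomes to a shared store, so a robot that has
# never seen an object can reuse what the fleet already learned.

shared_store = {}   # object name -> list of (grasp_strategy, success) records

def upload(obj, strategy, success):
    """A robot reports the outcome of one attempt to the shared store."""
    shared_store.setdefault(obj, []).append((strategy, success))

def best_known_strategy(obj):
    """Pick the strategy with the highest shared success rate, if any."""
    records = shared_store.get(obj)
    if not records:
        return None                      # no fleet experience yet
    stats = {}                           # strategy -> (attempts, successes)
    for strategy, ok in records:
        n, w = stats.get(strategy, (0, 0))
        stats[strategy] = (n + 1, w + ok)
    return max(stats, key=lambda s: stats[s][1] / stats[s][0])

# Robot A experiments with a mug and uploads its results.
upload("mug", "handle_grasp", True)
upload("mug", "handle_grasp", True)
upload("mug", "rim_grasp", False)

# Robot B has never handled a mug but can query the fleet's experience.
plan = best_known_strategy("mug")        # -> "handle_grasp"
```

Because every robot both contributes to and reads from the same store, the fleet's effective training data grows with the number of robots, which is the motivation given for pooling experience across machines.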

References

  1. Rosenblum, Andrew. "The robot you want most is far from reality". MIT Technology Review. Retrieved 4 January 2017.
  2. "Hands-on with Baxter, the factory robot of the future". Ars Technica. Retrieved 4 January 2017.
  3. "Deep-Learning Robot Takes 10 Days to Teach Itself to Grasp". MIT Technology Review. Retrieved 4 January 2017.
  4. Schaffer, Amanda. "10 Breakthrough Technologies 2016: Robots That Teach Each Other". MIT Technology Review. Retrieved 4 January 2017.
  5. "RoboBrain: The World's First Knowledge Engine For Robots". MIT Technology Review. Retrieved 4 January 2017.
  6. Hernandez, Daniela. "The Plan to Build a Massive Online Brain for All the World's Robots". WIRED. Retrieved 4 January 2017.
  7. "Europe launches RoboEarth: 'Wikipedia for robots'". USA TODAY. Retrieved 4 January 2017.
  8. "European researchers have created a hive mind for robots and it's being demoed this week". Engadget. Retrieved 4 January 2017.
  9. "Robots test their own world wide web, dubbed RoboEarth". BBC News. 14 January 2014. Retrieved 4 January 2017.
  10. "'Wikipedia for robots': Because bots need an Internet too". CNET. Retrieved 4 January 2017.
  11. "New Worldwide Network Lets Robots Ask Each Other Questions When They Get Confused". Popular Science. Retrieved 4 January 2017.
  12. "Google Tasks Robots with Learning Skills from One Another via Cloud Robotics". allaboutcircuits.com. Retrieved 4 January 2017.
  13. Tung, Liam. "Google's next big step for AI: Getting robots to teach each other new skills". ZDNet. Retrieved 4 January 2017.
  14. "How Robots Can Acquire New Skills from Their Shared Experience". Google Research Blog. Retrieved 4 January 2017.
