Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques that allow a robot to acquire novel skills or adapt to its environment through learning algorithms. The robot's embodiment, situated in a physical environment, presents both specific difficulties (e.g. high dimensionality, real-time constraints on collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives).
Examples of skills targeted by learning algorithms include sensorimotor skills such as locomotion, grasping, and active object categorization, as well as interactive skills such as the joint manipulation of an object with a human peer, and linguistic skills such as the grounded and situated meaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, as in robot learning by imitation.
Robot learning is closely related to adaptive control, reinforcement learning, and developmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. While machine learning is frequently used by computer vision algorithms employed in the context of robotics, such applications are usually not referred to as "robot learning".
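The reinforcement-learning approach mentioned above can be illustrated with tabular Q-learning on a toy task. The one-dimensional gridworld, reward function, and hyperparameters below are illustrative assumptions for the sketch, not drawn from any particular robot system.

```python
import random

# Toy "robot" task: the agent starts at cell 0 and must learn,
# by trial and error, to reach the goal at cell 4.
# All names and parameters here are illustrative placeholders.

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics: bounded move, reward 1.0 on reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy action selection with random tie-breaking.
            if random.random() < EPS:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: (q[(s, a)], random.random()))
            nxt, r = step(s, a)
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            # Standard Q-learning update toward the bootstrapped target.
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# After training, the greedy policy moves right (+1) from every non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

Real robot-learning systems face the same loop with continuous, high-dimensional states and costly real-world samples, which is why the field leans on function approximation and the priors mentioned above.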
Maya Cakmak, assistant professor of computer science and engineering at the University of Washington, is trying to create a robot that learns by imitation, a technique called "programming by demonstration". A researcher demonstrates a cleaning technique within view of the robot's vision system, and the robot generalizes the cleaning motion from the human demonstration while also identifying the "state of dirt" before and after cleaning.
Similarly, the Baxter industrial robot can be taught how to do something by grabbing its arm and guiding it through the desired movements. It can also use deep learning to teach itself to grasp an unknown object.
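In its simplest form, programming by demonstration amounts to recording state–action pairs from a human teacher and generalizing them to unseen states. The sketch below uses nearest-neighbor imitation on made-up wiping data; the state encoding, action names, and helper functions are hypothetical, not Cakmak's actual system.

```python
# Minimal "programming by demonstration" sketch: record (state, action)
# pairs from a human demonstration, then generalize to new states by
# imitating the nearest demonstrated state. All data below is made up.

def record_demonstration():
    # Hypothetical wiping demonstration: state = (position, dirt_level),
    # action = the wiping behavior the human teacher chose there.
    return [
        ((0.0, 0.9), "wipe_hard"),
        ((0.2, 0.5), "wipe_soft"),
        ((0.4, 0.1), "skip"),
        ((0.6, 0.8), "wipe_hard"),
    ]

def nearest_neighbor_policy(demos):
    """Return a policy that copies the action of the closest demo state."""
    def policy(state):
        def sq_dist(pair):
            (px, pd), _ = pair
            return (px - state[0]) ** 2 + (pd - state[1]) ** 2
        _, action = min(demos, key=sq_dist)
        return action
    return policy

policy = nearest_neighbor_policy(record_demonstration())
# A new, unseen dirty spot is handled by imitating the nearest demonstration.
```

Practical systems replace the lookup with learned models (e.g. deep networks or motion primitives) so the behavior interpolates smoothly between demonstrations, but the record-then-generalize structure is the same.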
Sharing learned skills and knowledge
RoboBrain is a knowledge engine for robots that can be freely accessed by any device wishing to carry out a task. The database gathers new information about tasks as robots perform them, as well as by searching the Internet, interpreting natural-language text, images, and videos, and through object recognition and interaction. The project is led by Ashutosh Saxena at Stanford University.
RoboEarth is a project that has been described as a "World Wide Web for robots": it is a network and database repository where robots can share information and learn from each other, as well as a cloud for outsourcing heavy computation tasks. The project brings together researchers from five major universities in Germany, the Netherlands, and Spain, and is backed by the European Union.
- IEEE RAS Technical Committee on Robot Learning (official IEEE website)
- IEEE RAS Technical Committee on Robot Learning (TC members website)
- Robot Learning at the Max Planck Institute for Intelligent Systems and the Technical University Darmstadt
- Robot Learning at the Computational Learning and Motor Control lab
- Humanoid Robot Learning at the Advanced Telecommunication Research Center (ATR) (in English and Japanese)
- Learning Algorithms and Systems Laboratory at EPFL (LASA)
- Robot Learning at the Cognitive Robotics Lab of Juergen Schmidhuber at IDSIA and Technical University of Munich
- The Humanoid Project: Peter Nordin, Chalmers University of Technology
- Inria and Ensta ParisTech FLOWERS team, France: Autonomous lifelong learning in developmental robotics
- CITEC at University of Bielefeld, Germany
- Asada Laboratory, Department of Adaptive Machine Systems, Graduate School of Engineering, Osaka University, Japan
- The Laboratory for Perceptual Robotics, University of Massachusetts Amherst, USA
- Centre for Robotics and Neural Systems, Plymouth University, United Kingdom
- Robot Learning Lab at Carnegie Mellon University
- Project Learning Humanoid Robots at University of Bonn
- Skilligent Robot Learning and Behavior Coordination System (commercial product)
- Robot Learning class at Cornell University
- Robot Learning and Interaction Lab at Italian Institute of Technology
- Reinforcement learning for robotics at Delft University of Technology