Learning without any prior knowledge in environments that contain large or continuous state spaces is a daunting task. For robots that operate in the real world, learning must also occur in a reasonable amount of time. Providing a robot with domain knowledge, together with the ability to learn from watching others, can greatly increase its learning rate. This research explores learning algorithms that learn quickly and make the best use of information obtained from observing others. Domain knowledge is encoded in the form of primitives: small parts of a task that are executed many times while the task is being performed. This thesis explores and presents the difficulties involved in getting robots to learn and adapt in environments that humans operate in. Virtual and real-world environments for air hockey and the marble maze game have been created as test-beds for this research. A humanoid robot has been programmed to operate in the air hockey environment, and a Labyrinth game has been equipped with motors and sensors that allow it to be controlled by a computer. A “Learning from Observation Using Primitives” framework has been created. This framework provides the means to observe primitives as they are performed by others; the robot then uses this information in a three-level process as it performs in the environment. Our initial research has shown that using only observed information leads to varying degrees of success. Therefore, the robot must also have the ability to learn from practice while operating in the environment. Our framework provides a means for the robot to observe and evaluate its own actions as it performs in the environment, and this information can then be used to improve how primitives are selected and performed.
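The learning loop described above (bias primitive selection using observed demonstrations, then refine it through self-evaluated practice) can be sketched as follows. This is a minimal illustration only; the class and method names, the scalar scoring scheme, and the update rule are all hypothetical and are not the framework's actual implementation:

```python
class PrimitiveLearner:
    """Hypothetical sketch: primitives carry a running performance score;
    observation of a demonstrator biases the scores, and practice with
    self-evaluated rewards refines them."""

    def __init__(self, primitives):
        # Start every primitive at a neutral score.
        self.scores = {p: 0.5 for p in primitives}

    def observe(self, demonstrated_primitive):
        # Seeing a teacher use a primitive nudges its score upward.
        self.scores[demonstrated_primitive] += 0.1

    def select(self):
        # Greedily pick the primitive with the highest current score.
        return max(self.scores, key=self.scores.get)

    def practice(self, primitive, reward, lr=0.2):
        # Self-evaluation after execution: move the score toward the
        # reward the robot measured for its own action.
        self.scores[primitive] += lr * (reward - self.scores[primitive])
```

In this sketch, observation alone fixes the initial ranking, while repeated practice with poor rewards can overturn it, mirroring the thesis's point that observed information by itself yields varying degrees of success and must be supplemented by learning from practice.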