We describe CST, an online algorithm for constructing skill trees from demonstration trajectories. CST segments a demonstration trajectory into a chain of component skills, where each skill has a goal and is assigned a suitable abstraction from an abstraction library. These properties permit skills to be improved efficiently using a policy learning algorithm. Chains from multiple demonstration trajectories are merged into a skill tree. We show that CST can be used to acquire skills from human demonstration in a dynamic continuous domain, and from both expert demonstration and learned control sequences on the uBot-5 mobile manipulator.
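The merging step above can be illustrated with a minimal sketch. Since every chain ends at the task goal, chains are merged from the goal backward, with shared final skills collapsing into common tree nodes. The skill labels and the merge-by-matching-label criterion here are hypothetical simplifications; CST actually merges segments by comparing the statistical models fit to each one.

```python
# Hypothetical sketch: merge demonstration skill chains into a skill tree.
# A chain is an ordered list of skill labels ending at the task goal.
# We build a trie over reversed chains, so merging proceeds goal-first.

def merge_chains(chains):
    """Merge skill chains (goal last) into a nested-dict tree rooted at the goal."""
    tree = {}
    for chain in chains:
        node = tree
        for skill in reversed(chain):  # walk from the goal backward
            node = node.setdefault(skill, {})
    return tree

tree = merge_chains([
    ["approach_door", "open_door", "reach_goal"],
    ["approach_window", "open_door", "reach_goal"],
])
# The two chains share their final skills, so they merge near the root:
# {"reach_goal": {"open_door": {"approach_door": {}, "approach_window": {}}}}
```

The result is a tree whose root corresponds to the goal and whose branches capture the alternative ways different demonstrations reached it.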