Automatic task decomposition and state abstraction from demonstration
Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Learning from Demonstration (LfD) is a popular technique for building decision-making agents with human assistance. Traditional LfD methods use demonstrations as training examples for supervised learning, but complex tasks can require more examples than is practical to obtain. We present Abstraction from Demonstration (AfD), a novel form of LfD that uses demonstrations to infer a state abstraction, and then applies reinforcement learning (RL) in the abstracted state space to build a policy. Empirical results show that AfD is more than an order of magnitude more sample-efficient than using demonstrations as training examples alone, and exponentially faster than RL without abstraction.
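The first step of the approach described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): it scores each state feature by how strongly it predicts the demonstrated action, here using absolute correlation as a stand-in relevance measure, and keeps only features above a threshold. An RL algorithm would then operate on the retained features only.

```python
import numpy as np

def select_relevant_features(demo_states, demo_actions, threshold=0.5):
    """Infer a state abstraction from demonstration data.

    Hypothetical sketch: rank each state feature by the absolute
    correlation between its values and the demonstrated actions,
    and keep only features whose score exceeds `threshold`.
    Constant features score 0 and are always dropped.
    """
    scores = []
    for j in range(demo_states.shape[1]):
        col = demo_states[:, j]
        if col.std() == 0:
            scores.append(0.0)  # constant feature: carries no information
        else:
            scores.append(abs(np.corrcoef(col, demo_actions)[0, 1]))
    return [j for j, s in enumerate(scores) if s > threshold]

# Toy demonstration set: the action depends only on feature 0;
# feature 1 is uncorrelated noise and feature 2 is constant.
demo_states = np.array([
    [0.9, 0.1, 0.5],
    [0.8, 0.9, 0.5],
    [0.2, 0.2, 0.5],
    [0.1, 0.8, 0.5],
])
demo_actions = np.array([1, 1, 0, 0])

abstract_features = select_relevant_features(demo_states, demo_actions)
print(abstract_features)  # only feature 0 survives the abstraction
```

A subsequent RL stage (e.g., tabular Q-learning) would then treat two states as identical whenever they agree on the retained features, which is what shrinks the effective state space and yields the sample-efficiency gains the abstract reports.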