Abstraction is a powerful form of domain knowledge that allows reinforcement-learning agents to cope with complex environments, but in most cases a human must supply this knowledge. In the absence of such prior knowledge or a given model, we propose an algorithm for the automatic discovery of state abstraction from policies learned in one domain for use in other domains that have similar structure. To this end, we introduce a novel condition for state abstraction in terms of the relevance of state features to optimal behavior, and we exhibit statistical methods that detect this condition robustly. Finally, we show how to apply temporal abstraction to benefit safely from even partial state abstraction in the presence of generalization error.
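The relevance condition described above can be illustrated with a toy sketch: a feature of the state is a candidate for abstraction if the greedy action of a learned policy is (nearly) unanimous across states that differ only in that feature. The function and interface below are hypothetical, a minimal deterministic stand-in for the paper's more robust statistical test.

```python
from collections import defaultdict

def irrelevant_features(policy, n_features, agreement=0.95):
    """Flag state features that appear irrelevant to optimal behavior.

    policy: dict mapping state tuples -> greedy action (hypothetical format).
    Feature f is flagged if, in nearly all groups of states that differ
    only in f, the greedy action is unanimous. A statistical test on
    action counts would replace this threshold in a robust version.
    """
    irrelevant = []
    for f in range(n_features):
        groups = defaultdict(set)
        for state, action in policy.items():
            # Project out feature f; states sharing this key differ only in f.
            key = state[:f] + state[f + 1:]
            groups[key].add(action)
        unanimous = sum(1 for acts in groups.values() if len(acts) == 1)
        if groups and unanimous / len(groups) >= agreement:
            irrelevant.append(f)
    return irrelevant

# Toy policy over (position, color): the action depends only on position,
# so the color feature (index 1) should be flagged as irrelevant.
pi = {(0, 'r'): 'left', (0, 'b'): 'left',
      (1, 'r'): 'right', (1, 'b'): 'right'}
```

Here `irrelevant_features(pi, 2)` returns `[1]`: projecting out color leaves every group unanimous, while projecting out position does not.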