An option is a policy fragment that represents a solution to a subproblem encountered frequently in a domain. Treating options as temporally extended actions allows that solution to be reused when solving larger problems. In practice, however, it is rare to find subproblems that are exactly the same, and these differences, however small, must be accounted for in the reused policy. In this paper, we introduce the notion of options with exceptions to address such scenarios, inspired by the Ripple Down Rules approach used in the data mining and knowledge representation communities. The goal is to develop an option representation in which small changes to a subproblem's solution can be accommodated without losing the original solution. We empirically validate the proposed framework on a simulated game domain.
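To make the idea concrete, the following is a minimal sketch (with assumed names; not the paper's actual implementation) of an option augmented with ripple-down-rules-style exceptions: a default sub-policy plus an ordered chain of condition/action overrides, so that a small deviation in a new subproblem can be patched in without disturbing the original solution.

```python
class OptionWithExceptions:
    """Hypothetical option representation: a base policy with RDR-style patches."""

    def __init__(self, default_policy):
        self.default_policy = default_policy  # base option policy: state -> action
        self.exceptions = []                  # ordered (condition, action) overrides

    def add_exception(self, condition, action):
        """Patch the option for states where the base policy is wrong,
        leaving the base policy itself untouched."""
        self.exceptions.append((condition, action))

    def act(self, state):
        # As in ripple-down rules, later (more specific) exceptions refine
        # earlier ones: the most recently added matching exception wins;
        # otherwise we fall back to the original default policy.
        for condition, action in reversed(self.exceptions):
            if condition(state):
                return action
        return self.default_policy(state)


# Illustrative use: the base policy always moves right; a new subproblem
# has a blocked cell, handled by a single exception rather than a new option.
base = lambda state: "right"
option = OptionWithExceptions(base)
option.add_exception(lambda s: s.get("blocked"), "up")

print(option.act({"blocked": False}))  # right (default policy preserved)
print(option.act({"blocked": True}))   # up (exception applies)
```

The design point this sketch illustrates is that exceptions are additive: the original solution is never edited, so it remains intact for the subproblems it already solves.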