PAC-learnability of determinate logic programs. In COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
Elements of Machine Learning.
Learning Action Models for Reactive Autonomous Agents.
Reinforcement Learning with Hierarchies of Machines. In NIPS '97: Proceedings of the 1997 Conference on Advances in Neural Information Processing Systems 10.
Between MDPs and Semi-MDPs: A Framework for Temporal Abstraction in Reinforcement Learning. Artificial Intelligence.
Introduction to Reinforcement Learning.
Discovery as Autonomous Learning from the Environment. Machine Learning.
The MAXQ Method for Hierarchical Reinforcement Learning. In ICML '98: Proceedings of the Fifteenth International Conference on Machine Learning.
RL-TOPs: An Architecture for Modularity and Re-Use in Reinforcement Learning. In ICML '98: Proceedings of the Fifteenth International Conference on Machine Learning.
Learning to Fly: An Application of Hierarchical Reinforcement Learning. In ICML '00: Proceedings of the Seventeenth International Conference on Machine Learning.
LIME: A System for Learning Relations. In ALT '98: Proceedings of the 9th International Conference on Algorithmic Learning Theory.
Relational Reinforcement Learning. In ILP '98: Proceedings of the 8th International Workshop on Inductive Logic Programming.
Teleo-reactive Programs for Agent Control. Journal of Artificial Intelligence Research.
Machine Learning and Inductive Logic Programming for Multi-agent Systems. In EASSS '01: Selected Tutorial Papers from the 9th ECCAI Advanced Course ACAI 2001 and AgentLink's 3rd European Agent Systems Summer School on Multi-Agent Systems and Applications.
Hierarchical reinforcement learning has been proposed as a solution to the problem of scaling up reinforcement learning. The RL-TOPs hierarchical reinforcement learning system is an implementation of this proposal which structures an agent's sensors and actions into multiple levels of representation and control. Disparity between these levels of representation means that actions can be misapplied by the system's planning algorithm. This paper reports on how ILP was used to bridge these representation gaps and shows empirically how doing so improved the system's performance. Also discussed are some of the problems encountered when using an ILP system in what is inherently a noisy and incremental domain.
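To make the representation-gap problem concrete, the following is a minimal hypothetical sketch (all behaviour names, predicates, and the planner are invented for illustration, not taken from the paper). Each behaviour advertises pre- and post-conditions over coarse, high-level predicates, and a naive planner backward-chains over them; a gap arises when a behaviour's real applicability depends on lower-level state invisible at the level the planner reasons over, which is the kind of missing precondition ILP could be used to learn.

```python
# Hypothetical sketch of planning over a two-level behaviour hierarchy.
# Names (Behaviour, backward_plan, the predicates) are invented for
# illustration; this is not the RL-TOPs implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Behaviour:
    name: str
    pre: frozenset   # coarse predicates required before execution
    post: frozenset  # coarse predicates advertised on success


def backward_plan(goal, start, behaviours):
    """Naively backward-chain over advertised post-conditions."""
    needed, steps = set(goal) - set(start), []
    while needed:
        b = next((b for b in behaviours if b.post & needed), None)
        if b is None:
            return None  # no behaviour advertises a needed predicate
        steps.append(b.name)
        needed = (needed - b.post) | (set(b.pre) - set(start))
    return list(reversed(steps))


# 'take_off' advertises at_altitude given only on_runway, but its real
# success may also depend on low-level state (e.g. sufficient airspeed)
# that this coarse representation cannot express -- the gap in question.
behaviours = [
    Behaviour("take_off", frozenset({"on_runway"}), frozenset({"at_altitude"})),
    Behaviour("taxi", frozenset({"engine_on"}), frozenset({"on_runway"})),
]
print(backward_plan({"at_altitude"}, {"engine_on"}, behaviours))
```

Because the planner trusts the advertised conditions, it will happily schedule `take_off` in states where the hidden low-level requirements fail; learning a richer precondition from execution traces is the role the abstract assigns to ILP.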