Decision Tree Induction Based on Efficient Tree Restructuring
Machine Learning
Planning and acting in partially observable stochastic domains
Artificial Intelligence
Relational reinforcement learning
Machine Learning - Special issue on inductive logic programming
Introduction to Reinforcement Learning
A Study of Two Sampling Methods for Analyzing Large Datasets with ILP
Data Mining and Knowledge Discovery
Linkage and Autocorrelation Cause Feature Selection Bias in Relational Learning
ICML '02 Proceedings of the Nineteenth International Conference on Machine Learning
Reinforcement learning with selective perception and hidden state
Exploiting relational structure to understand publication patterns in high-energy physics
ACM SIGKDD Explorations Newsletter
The thing that we tried didn't work very well: deictic representation in reinforcement learning
UAI'02 Proceedings of the Eighteenth conference on Uncertainty in artificial intelligence
Transfer Learning in Reinforcement Learning Problems Through Partial Policy Recycling
ECML '07 Proceedings of the 18th European conference on Machine Learning
Active learning of relational action models
ILP'11 Proceedings of the 21st international conference on Inductive Logic Programming
We introduce an approach to autonomously creating state space abstractions for an online reinforcement learning agent using a relational representation. Our approach uses a tree-based function approximation derived from McCallum's [1995] UTree algorithm. We have extended this approach to use a relational representation where relational observations are represented by attributed graphs [McGovern et al., 2003]. We address the challenges introduced by a relational representation by using stochastic sampling to manage the search space [Srinivasan, 1999] and temporal sampling to manage autocorrelation [Jensen and Neville, 2002]. Relational UTree incorporates Iterative Tree Induction [Utgoff et al., 1997] to allow it to adapt to changing environments. We empirically demonstrate that Relational UTree performs better than similar relational learning methods [Finney et al., 2002; Driessens et al., 2001] in a blocks world domain. We also demonstrate that Relational UTree can learn to play a sub-task of the game of Go called Tsume-Go [Ramon et al., 2001].
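The core idea the abstract describes — a UTree-style tree that grows state distinctions only when an observation attribute predicts a difference in return — can be illustrated with a minimal sketch. This is not the paper's Relational UTree: McCallum's UTree splits using a Kolmogorov-Smirnov test on future discounted returns over relational, attributed-graph observations, whereas this toy uses flat attribute dictionaries and a simple mean-return gap as the split criterion. All names (`Instance`, `UTreeSketch`, `split_threshold`) are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    obs: dict    # attribute -> observed value (stand-in for a relational observation)
    ret: float   # sampled discounted return associated with this experience

@dataclass
class Leaf:
    instances: list = field(default_factory=list)

class UTreeSketch:
    """One-level UTree-style abstraction: a single root leaf that splits on
    the first attribute whose values separate returns by `split_threshold`.
    (UTree proper recurses and uses a KS test; this is a teaching sketch.)"""

    def __init__(self, split_threshold=1.0):
        self.root = Leaf()
        self.split_threshold = split_threshold
        self.split_attr = None   # attribute chosen at the root, if any
        self.children = None     # attribute value -> Leaf, after a split

    def add(self, inst):
        if self.children is None:
            self.root.instances.append(inst)
            self._maybe_split()
        else:
            # After a split, route experience by the chosen attribute's value.
            key = inst.obs[self.split_attr]
            self.children.setdefault(key, Leaf()).instances.append(inst)

    def _maybe_split(self):
        insts = self.root.instances
        if len(insts) < 4:       # wait for a minimum of experience
            return
        for attr in insts[0].obs:
            groups = {}
            for i in insts:
                groups.setdefault(i.obs[attr], []).append(i.ret)
            if len(groups) < 2:
                continue
            means = [sum(g) / len(g) for g in groups.values()]
            # Split if grouping by this attribute separates mean returns enough.
            if max(means) - min(means) >= self.split_threshold:
                self.split_attr = attr
                self.children = {v: Leaf() for v in groups}
                for i in insts:
                    self.children[i.obs[attr]].instances.append(i)
                self.root.instances = []
                return
```

For example, feeding the tree alternating experiences in which a boolean `clear` attribute perfectly predicts return makes it introduce that distinction and route later experience into the corresponding leaves; attributes that do not affect return never trigger a split, which is the abstraction the UTree family is after.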