Relational reinforcement learning
Machine Learning - Special issue on inductive logic programming
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Probabilistic inference for solving discrete and continuous state Markov Decision Processes
ICML '06 Proceedings of the 23rd international conference on Machine learning
Approximate inference for planning in stochastic relational worlds
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Action-space partitioning for planning
AAAI'07 Proceedings of the 22nd national conference on Artificial intelligence - Volume 2
Learning symbolic models of stochastic domains
Journal of Artificial Intelligence Research
Online learning and exploiting relational models in reinforcement learning
IJCAI'07 Proceedings of the 20th international joint conference on Artificial intelligence
Symbolic dynamic programming for first-order MDPs
IJCAI'01 Proceedings of the 17th international joint conference on Artificial intelligence - Volume 1
Learning models of relational MDPs using graph kernels
MICAI'07 Proceedings of the 6th Mexican international conference on Advances in artificial intelligence
Exploration in relational worlds
ECML PKDD'10 Proceedings of the 2010 European conference on Machine learning and knowledge discovery in databases: Part II
Planning with noisy probabilistic relational rules
Journal of Artificial Intelligence Research
Exploration in relational domains for model-based reinforcement learning
The Journal of Machine Learning Research
Probabilistic relational models are an efficient way to learn and represent the dynamics of realistic environments consisting of many objects. Autonomous intelligent agents that ground this representation for all objects must plan in exponentially large state spaces with large sets of stochastic actions. A key insight for computational efficiency is that successful planning typically involves only a small subset of relevant objects. In this paper, we introduce a probabilistic model to represent planning with subsets of objects and provide a definition of object relevance. Our definition suffices to prove consistency between repeated planning in partially grounded models restricted to relevant objects and planning in the fully grounded model. We propose an algorithm that exploits object relevance to plan efficiently in complex domains. Empirical results in a simulated 3D blocksworld with an articulated manipulator and realistic physics demonstrate the effectiveness of our approach.
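The computational motivation above can be made concrete with a minimal sketch. The toy blocksworld below, with its illustrative object names and two hypothetical 2-ary action schemata (none of which come from the paper), shows how grounding action schemata only over a relevant object subset shrinks the ground action set, which is the source of the efficiency gain when planning in partially grounded models:

```python
from itertools import permutations

def ground_actions(schemata, objects):
    """Ground each 2-ary action schema over all ordered object pairs."""
    return [(name, a, b)
            for name in schemata
            for a, b in permutations(objects, 2)]

# Illustrative domain: 10 blocks, two hypothetical action schemata.
all_objects = [f"block{i}" for i in range(10)]
relevant = all_objects[:3]  # assumed relevant subset, e.g. blocks near the goal
schemata = ["grab", "puton"]

full = ground_actions(schemata, all_objects)  # 2 schemata * 10*9 pairs = 180
partial = ground_actions(schemata, relevant)  # 2 schemata * 3*2 pairs  = 12
print(len(full), len(partial))  # prints: 180 12
```

The ground action count grows quadratically in the number of objects per 2-ary schema (and exponentially in arity), so restricting grounding to the relevant subset reduces the branching factor of planning by orders of magnitude in larger domains.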