Motivated by the interest in relational reinforcement learning, we introduce REBEL, a novel relational Bellman update operator. REBEL employs a constraint logic programming language to compactly represent Markov decision processes over relational domains. Building on REBEL, we develop a value iteration algorithm in which abstraction over both states and actions plays a major role. This framework provides new insights into relational reinforcement learning. Convergence results as well as experiments are presented.
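To make the idea concrete, here is a minimal, hypothetical sketch of value iteration over *abstract* states, in the spirit of the relational Bellman update described above. The real REBEL operator manipulates logical abstractions expressed in a constraint logic programming language; in this sketch each abstract state is simply a label standing in for many ground states, and the tiny MDP (the states `on(a,b)` and `clear(a)`, the actions, probabilities, and rewards) is an invented toy example, not taken from the paper.

```python
GAMMA = 0.9  # discount factor

# Toy abstract MDP (hypothetical example):
# transitions[state][action] -> list of (probability, next_state, reward)
transitions = {
    "on(a,b)": {"move": [(0.9, "clear(a)", 1.0), (0.1, "on(a,b)", 0.0)]},
    "clear(a)": {"noop": [(1.0, "clear(a)", 0.0)]},
}

def bellman_update(V):
    """One sweep of the Bellman optimality update over all abstract states."""
    new_V = {}
    for s, actions in transitions.items():
        new_V[s] = max(
            sum(p * (r + GAMMA * V[ns]) for p, ns, r in outcomes)
            for outcomes in actions.values()
        )
    return new_V

def value_iteration(eps=1e-6):
    """Iterate the Bellman update to (approximate) convergence."""
    V = {s: 0.0 for s in transitions}
    while True:
        new_V = bellman_update(V)
        if max(abs(new_V[s] - V[s]) for s in V) < eps:
            return new_V
        V = new_V
```

Because each abstract state groups an entire set of ground states, one such update covers all of them at once; this is the abstraction the framework exploits, although REBEL performs it symbolically rather than by enumerating labels as done here.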