In this paper we introduce negation into Logical Markov Decision Processes, a model for relational reinforcement learning. In the resulting model, nLMDP, the abstract state space can be constructed in a simple way such that a useful complementarity property holds. We also introduce prototype actions into the model. A distinct feature of nLMDP is that applicable abstract actions can be obtained automatically together with their valid substitutions. Given a complementary abstract state space and a set of prototype actions, a model-free Θ-learning method is implemented for evaluating the state-action-substitution value function.
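The abstract describes a model-free method that evaluates values over state-action-substitution triples rather than plain state-action pairs. The sketch below illustrates one possible tabular update of that flavor; the state and action names, the tuple representation of substitutions, and the Q-learning-style backup rule are all illustrative assumptions, not the paper's actual formulation.

```python
from collections import defaultdict

# Hypothetical tabular value function over (abstract_state, prototype_action,
# substitution) triples. Keys and learning constants are assumptions for
# illustration only.
ALPHA, GAMMA = 0.1, 0.9
theta = defaultdict(float)  # (state, action, substitution) -> value

def update(state, action, subst, reward, next_keys):
    """One model-free backup: bootstrap from the best value among the
    applicable (state, action, substitution) triples of the next state."""
    best_next = max((theta[k] for k in next_keys), default=0.0)
    key = (state, action, subst)
    theta[key] += ALPHA * (reward + GAMMA * best_next - theta[key])
    return theta[key]

# Example transition in a blocks-world-like abstraction: moving block 'a'
# off block 'b' yields reward 1.0; one applicable triple follows.
v = update("on(X,Y)", "move", ("a", "b"), 1.0,
           [("clear(X)", "move", ("a", "floor"))])
```

Because applicable actions and their valid substitutions are obtained automatically in nLMDP, the `next_keys` argument here stands in for that enumeration step.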