We consider an agent that learns a relational action model in order to predict the effects of its actions. The model consists of a set of STRIPS-like rules, i.e. rules predicting what changes in the current state when a given action is applied, provided that a set of preconditions is satisfied by that state. Several rules can be associated with a given action, which makes it possible to model conditional effects. Learning is online, as examples result from actions performed by the agent, and incremental, as the current action model is revised each time it is contradicted by unexpected effects of the agent's actions. The form of the model allows it to be used as input to standard planners. In this work, the learning unit IRALe is embedded in an integrated system able to i) learn an action model, ii) select its actions, and iii) plan to reach a goal. The agent uses the current action model to perform active learning, i.e. to select actions aimed at reaching states that will force a revision of the model, and uses its planning abilities to obtain a realistic evaluation of the model's accuracy.
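The rule representation and revision trigger described above can be sketched as follows. This is a minimal illustration, not the authors' IRALe implementation: the `Rule` structure, the `pickup(a)` example, and the set-based state encoding are all assumptions made for the sketch. Each rule fires when its preconditions hold in the current state; a revision is triggered when the predicted successor state disagrees with the observed one.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    # Hypothetical STRIPS-like rule: preconditions plus add/delete effects.
    action: str
    preconditions: frozenset  # literals that must hold for the rule to fire
    add_effects: frozenset    # literals made true by the action
    del_effects: frozenset    # literals made false by the action

def predict(rules, state, action):
    """Return the predicted successor state, or None if no rule fires.

    Several rules may share an action name; the first whose preconditions
    hold is used, which is how conditional effects are modeled.
    """
    for r in rules:
        if r.action == action and r.preconditions <= state:
            return (state - r.del_effects) | r.add_effects
    return None

def contradicted(rules, state, action, observed_next):
    """True when the model's prediction disagrees with the observed effects,
    i.e. when an incremental revision of the model would be triggered."""
    predicted = predict(rules, state, action)
    return predicted is not None and predicted != observed_next

# Hypothetical blocks-world example.
rules = [Rule("pickup(a)",
              frozenset({"clear(a)", "ontable(a)", "handempty"}),
              frozenset({"holding(a)"}),
              frozenset({"clear(a)", "ontable(a)", "handempty"}))]
s = frozenset({"clear(a)", "ontable(a)", "handempty"})
print(predict(rules, s, "pickup(a)"))  # -> frozenset({'holding(a)'})
```

In active learning, the agent would favor actions for which `contradicted` is likely to return True, since such transitions are exactly the ones that improve the model.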