Proceedings of the Seventh International Conference on Machine Learning (1990)
Multistrategy Theory Revision: Induction and Abduction in INTHELEX
Machine Learning - Special issue on multistrategy learning
Relational reinforcement learning
Machine Learning - Special issue on inductive logic programming
Discovery as Autonomous Learning from the Environment
Machine Learning
EMCL '01 Proceedings of the 12th European Conference on Machine Learning
Learning action models from plan examples using weighted MAX-SAT
Artificial Intelligence
Knows what it knows: a framework for self-aware learning
Proceedings of the 25th International Conference on Machine Learning
Efficient learning of action schemas and web-service descriptions
AAAI'08 Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2
Learning symbolic models of stochastic domains
Journal of Artificial Intelligence Research
Online learning and exploiting relational models in reinforcement learning
IJCAI'07 Proceedings of the 20th International Joint Conference on Artificial Intelligence
Exploring compact reinforcement-learning representations with linear regression
UAI '09 Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence
Incremental Learning of Relational Action Rules
ICMLA '10 Proceedings of the 2010 Ninth International Conference on Machine Learning and Applications
Within the Relational Reinforcement Learning framework, we propose an algorithm that learns an action model (an approximation of the transition function) in order to predict the state resulting from an action in a given situation. The algorithm incrementally learns a set of first-order rules in a noisy environment, following a data-driven loop: each time a new example contradicts the current action model, the model is revised by generalization and/or specialization. In contrast to a previous version of our algorithm, which operates in a noise-free context, we introduce indicators attached to each rule that allow us to evaluate whether a revision should take place immediately or be delayed. We provide an empirical evaluation on standard RRL benchmarks.
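The data-driven loop described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the `Rule` class, the literal representation, the `min_evidence` and `noise_tolerance` thresholds, and the specialization heuristic are all hypothetical choices, intended only to show how per-rule counters can delay revision under noise.

```python
# Hypothetical sketch of an incremental rule-revision loop with per-rule
# reliability counters. All names and thresholds are illustrative, not
# taken from the paper.

class Rule:
    def __init__(self, precondition, effect):
        self.precondition = frozenset(precondition)  # literals required in the state
        self.effect = frozenset(effect)              # literals predicted in the next state
        self.supported = 0                           # correct predictions seen
        self.contradicted = 0                        # contradicting examples seen

    def matches(self, state):
        return self.precondition <= state

    def predicts(self, next_state):
        return self.effect <= next_state


def process_example(rules, state, next_state,
                    min_evidence=5, noise_tolerance=0.2):
    """One step of the data-driven loop: update each matching rule's counters,
    and revise (here, specialize) only when its contradiction rate exceeds the
    tolerance, so a single noisy example does not trigger an immediate revision."""
    for rule in rules:
        if not rule.matches(state):
            continue
        if rule.predicts(next_state):
            rule.supported += 1
        else:
            rule.contradicted += 1
            total = rule.supported + rule.contradicted
            if total >= min_evidence and rule.contradicted / total > noise_tolerance:
                # Illustrative specialization: add one extra literal from the
                # observed state to narrow the rule's precondition.
                extra = state - rule.precondition
                if extra:
                    rule.precondition = rule.precondition | {min(extra)}
                rule.supported = rule.contradicted = 0  # reset evidence
    return rules
```

With a tolerance of 0.2 and a minimum of 5 examples, a rule that is contradicted once among its first few observations is left untouched; the revision is delayed until the evidence suggests a genuine model error rather than noise.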