Incremental learning of relational action models in noisy environments

  • Authors:
  • Christophe Rodrigues; Pierre Gérard; Céline Rouveirol

  • Affiliations:
  • LIPN/A3, University of Paris (all three authors)

  • Venue:
  • ILP'10: Proceedings of the 20th International Conference on Inductive Logic Programming
  • Year:
  • 2010

Abstract

In the Relational Reinforcement Learning framework, we propose an algorithm that learns an action model (an approximation of the transition function) in order to predict the state resulting from an action in a given situation. The algorithm incrementally learns a set of first-order rules in a noisy environment, following a data-driven loop: each time a new example contradicts the current action model, the model is revised (by generalization and/or specialization). Unlike a previous version of our algorithm, which operates in a noise-free context, we introduce here a set of indicators attached to each rule that allow us to evaluate whether a revision should take place immediately or be delayed. We provide an empirical evaluation on standard RRL benchmarks.
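To make the delayed-revision idea concrete, below is a minimal Python sketch of such a data-driven loop. It is not the paper's algorithm: states are represented as plain sets rather than first-order atoms, and the per-rule indicators (supports, contradictions) together with the contradiction-rate test are assumptions standing in for whatever statistics the authors actually attach to each rule.

```python
from dataclasses import dataclass


@dataclass
class Rule:
    """Simplified stand-in for one rule of the action model.

    The indicators (supports, contradictions) and the revision criterion
    are illustrative assumptions; the abstract does not specify which
    statistics the paper attaches to each rule.
    """
    precondition: frozenset   # atoms that must hold in (state + action)
    effect: frozenset         # atoms predicted to hold in the next state
    supports: int = 0         # examples this rule predicted correctly
    contradictions: int = 0   # examples this rule mispredicted

    def matches(self, state, action):
        return self.precondition <= (state | {action})

    def predicts(self, next_state):
        return self.effect <= next_state


def revise(rule, state, action, next_state):
    """Placeholder for the paper's revision step, which generalizes
    and/or specializes the contradicted rule against the new example."""
    pass


def revision_loop(rules, stream, error_rate=0.2, min_evidence=5):
    """Data-driven loop: each example updates the indicators of the rules
    covering it; a contradicted rule is revised only once evidence of a
    systematic error has accumulated, so that a single noisy example does
    not trigger an immediate (and possibly harmful) revision."""
    for state, action, next_state in stream:
        for rule in rules:
            if not rule.matches(state, action):
                continue
            if rule.predicts(next_state):
                rule.supports += 1
            else:
                rule.contradictions += 1
                seen = rule.supports + rule.contradictions
                # Delay revision until the contradiction rate is credible.
                if seen >= min_evidence and rule.contradictions / seen > error_rate:
                    revise(rule, state, action, next_state)
                    rule.supports = rule.contradictions = 0
```

The `error_rate` and `min_evidence` thresholds are hypothetical parameters chosen only to illustrate the trade-off the abstract describes: revising too eagerly lets noise corrupt the model, while revising too lazily delays the correction of genuinely wrong rules.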