Active learning of relational action models

  • Authors:
  • Christophe Rodrigues, Pierre Gérard, Céline Rouveirol, Henry Soldano

  • Affiliations:
  • L.I.P.N, UMR-CNRS 7030, Université Paris-Nord, Villetaneuse, France (all authors)

  • Venue:
  • ILP'11 Proceedings of the 21st international conference on Inductive Logic Programming
  • Year:
  • 2011


Abstract

We consider an agent that learns a relational action model in order to predict the effects of its actions. The model consists of a set of STRIPS-like rules, i.e., rules predicting what changes in the current state when a given action is applied, provided that a set of preconditions is satisfied by the current state. Several rules can be associated with a given action, which allows conditional effects to be modeled. Learning is online, as examples result from actions performed by the agent, and incremental, as the current action model is revised each time it is contradicted by unexpected effects of the agent's actions. The form of the model allows it to be used as input to standard planners. In this work, the learning unit IRALe is embedded in an integrated system able to i) learn an action model, ii) select its actions, and iii) plan to reach a goal. The agent uses the current action model to perform active learning, i.e., to select actions intended to reach states that will force a revision of the model, and uses its planning abilities to obtain a realistic evaluation of the model's accuracy.
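To make the rule format concrete, the following is a minimal sketch of STRIPS-like rules with preconditions, add effects, and delete effects, plus the contradiction check that would trigger a revision. This is an illustrative simplification, not the IRALe implementation: it uses ground (variable-free) facts, whereas the paper's rules are relational, and all names (`Rule`, `predict`, `contradicted`, the blocks-world facts) are hypothetical.

```python
# Hedged sketch of STRIPS-like action rules; NOT the IRALe code.
# Facts are ground strings here; the actual model is relational.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str                 # action this rule is associated with
    preconditions: frozenset    # facts that must hold in the current state
    add_effects: frozenset      # facts the action adds to the state
    del_effects: frozenset      # facts the action removes from the state

def predict(rules, state, action):
    """Return the predicted next state, or None if no rule fires.

    Several rules may be associated with one action; the first whose
    preconditions are satisfied supplies the (conditional) effect."""
    for r in rules:
        if r.action == action and r.preconditions <= state:
            return (state - r.del_effects) | r.add_effects
    return None

def contradicted(rules, state, action, observed_next):
    """True when the model's prediction disagrees with the observed
    next state; this is what triggers a revision (revision not shown)."""
    predicted = predict(rules, state, action)
    return predicted is not None and predicted != observed_next

# Usage: a blocks-world-style rule for picking up a clear block b.
pickup_b = Rule(
    action="pickup(b)",
    preconditions=frozenset({"clear(b)", "ontable(b)", "handempty"}),
    add_effects=frozenset({"holding(b)"}),
    del_effects=frozenset({"clear(b)", "ontable(b)", "handempty"}),
)
state = frozenset({"clear(b)", "ontable(b)", "handempty"})
next_state = predict([pickup_b], state, "pickup(b)")
```

An active learner in this spirit would prefer actions for which `contradicted` is likely to become true, since only contradictions yield revisions of the model.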