Considering Unseen States as Impossible in Factored Reinforcement Learning

  • Authors:
  • Olga Kozlova; Olivier Sigaud; Pierre-Henri Wuillemin; Christophe Meyer

  • Affiliations:
  • Institut des Systèmes Intelligents et de Robotique, Université Pierre et Marie Curie - Paris 6, CNRS UMR 7222, Paris, France F-75005 and Thales Security Solutions & Services, Simulation, ...
  • Institut des Systèmes Intelligents et de Robotique, Université Pierre et Marie Curie - Paris 6, CNRS UMR 7222, Paris, France F-75005
  • Laboratoire d'Informatique de Paris 6, Université Pierre et Marie Curie - Paris 6, CNRS UMR 7606, Paris, France F-75005
  • Thales Security Solutions & Services, ThereSIS Research and Innovation Office, Palaiseau, France 91767

  • Venue:
  • ECML PKDD '09 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part I
  • Year:
  • 2009

Abstract

The Factored Markov Decision Process (FMDP) framework is a standard representation for sequential decision problems under uncertainty in which the state is represented as a collection of random variables. Factored Reinforcement Learning (FRL) is a model-based reinforcement learning approach to FMDPs in which the transition and reward functions of the problem are learned. In this paper, we show how to model, in a theoretically well-founded way, problems where some combinations of state variable values cannot occur, giving rise to impossible states. Furthermore, we propose a new heuristic that considers as impossible the states that have not been seen so far. We derive an algorithm whose performance improvement over the standard approach is illustrated through benchmark experiments.
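To make the heuristic concrete, the following is a minimal sketch, not taken from the paper itself: all names (`UnseenIsImpossible`, `observe`, `is_possible`) are hypothetical. It illustrates the core idea of tracking which factored states have actually been observed and treating every unseen combination of state variable values as impossible, so planning enumerates only the observed states rather than the full cross product of variable domains.

```python
# Hypothetical illustration of the "unseen = impossible" heuristic; this is
# NOT the authors' algorithm, only a sketch of the pruning idea it relies on.
from itertools import product


class UnseenIsImpossible:
    """Tracks observed factored states; unseen combinations are deemed impossible."""

    def __init__(self, variable_domains):
        # variable_domains: dict mapping each state variable to its value set,
        # e.g. {"x": (0, 1), "y": (0, 1, 2)}. Order fixes the state layout.
        self.variable_domains = variable_domains
        self.seen = set()

    def observe(self, state):
        # state: tuple of values, one per variable, in the fixed order.
        self.seen.add(tuple(state))

    def is_possible(self, state):
        # Heuristic: a state is considered possible only if it was observed.
        return tuple(state) in self.seen

    def possible_states(self):
        # Enumerate only observed states instead of the full cross product,
        # which may contain combinations that can never occur.
        return iter(self.seen)


if __name__ == "__main__":
    model = UnseenIsImpossible({"door_open": (0, 1), "has_key": (0, 1)})
    model.observe((0, 0))
    model.observe((1, 1))             # door only ever seen open when key held
    print(model.is_possible((1, 0)))  # False: never seen, treated as impossible
    full = list(product(*model.variable_domains.values()))
    pruned = list(model.possible_states())
    print(len(full), "->", len(pruned))  # planning iterates 2 states, not 4
```

In a factored setting the payoff of such pruning grows with the number of variables, since the full cross product is exponential in the number of state variables while the set of states actually encountered is typically far smaller.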