Model-based online learning of POMDPs

  • Authors:
  • Guy Shani; Ronen I. Brafman; Solomon E. Shimony

  • Affiliations:
  • Ben-Gurion University, Beer-Sheva, Israel; Ben-Gurion University, Beer-Sheva, Israel; Ben-Gurion University, Beer-Sheva, Israel

  • Venue:
  • ECML'05: Proceedings of the 16th European Conference on Machine Learning
  • Year:
  • 2005

Abstract

Learning to act in an unknown partially observable domain is a difficult variant of the reinforcement learning paradigm. Research in the area has focused on model-free methods — methods that learn a policy without learning a model of the world. When sensor noise increases, model-free methods provide less accurate policies. The model-based approach — learning a POMDP model of the world, and computing an optimal policy for the learned model — may generate superior results in the presence of sensor noise, but learning and solving a model of the environment is a difficult problem. We have previously shown how such a model can be obtained from the learned policy of model-free methods, but this approach implies an undesirable distinction between a learning phase and an acting phase. In this paper we present a novel method for learning a POMDP model online, based on McCallum's Utile Suffix Memory (USM), in conjunction with an approximate policy obtained using an incremental POMDP solver. We show that the incrementally improving policy provides superior results to the original USM algorithm, especially in the presence of increasing sensor and action noise.
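
To make the model-based setting described in the abstract concrete, the sketch below shows the standard Bayesian belief update an online agent would perform once a POMDP model has been learned. This is not the authors' code; the array names and shapes are illustrative assumptions.

```python
# Minimal sketch of online belief tracking over a learned POMDP model.
# T and O are assumed to be the learned transition and observation
# probabilities (hypothetical layout, not from the paper).
import numpy as np

def belief_update(belief, action, observation, T, O):
    """Compute b'(s') proportional to O[s', a, o] * sum_s T[s, a, s'] * b(s).

    belief: (S,) current probability distribution over hidden states
    T:      (S, A, S) transition model, T[s, a, s'] = P(s' | s, a)
    O:      (S, A, Z) observation model, O[s', a, o] = P(o | s', a)
    """
    predicted = belief @ T[:, action, :]           # sum_s b(s) * T(s, a, s')
    new_belief = O[:, action, observation] * predicted
    total = new_belief.sum()
    if total == 0.0:                               # observation inconsistent with model
        return np.ones_like(belief) / belief.size  # fall back to uniform belief
    return new_belief / total
```

In a model-based online learner, this update would run after every action-observation pair, with the learned model (and the policy computed for it) improving incrementally as more experience is gathered.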