Active Learning for Reward Estimation in Inverse Reinforcement Learning

  • Authors:
  • Manuel Lopes; Francisco Melo; Luis Montesano

  • Affiliations:
  • Instituto de Sistemas e Robótica - Instituto Superior Técnico, Lisboa, Portugal; Carnegie Mellon University, Pittsburgh, USA; Universidad de Zaragoza, Zaragoza, Spain

  • Venue:
  • ECML PKDD '09: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II
  • Year:
  • 2009

Abstract

Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at "arbitrary" states. The purpose of our algorithm is to estimate the reward function with accuracy comparable to that of other methods in the literature while reducing the number of policy samples required from the expert. We also discuss the use of our algorithm in higher-dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm on several simulated problems of varying complexity.
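
The query-selection idea described in the abstract can be sketched in a few lines. The code below is a minimal illustration, not the authors' algorithm: candidate rewards are sampled Monte Carlo-style from a prior (standing in for a posterior over rewards given the demonstrations seen so far), each sample is solved for its greedy policy, and the demonstrator is queried at the state where those induced policies disagree most. All MDP quantities here (`n_states`, `n_actions`, the transition model `P`) are illustrative assumptions.

```python
# Hedged sketch of active query selection for IRL (illustrative, not the
# paper's exact method). Assumption: a toy MDP with random transitions.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 10, 3, 0.95

# Random transition model P[a, s, s'] (assumption, stands in for a real MDP).
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

def greedy_policy(reward, n_iters=200):
    """Value iteration on the toy MDP; returns the greedy action per state."""
    V = np.zeros(n_states)
    for _ in range(n_iters):
        Q = reward[None, :] + gamma * (P @ V)   # Q[a, s]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)

# Monte Carlo reward samples from a flat Gaussian prior, used here as a
# placeholder for the posterior conditioned on demonstrations so far.
policies = np.array([greedy_policy(rng.normal(size=n_states))
                     for _ in range(50)])       # shape (50, n_states)

# Per-state entropy of the sampled greedy actions: high entropy means the
# reward samples disagree on the optimal action, so querying is informative.
counts = np.apply_along_axis(
    lambda a: np.bincount(a, minlength=n_actions), 0, policies)
probs = counts / counts.sum(axis=0)
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=0)

query_state = int(entropy.argmax())
print(f"Query the demonstrator at state {query_state}")
```

Entropy over the sampled action choices is just one simple disagreement measure; the paper's actual query criterion and posterior-sampling scheme may differ.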