Autonomous agent learning using an actor-critic algorithm and behavior models

  • Author: Victor Uc Cetina
  • Affiliation: Humboldt University of Berlin, Berlin, Germany
  • Venue: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 3
  • Year: 2008

Abstract

We introduce a Supervised Reinforcement Learning (SRL) algorithm for autonomous learning problems in which an agent must deal with high-dimensional spaces. In our learning algorithm, behavior models learned from a set of examples are used to dynamically reduce the set of relevant actions at each state of the environment encountered by the agent. These action subsets guide the agent through promising parts of the action space and avoid the selection of useless actions. The algorithm handles continuous states and actions. Our experimental work with a difficult robot learning task clearly shows that this approach can significantly speed up the learning process and improve the final performance.
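
To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch of how a behavior model could restrict the actions explored by an actor-critic learner. It is not the paper's algorithm: the nearest-neighbour behavior model, the linear actor and critic, and all class and parameter names (BehaviorModel, SupervisedActorCritic, k, alpha, beta, gamma) are illustrative assumptions, and actions are treated as scalars for simplicity.

```python
# Hypothetical sketch (not the paper's method): a behavior model built from
# demonstration examples proposes a small set of candidate actions per state,
# and a toy linear actor-critic explores only within that candidate set.

import numpy as np


class BehaviorModel:
    """Suggests candidate actions for a state from demonstration examples."""

    def __init__(self, demo_states, demo_actions, k=5):
        self.demo_states = np.asarray(demo_states, dtype=float)
        self.demo_actions = np.asarray(demo_actions, dtype=float)
        self.k = k

    def candidate_actions(self, state):
        # Return the actions taken in the k demonstration states closest
        # (Euclidean distance) to the current state.
        dists = np.linalg.norm(self.demo_states - state, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return self.demo_actions[nearest]


class SupervisedActorCritic:
    """Toy actor-critic that only considers model-suggested actions."""

    def __init__(self, state_dim, alpha=0.01, beta=0.01, gamma=0.95):
        self.w = np.zeros(state_dim)      # critic: linear state-value weights
        self.theta = np.zeros(state_dim)  # actor: linear action-preference weights
        self.alpha, self.beta, self.gamma = alpha, beta, gamma

    def value(self, state):
        return float(np.dot(self.w, state))

    def select_action(self, state, candidates):
        # Score each candidate by closeness to the actor's preferred action,
        # with a little exploration noise; exploration never leaves the
        # candidate set supplied by the behavior model.
        preferred = float(np.dot(self.theta, state))
        noise = np.random.normal(0.0, 0.1, size=len(candidates))
        scores = -np.abs(candidates - preferred) + noise
        return float(candidates[int(np.argmax(scores))])

    def update(self, state, action, reward, next_state):
        # Standard TD(0) actor-critic update with linear approximators:
        # the critic follows the TD error, and the actor's preferred action
        # is nudged toward the executed action in proportion to that error.
        td_error = reward + self.gamma * self.value(next_state) - self.value(state)
        self.w += self.alpha * td_error * state
        preferred = float(np.dot(self.theta, state))
        self.theta += self.beta * td_error * (action - preferred) * state
        return td_error
```

In this sketch, each control step would call `BehaviorModel.candidate_actions(state)` and pass the result to `select_action`, so the learner never evaluates actions far from what the demonstrations suggest; this mirrors, under the stated assumptions, the abstract's claim that example-derived behavior models prune the action space the agent must explore.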