An off-policy natural policy gradient method for a partial observable Markov decision process

  • Authors:
  • Yutaka Nakamura;Takeshi Mori;Shin Ishii

  • Affiliations:
  • Nara Institute of Science and Technology;Nara Institute of Science and Technology;Nara Institute of Science and Technology

  • Venue:
  • ICANN'05: Proceedings of the 15th International Conference on Artificial Neural Networks: Formal Models and Their Applications - Volume Part II
  • Year:
  • 2005

Abstract

The exploration-exploitation problem is a long-standing issue in the field of reinforcement learning: an agent must decide whether to explore for a better action, which may not necessarily exist, or to exploit rewards by taking the current best action. In this article, we propose an off-policy reinforcement learning method based on natural policy gradient learning as a solution to the exploration-exploitation problem. In our method, the policy gradient is estimated from a sequence of state-action pairs sampled by executing an arbitrary "behavior policy"; this allows us to address the exploration-exploitation problem by controlling how behavior policies are generated. By applying the method to an autonomous control problem of a three-dimensional cart-pole, we show that it can realize optimal control efficiently in a partially observable domain.
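
The abstract only outlines the idea; below is a minimal Python sketch of the general technique it names, off-policy policy gradient estimation with importance weighting followed by Fisher-information preconditioning (a natural gradient step). It is not the authors' exact algorithm and does not model their 3-D cart-pole or partial observability; the toy two-state, two-action problem, the softmax target policy, and all hyper-parameters are illustrative assumptions.

  import numpy as np

  # Sketch: estimate a (natural) policy gradient for a target policy from
  # samples drawn under a separate, exploratory behavior policy.
  # Toy 2-state, 2-action problem; all values below are assumptions.
  rng = np.random.default_rng(0)

  N_STATES, N_ACTIONS = 2, 2
  R = np.array([[1.0, 0.0],          # reward for action a in state s
                [0.0, 1.0]])

  def softmax_policy(theta, s):
      """Target policy pi_theta(a|s): softmax over per-(state, action) parameters."""
      prefs = theta[s] - theta[s].max()
      p = np.exp(prefs)
      return p / p.sum()

  def grad_log_pi(theta, s, a):
      """Gradient of log pi_theta(a|s) with respect to theta."""
      g = np.zeros_like(theta)
      p = softmax_policy(theta, s)
      g[s] = -p
      g[s, a] += 1.0
      return g

  def natural_gradient_step(theta, behavior, n_samples=5000, lr=0.1):
      """One update: importance-weighted gradient estimate, then precondition
      by an estimated Fisher information matrix (natural gradient)."""
      d = theta.size
      grad = np.zeros(d)
      fisher = np.zeros((d, d))
      for _ in range(n_samples):
          s = rng.integers(N_STATES)                        # uniform start state
          a = rng.choice(N_ACTIONS, p=behavior[s])          # sample from behavior policy
          w = softmax_policy(theta, s)[a] / behavior[s, a]  # importance weight pi/b
          g = grad_log_pi(theta, s, a).ravel()
          grad += w * R[s, a] * g
          fisher += w * np.outer(g, g)
      grad /= n_samples
      fisher = fisher / n_samples + 1e-3 * np.eye(d)        # regularize before inversion
      nat_grad = np.linalg.solve(fisher, grad)              # F^{-1} * gradient
      return theta + lr * nat_grad.reshape(theta.shape)

  theta = np.zeros((N_STATES, N_ACTIONS))
  behavior = np.full((N_STATES, N_ACTIONS), 0.5)            # fixed exploratory behavior policy
  for _ in range(50):
      theta = natural_gradient_step(theta, behavior)
  print(softmax_policy(theta, 0), softmax_policy(theta, 1))

Because the samples are reweighted by pi_theta(a|s) / b(a|s), the behavior policy can be made as exploratory as desired without biasing the gradient estimate for the target policy, which is the sense in which handling the generation of behavior policies addresses exploration versus exploitation.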