A New Natural Policy Gradient by Stationary Distribution Metric

  • Authors:
  • Tetsuro Morimura; Eiji Uchibe; Junichiro Yoshimoto; Kenji Doya

  • Affiliations:
  • Initial Research Project, Okinawa Institute of Science and Technology, and IBM Research, Tokyo Research Laboratory; Initial Research Project, Okinawa Institute of Science and Technology; Initial Research Project, Okinawa Institute of Science and Technology, and Graduate School of Information Science, Nara Institute of Science and Technology; Initial Research Project, Okinawa Institute of Science and Technology, Graduate School of Information Science, Nara Institute of Science and Technology, and ATR Computational Neuroscience ...

  • Venue:
  • ECML PKDD '08: Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases - Part II
  • Year:
  • 2008

Abstract

The parameter space of a statistical learning machine has a Riemannian metric structure induced by its objective function. Amari [1] proposed the concept of the "natural gradient", which takes this Riemannian metric of the parameter space into account. Kakade [2] applied it to policy gradient reinforcement learning, yielding the natural policy gradient (NPG). Although NPGs evidently depend on the underlying Riemannian metric, previous studies paid little attention to alternative choices of the metric. In this paper, we propose a Riemannian metric for the joint state-action distribution, which is directly linked to the average reward, and derive a new NPG named the "Natural State-action Gradient" (NSG). We then prove that the NSG can be computed by fitting a certain linear model to the immediate reward function. In numerical experiments, we verify that NSG learning can handle MDPs with a large number of states, for which the performance of existing (N)PG methods degrades.
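
For orientation, a natural gradient preconditions the vanilla gradient of the objective by the inverse of a Riemannian metric. The abstract does not spell out the metrics involved; the following is a minimal sketch, assuming an average-reward objective J(θ), a policy π_θ(a|s), its stationary state distribution d_{π_θ}(s), and the joint stationary distribution p_θ(s,a) = d_{π_θ}(s) π_θ(a|s). These symbols are illustrative notation, not taken from the paper itself; the NSG metric shown is only our reading of a "stationary distribution metric", not the paper's exact definition.

```latex
\begin{align*}
  % Natural gradient: precondition the vanilla gradient by a Riemannian metric G(\theta)
  \widetilde{\nabla}_\theta J(\theta) &= G(\theta)^{-1} \nabla_\theta J(\theta) \\
  % Kakade's NPG metric: Fisher information of the policy, averaged over the
  % stationary state distribution d_{\pi_\theta}
  G_{\mathrm{NPG}}(\theta) &= \sum_{s} d_{\pi_\theta}(s) \sum_{a} \pi_\theta(a \mid s)\,
      \nabla_\theta \log \pi_\theta(a \mid s)\, \nabla_\theta \log \pi_\theta(a \mid s)^{\top} \\
  % A state-action metric in the spirit of the title: Fisher information of the joint
  % stationary distribution p_\theta(s,a) = d_{\pi_\theta}(s)\, \pi_\theta(a \mid s)
  G_{\mathrm{NSG}}(\theta) &= \sum_{s,a} p_\theta(s,a)\,
      \nabla_\theta \log p_\theta(s,a)\, \nabla_\theta \log p_\theta(s,a)^{\top}
\end{align*}
```

The two candidate metrics differ only in which distribution supplies the score function: the policy alone for the NPG, versus the joint state-action stationary distribution, whose dependence on θ also passes through d_{π_θ}, for the state-action variant.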