LC-Learning: Phased Method for Average Reward Reinforcement Learning - Analysis of Optimal Criteria

  • Authors: Taro Konda; Tomohiro Yamaguchi

  • Venue: PRICAI '02 Proceedings of the 7th Pacific Rim International Conference on Artificial Intelligence: Trends in Artificial Intelligence
  • Year: 2002


Abstract

This paper presents an analysis of criteria that measure policy optimality for average reward reinforcement learning. In previous work on undiscounted tasks, two criteria, gain-optimality and bias-optimality, have been presented. The former measures the average reward, while the latter evaluates transient actions. However, a limit factor in the definition of gain-optimality makes the real meaning of the criterion unclear and, what is worse, the performance function for bias-optimality does not always converge. Thus, previous methods compute an optimal policy through approximation, which means they do not always acquire the optimal policy because of finite errors. In addition, a theoretical proof of convergence to the optimal policy is difficult. To eliminate the ambiguity of these criteria, we show a necessary and sufficient condition for gain-optimality: a policy is gain-optimal if and only if it includes an optimal cycle. In other words, to find a gain-optimal policy we only need to search for the stationary cycle with the highest average reward. We also make the performance function for bias-optimality always converge by dividing it into two terms, the cycle-bias-value and the path-bias-value. Finally, we build the foundation of LC-learning, an algorithm for computing the bias-optimal policy in a cyclic domain.
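
To make the cycle-based characterization of gain-optimality concrete, the following is a minimal sketch in Python. It assumes a small, hypothetical deterministic MDP (the `transitions` table, state and action names, and reward values are illustrative, not taken from the paper), enumerates stationary deterministic policies, and keeps the one whose induced stationary cycle has the highest average reward, mirroring the search described in the abstract.

```python
from itertools import product

# Hypothetical deterministic MDP: transitions[(state, action)] = (next_state, reward).
# The structure and rewards below are illustrative only.
transitions = {
    ("s0", "a"): ("s1", 1.0),
    ("s0", "b"): ("s2", 0.0),
    ("s1", "a"): ("s0", 1.0),
    ("s1", "b"): ("s2", 5.0),
    ("s2", "a"): ("s2", 2.0),
    ("s2", "b"): ("s0", 0.0),
}
states = sorted({s for s, _ in transitions})
actions = sorted({a for _, a in transitions})

def cycle_average_reward(policy, start):
    """Follow the policy until a state repeats, then return the mean reward
    of the stationary cycle that is reached (the gain from this start state)."""
    first_visit = {}   # state -> step index at first visit
    rewards = []
    state, step = start, 0
    while state not in first_visit:
        first_visit[state] = step
        state, r = transitions[(state, policy[state])]
        rewards.append(r)
        step += 1
    cycle_rewards = rewards[first_visit[state]:]  # rewards earned inside the cycle
    return sum(cycle_rewards) / len(cycle_rewards)

# Enumerate deterministic stationary policies and keep the one whose induced
# cycle has the highest average reward (a gain-optimal policy, per the
# characterization stated in the abstract).
best_gain, best_policy = float("-inf"), None
for choice in product(actions, repeat=len(states)):
    policy = dict(zip(states, choice))
    gain = cycle_average_reward(policy, start="s0")
    if gain > best_gain:
        best_gain, best_policy = gain, policy

print("gain-optimal policy:", best_policy, "average reward:", best_gain)
```

Enumerating every deterministic policy is exponential in the number of states, so the sketch only illustrates the criterion itself; it does not reproduce the phased computation that LC-learning performs or the bias-optimality analysis.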