On Average Versus Discounted Reward Temporal-Difference Learning

  • Authors:
  • John N. Tsitsiklis; Benjamin Van Roy

  • Affiliations:
  • Laboratory for Information and Decision Systems, M.I.T., Cambridge, MA 02139, USA. jnt@mit.edu; Department of Management Science and Engineering and Electrical Engineering, Stanford University, Stanford, CA 94305, USA. bvr@stanford.edu

  • Venue:
  • Machine Learning
  • Year:
  • 2002


Abstract

We provide an analytical comparison between discounted and average reward temporal-difference (TD) learning with linearly parameterized approximations. We first consider the asymptotic behavior of the two algorithms. We show that as the discount factor approaches 1, the value function produced by discounted TD approaches the differential value function generated by average reward TD. We further argue that if the constant function—which is typically used as one of the basis functions in discounted TD—is appropriately scaled, the transient behaviors of the two algorithms are also similar. Our analysis suggests that the computational advantages of average reward TD that have been observed in some prior empirical work may have been caused by inappropriate basis function scaling rather than fundamental differences in problem formulations or algorithms.
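For orientation, the following is a minimal sketch (not the authors' code) of the two update rules being compared: one step of discounted TD(0) and of average-reward TD(0), each with a linearly parameterized value function. All names (phi, theta, mu, alpha, beta) are illustrative assumptions, and the sketch omits the eligibility traces and basis-function scaling analyzed in the paper.

```python
import numpy as np

def discounted_td_step(theta, phi_s, phi_s_next, r, gamma, alpha):
    """Discounted TD(0): V(s) is approximated by theta . phi(s);
    the target is discounted by gamma."""
    td_error = r + gamma * phi_s_next @ theta - phi_s @ theta
    return theta + alpha * td_error * phi_s

def average_reward_td_step(theta, mu, phi_s, phi_s_next, r, alpha, beta):
    """Average-reward TD(0): learns a differential value function;
    mu is a running estimate of the average reward, which replaces
    discounting in the temporal-difference error."""
    td_error = r - mu + phi_s_next @ theta - phi_s @ theta
    theta_new = theta + alpha * td_error * phi_s
    mu_new = mu + beta * (r - mu)  # update average-reward estimate
    return theta_new, mu_new
```

As the abstract notes, when gamma approaches 1 the fixed point of the discounted update approaches (up to a constant offset) the differential value function targeted by the average-reward update, which is why the constant basis function and its scaling play a central role in the comparison.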