A stochastic approximation method with max-norm projections and its application to the Q-learning algorithm

  • Authors:
  • Sumit Kunnumkal; Huseyin Topaloglu

  • Affiliations:
  • Indian School of Business, Gachibowli, Hyderabad; Cornell University, Ithaca, NY

  • Venue:
  • ACM Transactions on Modeling and Computer Simulation (TOMACS)
  • Year:
  • 2010

Abstract

In this article, we develop a stochastic approximation method for a monotone estimation problem and use it to improve the empirical performance of the Q-learning algorithm when applied to Markov decision problems with monotone value functions. We begin with a monotone estimation problem in which we want to estimate the expectation of a random vector η, where the components of E{η} are known to be in increasing order. The stochastic approximation method that we propose exploits this information by projecting its iterates onto the set of vectors with increasing components; the novel aspect of the method is that the projections are taken with respect to the max norm. We show that the method converges almost surely. Building on this result, we consider the Q-learning algorithm applied to Markov decision problems with monotone value functions and study a variant that uses such projections to ensure that the value function approximation obtained at each iteration is also monotone. Computational results indicate that exploiting the monotonicity property of the value functions can significantly improve the performance of the Q-learning algorithm.
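To make the two ideas in the abstract concrete, the sketch below shows one standard way to compute a max-norm projection onto the set of nondecreasing vectors, together with a toy tabular Q-learning step that re-projects the estimates after each update. This is a minimal illustration under my own assumptions: the closed-form projection `(running max + running min) / 2` and the choice to project each action's column of the Q-table are illustrative and are not necessarily the operator or the scheme used in the paper.

```python
# Illustrative sketch (not the paper's algorithm): a max-norm projection onto
# nondecreasing vectors and a toy monotone Q-learning update that uses it.
import numpy as np


def project_monotone_max_norm(x):
    """Return a nondecreasing vector y minimizing max_i |y_i - x_i|.

    One known closed form: y_i = (max_{j<=i} x_j + min_{j>=i} x_j) / 2.
    The resulting max-norm error equals half of the largest monotonicity
    violation in x, which no nondecreasing vector can improve upon.
    """
    running_max = np.maximum.accumulate(x)               # max over x[0..i]
    running_min = np.minimum.accumulate(x[::-1])[::-1]   # min over x[i..n-1]
    return 0.5 * (running_max + running_min)


def monotone_q_update(q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.95):
    """One tabular Q-learning step followed by a monotone projection.

    Assumes states are indexed so that the value function is known to be
    nondecreasing in the state; after the standard update, each action's
    column of Q is projected onto the nondecreasing cone (an illustrative
    choice of where to apply the projection).
    """
    target = reward + gamma * np.max(q[next_state])
    q[state, action] += alpha * (target - q[state, action])
    for a in range(q.shape[1]):
        q[:, a] = project_monotone_max_norm(q[:, a])
    return q


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=8)                 # noisy estimate of a monotone vector
    y = project_monotone_max_norm(x)
    print("noisy estimate :", np.round(x, 2))
    print("monotone proj. :", np.round(y, 2))
    print("max-norm error :", np.max(np.abs(y - x)))
```

The projection runs in linear time and keeps every component within half of the worst pairwise violation of the original iterate, which is why imposing the known monotonicity structure can only help (in the max-norm sense) when the true expectation is indeed increasing.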