Convergence of reinforcement learning with general function approximators

  • Authors:
  • Vassilis A. Papavassiliou; Stuart Russell

  • Affiliations:
  • Computer Science Division, University of California, Berkeley, CA (both authors)

  • Venue:
  • IJCAI'99: Proceedings of the 16th International Joint Conference on Artificial Intelligence - Volume 2
  • Year:
  • 1999

Abstract

A key open problem in reinforcement learning is to assure convergence when using a compact hypothesis class to approximate the value function. Although the standard temporal-difference learning algorithm has been shown to converge when the hypothesis class is a linear combination of fixed basis functions, it may diverge with a general (non-linear) hypothesis class. This paper describes the Bridge algorithm, a new method for reinforcement learning, and shows that it converges to an approximate global optimum for any agnostically learnable hypothesis class. Convergence is demonstrated on a simple example for which temporal-difference learning fails. Weak conditions are identified under which the Bridge algorithm converges for any hypothesis class. Finally, connections are made between the complexity of reinforcement learning and the PAC-learnability of the hypothesis class.
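The linear temporal-difference setting mentioned in the abstract (the case for which convergence is known) can be sketched as follows. This is a minimal illustration of semi-gradient TD(0) with a fixed feature map, not the paper's Bridge algorithm; the environment interface env_step, the feature function, and the step-size alpha are illustrative assumptions.

    import numpy as np

    def linear_td0(env_step, features, n_features, n_steps,
                   gamma=0.9, alpha=0.05, seed=0):
        """Semi-gradient TD(0) with a linear value approximator V(s) = features(s) @ theta.

        This is the linear setting for which TD is known to converge;
        it is not the Bridge algorithm described in the paper.
        env_step(state, rng) -> (next_state, reward, done) is an assumed interface.
        """
        rng = np.random.default_rng(seed)
        theta = np.zeros(n_features)
        state = 0  # assume an integer start state
        for _ in range(n_steps):
            next_state, reward, done = env_step(state, rng)
            phi, phi_next = features(state), features(next_state)
            # TD error: r + gamma * V(s') - V(s), with V(terminal state) = 0
            target = reward + (0.0 if done else gamma * phi_next @ theta)
            delta = target - phi @ theta
            theta += alpha * delta * phi  # semi-gradient update on the weights
            state = 0 if done else next_state
        return theta

With a general (non-linear) hypothesis class, the analogous update replaces phi with the gradient of the approximator at the current state; as the abstract notes, that variant can diverge, which is the failure mode the Bridge algorithm is designed to avoid.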