Metrics for finite Markov decision processes

  • Authors:
  • Norm Ferns, Prakash Panangaden, Doina Precup

  • Affiliations:
  • McGill University, Montréal, Canada (all authors)

  • Venue:
  • UAI '04: Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2004

Abstract

We present metrics for measuring the similarity of states in a finite Markov decision process (MDP). Our metrics are based on the notion of bisimulation for MDPs and are aimed at solving discounted infinite-horizon reinforcement learning tasks. Such metrics can be used to aggregate states, as well as to better structure other value function approximators (e.g., memory-based or nearest-neighbor approximators). We provide bounds that relate our metric distances to the optimal values of states in the given MDP.
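
As a hedged illustration (the notation below is introduced here, not quoted from the abstract), bisimulation metrics of this kind are typically defined as the fixed point of an operator that combines per-action reward differences with a Kantorovich (earth mover's) distance between next-state distributions. Here R and P denote the reward and transition functions, T_K is the Kantorovich metric, and the weights c_R and c_T are illustrative placeholders:

    d(s, s') \;=\; \max_{a \in A} \Big( c_R \,\bigl| R(s,a) - R(s',a) \bigr|
        \;+\; c_T \, T_K(d)\bigl( P(\cdot \mid s,a),\, P(\cdot \mid s',a) \bigr) \Big)

With the illustrative choice c_R = 1 and c_T = \gamma (the discount factor), a bound of the kind mentioned in the abstract takes the form \bigl| V^*(s) - V^*(s') \bigr| \le d(s, s'), so states that are close under the metric have close optimal values. In practice, metrics of this form are usually approximated by iterating the operator from the zero metric, with each Kantorovich term computed by a small linear program.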