Model Minimization in Hierarchical Reinforcement Learning

  • Authors:
  • Balaraman Ravindran; Andrew G. Barto

  • Venue:
  • Proceedings of the 5th International Symposium on Abstraction, Reformulation and Approximation
  • Year:
  • 2002

Abstract

When applied to real-world problems, Markov Decision Processes (MDPs) often exhibit considerable implicit redundancy, especially when there are symmetries in the problem. In this article we present an MDP minimization framework based on homomorphisms. The framework exploits redundancy and symmetry to derive smaller equivalent models of the problem. We then apply our minimization ideas to the options framework to derive relativized options, i.e., options defined without an absolute frame of reference. We demonstrate their utility empirically even in cases where the minimization criteria are not met exactly.
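
For context, the following is a common formal statement of the MDP homomorphism conditions underlying this style of minimization; it is a summary of the standard definition, not text quoted from the paper. A homomorphism h = (f, {g_s : s in S}) from an MDP M = (S, A, P, R) to a reduced MDP M' = (S', A', P', R') consists of a state map f: S -> S' and state-dependent action maps g_s: A -> A' satisfying, for all states s, s' and actions a,

  P'(f(s),\, g_s(a),\, f(s')) \;=\; \sum_{s'' \in f^{-1}(f(s'))} P(s, a, s'')

  R'(f(s),\, g_s(a)) \;=\; R(s, a)

Under these conditions, M' is a homomorphic image of M, and a policy computed in the smaller model M' can be lifted back to M while preserving its value, which is what makes the reduced model a sound substitute for planning and learning.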