Inductive policy selection for first-order MDPs

  • Authors:
  • SungWook Yoon, Alan Fern, Robert Givan

  • Affiliations:
  • School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN

  • Venue:
  • UAI'02: Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence
  • Year:
  • 2002


Abstract

We select policies for large Markov Decision Processes (MDPs) with compact first-order representations. We find policies that generalize well as the number of objects in the domain grows, potentially without bound. Existing dynamic-programming approaches based on flat, propositional, or first-order representations are either impractical here or do not naturally scale as the number of objects grows without bound. We implement and evaluate an alternative approach that induces first-order policies from training data constructed by solving small problem instances with PGraphplan (Blum & Langford, 1999). Our policies are represented as ensembles of decision lists, using a taxonomic concept language. This approach extends the work of Martin and Geffner (2000) to stochastic domains, ensemble learning, and a wider variety of problems. Empirically, we find "good" policies for several stochastic first-order MDPs that are beyond the scope of previous approaches. We also discuss the application of this work to the relational reinforcement-learning problem.
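To make the policy representation concrete, the following is a minimal sketch, not the authors' implementation: it assumes relational states encoded as sets of ground atoms, rules written as plain Python functions rather than taxonomic concept expressions, and a simple majority vote over decision lists; the blocks-world rule `unstack_clear_block` is purely hypothetical.

```python
# Sketch (assumptions, not the paper's code): a relational state is a set of
# ground atoms, a decision list is an ordered sequence of rule functions, and
# an ensemble of decision lists votes on which action to take.
from collections import Counter


def apply_decision_list(rules, state):
    """Return the action proposed by the first rule that fires, else None."""
    for rule in rules:
        action = rule(state)
        if action is not None:
            return action
    return None


def ensemble_policy(decision_lists, state):
    """Majority vote over the actions proposed by each decision list."""
    votes = Counter()
    for rules in decision_lists:
        action = apply_decision_list(rules, state)
        if action is not None:
            votes[action] += 1
    return votes.most_common(1)[0][0] if votes else None


# Hypothetical blocks-world rule: unstack any clear block that sits on
# another block (i.e., is not on the table).
def unstack_clear_block(state):
    for pred, *args in state:
        if pred == "on" and ("clear", args[0]) in state:
            return ("unstack", args[0], args[1])
    return None


if __name__ == "__main__":
    state = frozenset({("on", "a", "b"), ("clear", "a"), ("ontable", "b")})
    print(ensemble_policy([[unstack_clear_block]], state))  # ('unstack', 'a', 'b')
```

Because the rules quantify over objects rather than naming them, the same decision list applies unchanged to instances with any number of blocks, which is the generalization property the abstract emphasizes.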