Learning Generalized Policies from Planning Examples Using Concept Languages

  • Authors:
  • Mario Martín, Hector Geffner

  • Affiliations:
  • LSI Department, Universitat Politècnica de Catalunya, Jordi Girona 1-3, 08034 Barcelona (Catalunya), Spain. mmartin@lsi.upc.es
  • Departamento de Computación y TI, Universidad Simón Bolívar, Aptdo. 89000, Caracas, Venezuela. hector@usb.ve

  • Venue:
  • Applied Intelligence
  • Year:
  • 2004


Abstract

In this paper we are concerned with the problem of learning to solve planning problems in a given domain from a number of solved instances. We formulate this as the problem of inferring a function that operates over all instances in the domain and maps states and goals into actions. We call such functions generalized policies, and the question we address is how to learn suitable representations of generalized policies from data. This question has been addressed recently by Roni Khardon (Technical Report TR-09-97, Harvard, 1997). Khardon represents generalized policies as ordered lists of existentially quantified rules that are inferred from a training set using a version of Rivest's learning algorithm (Machine Learning, vol. 2, no. 3, pp. 229–246, 1987). Here we follow Khardon's approach but represent generalized policies differently, using a concept language. We show through a number of experiments in the blocks world that the concept language yields better policies from a smaller set of examples and without background knowledge.
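To make the idea of a generalized policy concrete, the following is a minimal sketch, not the authors' representation: it models a policy as an ordered list of rules (as in Khardon's decision-list formulation cited above) and applies the first rule whose condition matches the current state and goal. All names (`Rule`, `apply_policy`, the toy blocks-world rules) are illustrative assumptions.

```python
# A generalized policy sketched as an ordered list of (condition, action) rules,
# evaluated against any state/goal pair from the domain. Illustrative only.
from dataclasses import dataclass
from typing import Callable, Optional

State = dict   # e.g. {"on": {("A", "B")}, "clear": {"A", "C"}, "ontable": {"B", "C"}}
Goal = set     # desired "on" facts, e.g. {("B", "C")}

@dataclass
class Rule:
    condition: Callable[[State, Goal], bool]
    action: Callable[[State, Goal], tuple]

def apply_policy(rules: list, state: State, goal: Goal) -> Optional[tuple]:
    """Return the action of the first rule whose condition holds, else None."""
    for rule in rules:
        if rule.condition(state, goal):
            return rule.action(state, goal)
    return None

# Toy blocks-world rules: stack a goal pair if both blocks are clear;
# otherwise unstack a clear, misplaced block.
def can_stack(state, goal):
    return any(x in state["clear"] and y in state["clear"]
               for (x, y) in goal - state["on"])

def do_stack(state, goal):
    x, y = next((x, y) for (x, y) in goal - state["on"]
                 if x in state["clear"] and y in state["clear"])
    return ("stack", x, y)

def has_misplaced(state, goal):
    return any(p not in goal and p[0] in state["clear"] for p in state["on"])

def do_unstack(state, goal):
    x, y = next(p for p in state["on"] if p not in goal and p[0] in state["clear"])
    return ("unstack", x, y)

policy = [Rule(can_stack, do_stack), Rule(has_misplaced, do_unstack)]

state = {"on": {("A", "B")}, "clear": {"A", "C"}, "ontable": {"B", "C"}}
goal = {("B", "C")}
print(apply_policy(policy, state, goal))  # → ('unstack', 'A', 'B')
```

Because the rules quantify over blocks rather than naming a fixed set, the same policy applies to any instance of the domain, which is the sense in which it is "generalized". Learning then amounts to inferring such an ordered rule list from the solved examples.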