Forward and backward feature selection in gradient-based MDP algorithms

  • Authors:
  • Karina Olga Maizman Bogdan
  • Valdinei Freire da Silva

  • Affiliation (both authors):
  • Escola de Artes, Ciências e Humanidades, Universidade de São Paulo (EACH-USP), São Paulo, Brazil

  • Venue:
  • MICAI'12 Proceedings of the 11th Mexican International Conference on Advances in Artificial Intelligence - Volume Part I
  • Year:
  • 2012

Abstract

In problems modeled as Markov Decision Processes (MDPs), knowledge transfer is related to the notions of generalization and state abstraction. Abstraction can be obtained through a factored representation that describes states with a set of features. The best action for a state can then be easily transferred to similar states, i.e., states with similar features. In this paper we compare forward and backward greedy feature selection to find an appropriate compact set of features for such abstraction, thus facilitating the transfer of knowledge to new problems. We also present heuristic versions of both approaches and compare all of the approaches in a discrete simulated navigation problem.
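To illustrate the two search directions the abstract contrasts, here is a minimal, generic sketch of greedy forward selection and backward elimination over a feature set. The paper evaluates feature subsets via gradient-based MDP algorithms; the `score` function and the `relevant` feature set below are purely illustrative assumptions standing in for that evaluation, not the authors' method.

```python
def forward_selection(features, evaluate):
    """Greedy forward selection: start from the empty set and repeatedly
    add the single feature that most improves the score; stop when no
    addition helps."""
    selected, remaining = [], list(features)
    best_score = evaluate(selected)
    improved = True
    while improved and remaining:
        improved, best_f = False, None
        for f in remaining:
            s = evaluate(selected + [f])
            if s > best_score:
                best_score, best_f, improved = s, f, True
        if improved:
            selected.append(best_f)
            remaining.remove(best_f)
    return selected

def backward_selection(features, evaluate):
    """Greedy backward elimination: start from the full set and repeatedly
    drop the single feature whose removal most improves the score."""
    selected = list(features)
    best_score = evaluate(selected)
    improved = True
    while improved and selected:
        improved, best_f = False, None
        for f in selected:
            s = evaluate([x for x in selected if x != f])
            if s > best_score:
                best_score, best_f, improved = s, f, True
        if improved:
            selected.remove(best_f)
    return selected

# Toy stand-in for the MDP-based evaluation (illustrative assumption):
# reward features from a "relevant" set and penalize subset size, so a
# compact abstraction scores best.
relevant = {"x", "y"}
def score(fs):
    return sum(1.0 for f in fs if f in relevant) - 0.1 * len(fs)

feats = ["x", "y", "goal_dist", "noise"]
print(sorted(forward_selection(feats, score)))   # ['x', 'y']
print(sorted(backward_selection(feats, score)))  # ['x', 'y']
```

On this toy score both directions converge to the same compact subset, but in general they need not: forward selection can miss features that are only useful jointly, while backward elimination can be costly when the full feature set is large.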