Learning models of relational MDPs using graph kernels

  • Authors:
  • Florian Halbritter; Peter Geibel

  • Affiliations:
  • Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany (both authors)

  • Venue:
  • MICAI'07: Proceedings of the 6th Mexican International Conference on Artificial Intelligence, Advances in Artificial Intelligence
  • Year:
  • 2007


Abstract

Relational reinforcement learning is the application of reinforcement learning to structured state descriptions. Model-based methods learn a policy from a known model that comprises a description of the actions and their effects as well as the reward function. If the model is initially unknown, it can be learned first and the model-based method applied afterwards (indirect reinforcement learning). In this paper, we propose a method for model learning based on a combination of several SVMs using graph kernels. Nondeterministic action effects can be handled by combining the kernel approach with a clustering technique. We demonstrate the validity of the approach through a range of experiments on various Blocksworld scenarios.
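To illustrate the general idea, here is a minimal sketch (not the authors' implementation): Blocksworld states are represented as labeled graphs of `on(a, b)` relations, a simple shared-edge count serves as a stand-in for the paper's graph kernels, and the successor state of an action is predicted from the most kernel-similar training example. The kernel, the state encoding, and the prediction rule are all simplifying assumptions for illustration.

```python
# Sketch: graph-kernel-based prediction of action effects in Blocksworld.
# Assumptions: states are sets of on(a, b) edges; the kernel is a crude
# shared-edge count rather than the richer graph kernels used in the paper.
from collections import Counter

def state_graph(on_pairs):
    """Encode a Blocksworld state as a set of directed on(a, b) edges."""
    return frozenset(on_pairs)

def kernel(g1, g2):
    """Count edges common to both state graphs (a histogram kernel over
    edge labels -- a placeholder for walk- or subgraph-based kernels)."""
    c1, c2 = Counter(g1), Counter(g2)
    return sum(c1[e] * c2[e] for e in c1)

def predict_successor(state, action, examples):
    """Predict the effect of `action` in `state` from the training
    transition whose precondition state is most kernel-similar."""
    candidates = [ex for ex in examples if ex[1] == action]
    best = max(candidates, key=lambda ex: kernel(state, ex[0]))
    return best[2]

# Observed transitions: (state, action, successor state).
examples = [
    (state_graph({("A", "B"), ("B", "table")}), "move(A, table)",
     state_graph({("A", "table"), ("B", "table")})),
    (state_graph({("C", "table")}), "noop",
     state_graph({("C", "table")})),
]

s = state_graph({("A", "B"), ("B", "table")})
print(predict_successor(s, "move(A, table)", examples))
```

In the paper's setting, SVMs trained with graph kernels replace this nearest-example lookup, and clustering over observed successors accounts for nondeterministic outcomes.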