Kernel-Based Reinforcement Learning

  • Authors:
  • Guanghua Hu; Yuqin Qiu; Liming Xiang

  • Affiliations:
  • School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P.R. China; School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, P.R. China; Department of Management Sciences, City University of Hong Kong, Kowloon, Hong Kong

  • Venue:
  • ICIC'06: Proceedings of the 2006 International Conference on Intelligent Computing - Volume Part I
  • Year:
  • 2006

Abstract

We consider the problem of approximating the cost-to-go functions in reinforcement learning. By mapping the state implicitly into a feature space, we run a simple algorithm in the feature space that corresponds to a complex algorithm in the original state space. Two kernel-based reinforcement learning algorithms, the ε-insensitive kernel-based reinforcement learning (ε-KRL) and the least-squares kernel-based reinforcement learning (LS-KRL), are proposed. An example shows that the proposed methods can deal effectively with the reinforcement learning problem without having to explore many states.
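The abstract does not spell out the algorithms, but the core idea it names — representing the cost-to-go function implicitly in a kernel feature space and fitting it by least squares — can be illustrated with a generic kernel least-squares regression. The sketch below is an assumption-laden illustration, not the paper's LS-KRL: the Gaussian kernel, the regularization parameter `lam`, and the toy 1-D states and cost-to-go targets are all hypothetical choices made for the example.

```python
import math

def gaussian_kernel(s, t, sigma=0.5):
    """Hypothetical Gaussian (RBF) kernel on scalar states."""
    return math.exp(-((s - t) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_kernel_value(states, targets, lam=1e-6, sigma=0.5):
    """Least-squares fit of a cost-to-go function in kernel feature space.

    Solves (K + lam*I) alpha = targets, then represents the value as
    V(s) = sum_i alpha_i * k(s_i, s) -- linear regression in the feature
    space, performed implicitly through the kernel.
    """
    n = len(states)
    K = [[gaussian_kernel(states[i], states[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, targets)
    def v(s):
        return sum(a * gaussian_kernel(si, s, sigma) for a, si in zip(alpha, states))
    return v

# Toy example: sampled states with cost-to-go equal to distance from a goal at 0.
states = [0.0, 0.5, 1.0, 1.5, 2.0]
targets = [abs(s) for s in states]
v = fit_kernel_value(states, targets)
```

With only a handful of sampled states, the fitted `v` also generalizes to unvisited states through the kernel, which is the property the abstract highlights: the method does not need to explore many states explicitly.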