Monte-Carlo tree search for Bayesian reinforcement learning

  • Authors: Ngo Anh Vien; Wolfgang Ertel; Viet-Hung Dang; Taechoong Chung

  • Affiliations: Institute of Artificial Intelligence, Ravensburg-Weingarten University of Applied Sciences, 88250 Weingarten, Germany (Vien, Ertel); Research and Development Center for Science and Technology, Duy Tan University, Da Nang, Vietnam (Dang); Department of Computer Engineering, Kyung Hee University, Seoul, South Korea (Chung)

  • Venue: Applied Intelligence
  • Year: 2013

Abstract

Bayesian model-based reinforcement learning can be formulated as a partially observable Markov decision process (POMDP), providing a principled framework for optimally balancing exploitation and exploration; a POMDP solver can then be applied to the resulting problem. If the prior distribution over the environment's dynamics is a product of Dirichlet distributions, the POMDP's optimal value function can be represented by a set of multivariate polynomials. Unfortunately, the size of these polynomials grows exponentially with the problem horizon. In this paper, we examine the use of an online Monte-Carlo tree search (MCTS) algorithm for large POMDPs to solve the Bayesian reinforcement learning problem, and show that such an algorithm successfully finds a near-optimal policy. In addition, we examine the use of a parameter-tying method to keep the model search space small, and propose a nested mixture of tied models to increase the robustness of the method when the prior information does not allow the structure of the tied models to be specified exactly. Experiments show that the proposed methods substantially improve the scalability of current Bayesian reinforcement learning methods.
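
For readers unfamiliar with the planning loop the abstract refers to, the sketch below illustrates one common way to combine MCTS with a product-of-Dirichlets posterior: each simulation samples a transition model from the Dirichlet counts at the root and then runs a UCB-guided search through that sampled model (root sampling, in the style of POMCP/BAMCP-like planners). This is a minimal illustration, not the paper's exact algorithm; the toy state/action sizes, the known reward_fn, and all function names are assumptions made for this example.

```python
import math
import random
from collections import defaultdict

# Minimal sketch (NOT the paper's exact algorithm): MCTS planning for
# Bayesian RL where the unknown transition model has a product-of-Dirichlets
# posterior. Each simulation samples one model from the posterior at the
# root, then searches that model with UCB1 action selection.

N_STATES, N_ACTIONS = 3, 2      # assumed toy problem size
GAMMA, UCB_C = 0.95, 1.0        # discount factor, exploration constant


def sample_model(counts):
    """Draw a transition model T[(s, a)] ~ Dirichlet(counts[(s, a)])."""
    model = {}
    for s in range(N_STATES):
        for a in range(N_ACTIONS):
            draws = [random.gammavariate(c, 1.0) for c in counts[(s, a)]]
            total = sum(draws)
            model[(s, a)] = [d / total for d in draws]
    return model


def step(model, s, a, reward_fn):
    """Sample the next state from the sampled model and compute the reward."""
    s_next = random.choices(range(N_STATES), weights=model[(s, a)])[0]
    return s_next, reward_fn(s, a, s_next)


def rollout(model, s, depth, reward_fn):
    """Uniform-random rollout used to evaluate newly expanded nodes."""
    if depth == 0:
        return 0.0
    a = random.randrange(N_ACTIONS)
    s_next, r = step(model, s, a, reward_fn)
    return r + GAMMA * rollout(model, s_next, depth - 1, reward_fn)


class Node:
    def __init__(self):
        self.n = defaultdict(int)      # visit counts per action
        self.q = defaultdict(float)    # running value estimates per action


def simulate(tree, model, s, depth, reward_fn, history=()):
    """One MCTS simulation through the sampled model; returns the return."""
    if depth == 0:
        return 0.0
    node = tree.setdefault(history, Node())
    total = sum(node.n.values())
    # UCB1 action selection: untried actions get an effectively infinite bonus.
    a = max(range(N_ACTIONS),
            key=lambda a_: node.q[a_]
            + UCB_C * math.sqrt(math.log(total + 1) / (node.n[a_] + 1e-9)))
    s_next, r = step(model, s, a, reward_fn)
    if node.n[a] == 0:
        ret = r + GAMMA * rollout(model, s_next, depth - 1, reward_fn)
    else:
        ret = r + GAMMA * simulate(tree, model, s_next, depth - 1,
                                   reward_fn, history + ((a, s_next),))
    node.n[a] += 1
    node.q[a] += (ret - node.q[a]) / node.n[a]
    return ret


def plan(counts, s, reward_fn, n_sims=500, depth=15):
    """Pick an action at state s by MCTS with posterior root sampling."""
    tree = {}
    for _ in range(n_sims):
        model = sample_model(counts)   # one posterior (Dirichlet) sample
        simulate(tree, model, s, depth, reward_fn)
    return max(range(N_ACTIONS), key=lambda a: tree[()].q[a])
```

In an online loop, counts would start at the Dirichlet prior (e.g. all ones) and be incremented with each observed transition before the next call to plan; the parameter-tying idea in the abstract then corresponds to sharing one Dirichlet among state-action pairs believed to behave identically.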