A study on hierarchical modular reinforcement learning for multi-agent pursuit problem based on relative coordinate states

  • Authors:
  • Tatsuya Wada;Takuya Okawa;Toshihiko Watanabe

  • Affiliations:
  • Tatsuya Wada and Takuya Okawa: Graduate School of Engineering, Osaka Electro-Communication University, Neyagawa, Osaka, Japan; Toshihiko Watanabe: Faculty of Engineering, Osaka Electro-Communication University, Neyagawa, Osaka, Japan

  • Venue:
  • CIRA'09: Proceedings of the 8th IEEE International Conference on Computational Intelligence in Robotics and Automation
  • Year:
  • 2009

Abstract

To realize intelligent agents such as autonomous mobile robots, reinforcement learning is one of the essential techniques for behavior control systems. However, when applying reinforcement learning to realistically sized problems, the "curse of dimensionality" in the partitioning of sensory states must be avoided while maintaining computational efficiency. In multi-agent reinforcement learning, this problem arises from the high dimensionality of each agent's state. We apply hierarchical modular reinforcement learning to cope with the dimensionality problem and to decompose the task. In this study, we focus on investigating the learning performance of agents that represent input states in a relative coordinate system. We show the effectiveness of the proposed learning algorithm, based on relative state expressions with a limited view, through numerical experiments on the pursuit problem.
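To illustrate the core idea of the abstract, the sketch below shows tabular Q-learning for a single hunter on a toroidal grid where the state is the prey's position *relative* to the hunter, clipped to a limited view radius. This is a minimal illustrative sketch, not the paper's hierarchical modular algorithm; all names, grid size, view radius, and learning parameters are assumptions. The point it demonstrates is that a relative, view-limited encoding keeps the state space at (2·VIEW+1)² cells regardless of grid size, which is how such representations curb the curse of dimensionality.

```python
# Illustrative sketch (assumed parameters, NOT the paper's implementation):
# Q-learning on relative-coordinate states with a limited view.
import random

GRID = 7          # side length of the toroidal pursuit grid (assumption)
VIEW = 2          # limited view radius: offsets clipped to [-VIEW, VIEW]
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # four grid moves

def relative_state(hunter, prey):
    """Prey offset from hunter on the torus, clipped to the limited view."""
    def wrap(d):
        d %= GRID
        if d > GRID // 2:      # take the shorter way around the torus
            d -= GRID
        return max(-VIEW, min(VIEW, d))
    return (wrap(prey[0] - hunter[0]), wrap(prey[1] - hunter[1]))

def step(pos, move):
    """Apply a move with toroidal wrap-around."""
    return ((pos[0] + move[0]) % GRID, (pos[1] + move[1]) % GRID)

def train(episodes=1000, alpha=0.3, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning; Q maps (relative state, action index) -> value."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        hunter = (0, 0)
        prey = (rng.randrange(GRID), rng.randrange(GRID))
        for _ in range(50):
            s = relative_state(hunter, prey)
            if s == (0, 0):    # capture: hunter and prey share a cell
                break
            # epsilon-greedy action selection over the small relative state
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)),
                          key=lambda i: Q.get((s, i), 0.0)))
            hunter = step(hunter, ACTIONS[a])
            prey = step(prey, rng.choice(ACTIONS))  # prey moves randomly
            s2 = relative_state(hunter, prey)
            r = 1.0 if s2 == (0, 0) else -0.01     # capture reward, step cost
            target = r + gamma * max(Q.get((s2, i), 0.0)
                                     for i in range(len(ACTIONS)))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
    return Q
```

Because every state is a clipped offset, the Q-table here can never exceed 25 states × 4 actions, whereas an absolute encoding would grow with the grid (and with the number of agents in the full multi-agent setting the paper targets).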