Improving search efficiency in the action space of an instance-based reinforcement learning technique for multi-robot systems

  • Authors:
  • Toshiyuki Yasuda; Kazuhiro Ohkura

  • Affiliations:
  • Graduate School of Engineering, Hiroshima University, Higashi-Hiroshima, Hiroshima, Japan (both authors)

  • Venue:
  • ECAL'07: Proceedings of the 9th European Conference on Advances in Artificial Life
  • Year:
  • 2007

Abstract

We have developed a new reinforcement learning technique called Bayesian-discrimination-function-based reinforcement learning (BRL). BRL is unique in that it not only learns in the predefined state and action spaces but also simultaneously changes their segmentation. BRL has proven to be more effective than other standard RL algorithms in dealing with multi-robot system (MRS) problems, where the learning environment is inherently dynamic. This paper introduces an extended form of BRL that improves its learning efficiency. Instead of generating a random action when a robot encounters an unknown situation, the extended BRL generates an action by linear interpolation among the stored rules that are most similar to the current sensory input. In both physical experiments and computer simulations, the extended BRL showed higher search efficiency than the standard BRL, as illustrated by the sketch below.
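
The following is a minimal, hypothetical sketch of the action-generation step described in the abstract, not the paper's actual implementation. It assumes each rule stores a sensory-input vector `v` and an action vector `a`, and uses an inverse-distance similarity as a stand-in for the Bayesian discrimination functions used by BRL; the neighbour count `k` and `similarity_threshold` are illustrative parameters.

```python
import numpy as np

def generate_action(state, rules, similarity_threshold=0.5, k=3):
    """Sketch of the extended-BRL fallback for unknown situations.

    `rules` is a non-empty list of dicts {'v': sensory vector, 'a': action vector}.
    Standard BRL would return a random action when no rule matches; the
    extension instead interpolates among the most similar rules.
    """
    # Placeholder similarity: inverse Euclidean distance to each rule's input.
    sims = np.array([1.0 / (1.0 + np.linalg.norm(state - r["v"])) for r in rules])

    matched = sims >= similarity_threshold
    if matched.any():
        # Known situation: fire the best-matching rule's action as usual.
        return rules[int(np.argmax(sims))]["a"]

    # Unknown situation: linearly interpolate the actions of the k most
    # similar rules, weighted by their (normalized) similarities.
    k = min(k, len(rules))
    top = np.argsort(sims)[-k:]
    weights = sims[top] / sims[top].sum()
    return sum(w * rules[i]["a"] for w, i in zip(weights, top))
```

Under these assumptions, the interpolated action stays close to behaviours that worked in similar sensory contexts, which is why the extension can search the action space more efficiently than drawing a purely random action.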