Q-Learning with FCMAC in multi-agent cooperation

  • Authors:
  • Kao-Shing Hwang, Yu-Jen Chen, Tzung-Feng Lin

  • Affiliations:
  • Department of Electrical Engineering, National Chung Cheng University, Chia-Yi, Taiwan (all authors)

  • Venue:
  • ISNN'06: Proceedings of the Third International Conference on Advances in Neural Networks - Volume Part I
  • Year:
  • 2006

Abstract

In general, Q-learning requires well-defined quantized state and action spaces to obtain an optimal policy for a given task. This makes it difficult to apply to real robot tasks, where coarse quantization of continuous state and action spaces degrades the performance of the learned behavior. In this paper, we propose a fuzzy-based CMAC method that computes the contribution of each neighboring state to generate a continuous action value, making motion smooth and effective. A momentum term that speeds up training is also designed and implemented in a multi-agent system for real robot applications.
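As a rough illustration of the general idea (not the authors' actual algorithm), the sketch below shows Q-learning with a CMAC-style approximator in which each tile's contribution is graded by a triangular fuzzy membership rather than being all-or-nothing, and the weight update carries a momentum term. The 1-D state, the class name FuzzyCMACQ, and all hyper-parameters are assumptions chosen for brevity; a real robot task would use multi-dimensional states and tuned values.

```python
import numpy as np

class FuzzyCMACQ:
    """Sketch: Q-learning with a fuzzy-weighted CMAC and momentum update.

    Assumed setup: a scalar state s in [0, 1], discrete actions, several
    offset tilings. Each tiling activates one tile whose contribution is
    graded by a triangular membership, so neighboring states share value
    estimates smoothly instead of switching abruptly at tile borders.
    """

    def __init__(self, n_tilings=8, n_tiles=16, n_actions=3,
                 alpha=0.1, gamma=0.95, beta=0.5):
        self.n_tilings, self.n_tiles, self.n_actions = n_tilings, n_tiles, n_actions
        self.alpha, self.gamma, self.beta = alpha, gamma, beta  # beta: momentum factor
        self.w = np.zeros((n_tilings, n_tiles, n_actions))      # CMAC weights
        self.v = np.zeros_like(self.w)                          # momentum buffer

    def _features(self, s):
        """Return (tiling, tile index, membership) for each tiling."""
        feats = []
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * self.n_tiles)        # shift each tiling slightly
            pos = (s + offset) * self.n_tiles
            idx = min(int(pos), self.n_tiles - 1)
            mu = max(0.0, 1.0 - abs(pos - (idx + 0.5)))          # triangular membership
            feats.append((t, idx, mu))
        return feats

    def q_values(self, s):
        """Membership-weighted average of active tile weights."""
        feats = self._features(s)
        total = sum(mu for _, _, mu in feats) + 1e-8
        q = np.zeros(self.n_actions)
        for t, idx, mu in feats:
            q += mu * self.w[t, idx]
        return q / total

    def update(self, s, a, r, s_next, done):
        """One TD(0) step; the weight change is smoothed by momentum."""
        target = r if done else r + self.gamma * self.q_values(s_next).max()
        td_err = target - self.q_values(s)[a]
        feats = self._features(s)
        total = sum(mu for _, _, mu in feats) + 1e-8
        for t, idx, mu in feats:
            grad = self.alpha * td_err * (mu / total)
            self.v[t, idx, a] = self.beta * self.v[t, idx, a] + grad
            self.w[t, idx, a] += self.v[t, idx, a]
```

In use, each agent in a multi-agent setting could hold its own FuzzyCMACQ instance and call update() on every observed transition; the fuzzy weighting yields a smoothly varying Q surface, from which a continuous action can be derived (e.g. by interpolating between the best discrete actions), while the momentum buffer accelerates convergence when successive TD errors point in the same direction.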