Cooperative strategy based on adaptive Q-learning for robot soccer systems

  • Authors:
  • Kao-Shing Hwang; Shun-Wen Tan; Chien-Cheng Chen

  • Affiliations:
  • Electrical Engineering Department, National Chung Cheng University, Chia-Yi, Taiwan

  • Venue:
  • IEEE Transactions on Fuzzy Systems

  • Year:
  • 2004

Abstract

The objective of this paper is to develop a self-learning cooperative strategy for robot soccer systems. The strategy enables robots to cooperate and coordinate with each other to achieve the objectives of offense and defense. Through the mechanism of learning, the robots can learn from both successful and failed experiences and use them to gradually improve their performance. The cooperative strategy is built on a hierarchical architecture. The first layer decides the team composition, that is, how many defenders and sidekicks should be deployed according to the positional states. The second layer performs role assignment based on the decision from the first layer; we develop two algorithms for assigning the roles of attacker, defenders, and sidekicks. The last layer is the behavior layer, in which the robots execute behavior commands and tasks according to their roles: the attacker chases the ball and attacks, the sidekicks seek good supporting positions, and the defenders prevent the opponent from scoring. The robots' roles are not fixed; they can dynamically exchange roles with one another. For learning, we develop an adaptive Q-learning method modified from traditional Q-learning. A simple ant experiment shows that the adaptive Q-learning is more effective than traditional techniques, and the method is also successfully applied to learning the cooperative strategy.
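The abstract does not spell out how the adaptive method differs from standard Q-learning, so the following is only a minimal sketch, assuming "adaptive" means a learning rate that decays with each state-action pair's visit count. The class name `AdaptiveQLearner`, the soccer-flavored action names, and all parameter values are hypothetical illustrations, not the paper's implementation.

```python
import random
from collections import defaultdict

class AdaptiveQLearner:
    """Tabular Q-learning with a per-state-action adaptive learning rate.

    Assumption: 'adaptive' is modeled here as a step size that decays
    with the visit count of each (state, action) pair. The paper's
    actual modification may differ.
    """

    def __init__(self, actions, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)     # Q-values keyed by (state, action)
        self.visits = defaultdict(int)  # visit counts driving the adaptive step size
        self.actions = actions
        self.gamma = gamma              # discount factor
        self.epsilon = epsilon          # exploration rate

    def choose(self, state):
        # Epsilon-greedy action selection.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Adaptive step size: alpha = 1 / n(s, a), shrinking as the
        # pair is visited more often.
        self.visits[(state, action)] += 1
        alpha = 1.0 / self.visits[(state, action)]
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += alpha * (td_target - self.q[(state, action)])

# Hypothetical usage with role-level actions from the behavior layer:
learner = AdaptiveQLearner(actions=["attack", "support", "defend"])
a = learner.choose(state="ball_near_own_goal")
learner.update("ball_near_own_goal", a, reward=-1.0,
               next_state="ball_midfield")
```

Under this decay schedule the step size satisfies the usual stochastic-approximation conditions for tabular Q-learning convergence, which is one plausible reading of "adaptive"; the paper itself should be consulted for the exact mechanism.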