Soccer without intelligence

  • Authors:
  • Tekin Mericli;H. Levent Akin

  • Affiliations:
  • Department of Computer Engineering, Boğaziçi University, 34342, Bebek, Istanbul, Turkey;Department of Computer Engineering, Boğaziçi University, 34342, Bebek, Istanbul, Turkey

  • Venue:
  • ROBIO '09 Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics
  • Year:
  • 2009


Abstract

Robot soccer is an excellent testbed for exploring innovative ideas and testing algorithms in multi-agent systems (MAS) research. A soccer team must play in an organized manner to score more goals than its opponent, which requires well-developed individual and collaborative skills such as dribbling, positioning, and passing. However, none of these skills needs to be perfect, and none requires a highly complicated model to give satisfactory results. This paper proposes an approach inspired by ants, modeled as Braitenberg vehicles: the skills are implemented as combinations of very primitive behaviors, without explicit communication or role assignment mechanisms, and reinforcement learning is applied to construct the optimal state-action mapping. Experiments demonstrate that a team of robots can indeed learn to play soccer reasonably well without complex environment models or state representations. After very short training sessions, the team began outscoring opponents that use complex behavior code, and because of its very simple state representation, it could adapt to opponent strategies during the games.
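
The abstract describes the approach only at a high level. As a rough illustration of the two ingredients it names, the sketch below pairs a Braitenberg-style mapping from primitive actions to wheel speeds with tabular Q-learning over a deliberately coarse state. It is not the authors' implementation: the state discretization (ball zone and nearest-opponent zone), the action set, the reward, and all names and parameters here are hypothetical choices made for the example.

```python
import random

# Hypothetical coarse state: (ball_zone, opponent_zone), each a rough
# direction relative to the robot. Simple by design, as in the paper's premise.
ZONES = ["left", "front", "right"]
ACTIONS = ["turn_left", "go_forward", "turn_right"]

def braitenberg_wheels(action):
    """Map a primitive action to (left_wheel, right_wheel) speeds in the
    spirit of a Braitenberg vehicle: exciting one wheel more than the
    other turns the robot toward the stimulus."""
    return {"turn_left":  (0.2, 1.0),
            "go_forward": (1.0, 1.0),
            "turn_right": (1.0, 0.2)}[action]

# Tabular Q-values over the tiny state space; parameters are illustrative.
Q = {(b, o): {a: 0.0 for a in ACTIONS} for b in ZONES for o in ZONES}
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose_action(state):
    """Epsilon-greedy selection over the learned state-action values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)          # explore
    return max(Q[state], key=Q[state].get)     # exploit

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update toward the greedy successor value."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# Example step: ball ahead, opponent to the left; reward would come from
# the environment (e.g., progress toward the ball or a goal scored).
s = ("front", "left")
a = choose_action(s)
left_speed, right_speed = braitenberg_wheels(a)
update(s, a, 1.0, ("front", "front"))
```

With a state space this small, the Q-table stays tiny and converges after little training, which is consistent with the abstract's claim that a very simple state representation lets the team adapt quickly during games.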