Analysis of an evolutionary reinforcement learning method in a multiagent domain

  • Authors: Jan Hendrik Metzen, Mark Edgington, Yohannes Kassahun, Frank Kirchner
  • Affiliations: German Research Center for Artificial Intelligence (DFKI), Bremen, Germany; University of Bremen, Bremen, Germany; University of Bremen, Bremen, Germany; German Research Center for Artificial Intelligence (DFKI), Bremen, Germany
  • Venue: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS) - Volume 1
  • Year: 2008

Abstract

Many multiagent problems comprise subtasks that can be treated as reinforcement learning (RL) problems. Alongside classical temporal difference methods, evolutionary algorithms are among the most promising approaches for such RL problems. The relative performance of these approaches in particular subdomains of the general RL problem (e.g., multiagent learning) remains an open question. Besides theoretical analysis, benchmarks are among the most important tools for comparing RL methods within a given problem domain. A recently proposed multiagent RL benchmark is RoboCup Keepaway. It is one of the most challenging multiagent learning problems because its state space is continuous and high-dimensional, and both the sensors and the actuators are noisy. In this paper, we analyze the performance of the neuroevolutionary approach Evolutionary Acquisition of Neural Topologies (EANT) on the Keepaway benchmark, and we compare the results obtained with EANT to those of other algorithms tested on the same benchmark.
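As a rough illustration of the neuroevolutionary setup the abstract describes, the sketch below evolves the weights of a simple fixed-topology policy for a Keepaway-like task. It is only a sketch, not the paper's method: EANT itself evolves network topologies as well as weights, whereas this loop mutates the weights of a fixed linear policy. The 13-dimensional state and 3 discrete actions match the standard 3 vs. 2 Keepaway configuration, but the `env` interface (`reset()` returning a state vector, `step(action)` returning the next state and a done flag) is a hypothetical stand-in, not the actual benchmark API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard 3 vs. 2 Keepaway: the keeper with the ball observes a continuous
# 13-dimensional state and chooses among 3 macro-actions (hold or pass).
STATE_DIM, N_ACTIONS = 13, 3

def policy_action(weights, state):
    """Linear policy: score each action against the state, pick the best."""
    scores = weights.reshape(N_ACTIONS, STATE_DIM) @ state
    return int(np.argmax(scores))

def evaluate(weights, env, episodes=5):
    """Fitness = mean episode length, i.e. how long the keepers hold the ball.

    `env` is an assumed interface: reset() -> state, step(action) -> (state, done).
    The real benchmark runs on the RoboCup soccer server instead.
    """
    total = 0
    for _ in range(episodes):
        state, done, steps = env.reset(), False, 0
        while not done:
            state, done = env.step(policy_action(weights, state))
            steps += 1
        total += steps
    return total / episodes

def evolve(env, pop_size=50, generations=100, sigma=0.1):
    """Truncation-selection loop: keep the best half, refill by Gaussian mutation."""
    pop = [rng.normal(0.0, 1.0, STATE_DIM * N_ACTIONS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: evaluate(w, env), reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [p + rng.normal(0.0, sigma, p.shape) for p in parents]
    return pop[0]
```

Fitness here is average episode duration, the same performance measure commonly reported for Keepaway; EANT additionally applies structural mutations that add neurons and connections, which this weight-only sketch omits for brevity.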