Evolving keepaway soccer players through task decomposition

  • Authors:
  • Shimon Whiteson; Nate Kohl; Risto Miikkulainen; Peter Stone

  • Affiliations:
  • Department of Computer Sciences, The University of Texas at Austin, Austin, Texas (all authors)

  • Venue:
  • GECCO'03: Proceedings of the 2003 International Conference on Genetic and Evolutionary Computation, Part I
  • Year:
  • 2003

Abstract

In some complex control tasks, learning a direct mapping from an agent's sensors to its actuators is very difficult. For such tasks, decomposing the problem into more manageable components can make learning feasible. In this paper, we provide a task decomposition, in the form of a decision tree, for one such task. We investigate two different methods of learning the resulting subtasks. The first approach, layered learning, trains each component sequentially in its own training environment, aggressively constraining the search. The second approach, coevolution, learns all the subtasks simultaneously from the same experiences and puts few restrictions on the learning algorithm. We empirically compare these two training methodologies using neuro-evolution, a machine learning algorithm that evolves neural networks. Our experiments, conducted in the domain of simulated robotic soccer keepaway, indicate that neuro-evolution can learn effective behaviors and that the less constrained coevolutionary approach outperforms the sequential approach. These results provide new evidence of coevolution's utility and suggest that solution spaces should not be over-constrained when supplementing the learning of complex tasks with human knowledge.
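To make the abstract's central mechanism concrete, below is a minimal Python sketch of weight-mutating neuro-evolution: a population of flat weight vectors for a fixed feedforward network, evaluated, truncation-selected, and mutated each generation. Everything in it is a placeholder assumption rather than the authors' setup: the network sizes, the elitism-and-mutation scheme, and especially the fitness function, which here scores a toy regression so the sketch runs end to end (in the paper, fitness would come from episodes of simulated keepaway, e.g. how long the keepers maintain possession, and the actual neuro-evolution method and soccer-server interface are more sophisticated).

```python
import math
import random

N_IN, N_HIDDEN, N_OUT = 4, 6, 2   # hypothetical sizes, not the paper's

def new_genome():
    """A genome: flat weight list for a one-hidden-layer network."""
    n = N_HIDDEN * (N_IN + 1) + N_OUT * (N_HIDDEN + 1)
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def activate(genome, inputs):
    """Forward pass: tanh hidden units, linear outputs."""
    w = iter(genome)
    hidden = [math.tanh(sum(next(w) * x for x in inputs) + next(w))
              for _ in range(N_HIDDEN)]
    return [sum(next(w) * h for h in hidden) + next(w)
            for _ in range(N_OUT)]

def fitness(genome):
    """Placeholder evaluation. In the keepaway task this would run
    simulated episodes and reward long possession; here we score a
    toy regression purely so the loop is runnable."""
    error = 0.0
    for _ in range(20):
        x = [random.uniform(-1, 1) for _ in range(N_IN)]
        target = [x[0] - x[1], x[2] * x[3]]   # arbitrary toy targets
        out = activate(genome, x)
        error += sum((o - t) ** 2 for o, t in zip(out, target))
    return -error  # higher is better

def evolve(pop_size=50, generations=30, elite=5, sigma=0.2):
    """Evaluate, keep the elite, refill by mutating random elites."""
    pop = [new_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        pop = parents + [
            [w + random.gauss(0.0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - elite)
        ]
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best fitness:", fitness(best))
```

Under the paper's decomposition, a loop like this would be run once per subtask network in the sequential (layered-learning) case, or with all subtask networks evaluated together in shared keepaway episodes in the coevolutionary case.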