Evolving strategies using the nearest-neighbor rule and a genetic algorithm

  • Authors: Matthias Fuchs
  • Affiliations: Universität Kaiserslautern, Kaiserslautern, Germany
  • Venue: GECCO '96: Proceedings of the 1st annual conference on Genetic and evolutionary computation
  • Year: 1996

Abstract

We propose a method for evolving strategies based on the nearest-neighbor rule. A strategy corresponds to an action selection function that chooses an action depending on the current state of a (reactive) control problem to be solved. Given a set I of state/action pairs, our approach determines the action to be taken in a state S' as the action A of the pair (S, A) ∈ I whose state S is "nearest" to S' among all states occurring in I. The set I is evolved using a genetic algorithm. Our approach surpasses "standard" condition/action rule-based approaches with respect to the ways in which the state space can be subdivided. This is achieved without confronting the genetic algorithm with search problems that are too hard (as happens with neural networks), as preliminary experiments with variations of a pursuit (predator--prey) game demonstrate.
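To make the selection rule concrete, the following Python sketch shows how an action could be picked from a set I of state/action pairs. The real-valued state encoding, the Euclidean distance, and names such as select_action and move_east are illustrative assumptions; the paper does not prescribe a particular representation or metric.

```python
import math

# Minimal sketch of the nearest-neighbor action selection rule.
# Assumption: states are real-valued vectors and "nearest" means
# Euclidean distance; both choices are illustrative, not from the paper.

def euclidean(s1, s2):
    """Euclidean distance between two state vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(s1, s2)))

def select_action(I, s_query, distance=euclidean):
    """Return the action A of the pair (S, A) in I whose state S is
    nearest to the query state s_query."""
    _, action = min(I, key=lambda pair: distance(pair[0], s_query))
    return action

# Hypothetical example: a tiny strategy I for a 2-D pursuit-like task,
# where the state is the (dx, dy) offset from predator to prey.
if __name__ == "__main__":
    I = [
        ((1.0, 0.0), "move_east"),
        ((-1.0, 0.0), "move_west"),
        ((0.0, 1.0), "move_north"),
        ((0.0, -1.0), "move_south"),
    ]
    print(select_action(I, (0.7, 0.4)))  # nearest stored state is (1.0, 0.0) -> "move_east"
```

In this reading, a genetic algorithm would operate on candidate sets I (e.g., mutating stored states or swapping actions) and score each candidate by running the induced strategy on the control problem; the details of that encoding and fitness function are left to the paper itself.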