Evolving neural networks to play checkers without relying on expert knowledge

  • Authors:
  • K. Chellapilla; D. B. Fogel

  • Affiliations:
  • Dept. of Electrical & Computer Engineering, University of California, San Diego, La Jolla, CA

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1999


Abstract

An experiment was conducted in which neural networks compete for survival in an evolving population based on their ability to play checkers. More specifically, multilayer feedforward neural networks were used to evaluate alternative board positions and games were played using a minimax search strategy. At each generation, the extant neural networks were paired in competitions and selection was used to eliminate those that performed poorly relative to other networks. Offspring neural networks were created from the survivors using random variation of all weights and bias terms. After a series of 250 generations, the best-evolved neural network was played against human opponents in a series of 90 games on an Internet website. The neural network was able to defeat two expert-level players and played to a draw against a master. The final rating of the neural network placed it in the “Class A” category using a standard rating system. Of particular importance in the design of the experiment was the fact that no features beyond the piece differential were given to the neural networks as a priori knowledge. The process of evolution was able to extract all of the additional information required to play at this level of competency. It accomplished this based almost solely on the feedback offered in the final aggregated outcome of each game played (i.e., win, lose, or draw). This procedure stands in marked contrast to the typical artifice of explicitly injecting expert knowledge into a game-playing program.
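
The sketch below illustrates the coevolutionary procedure the abstract describes: a population of network evaluators, pairwise competitions scored only by game outcome, selection of the better half, and offspring produced by Gaussian variation of all weights. Everything here is an illustrative assumption, not the authors' implementation: the population size, mutation step, and sampling scheme are placeholders, `Network.evaluate` is a linear stand-in for the real multilayer feedforward evaluator, and `play_game` substitutes a single scored position for a full minimax-searched checkers game.

```python
import random

POP_SIZE = 15        # surviving parents per generation (assumed size)
OPPONENTS = 5        # opponents sampled per network per generation (assumed)
GENERATIONS = 250    # as reported in the abstract
N_FEATURES = 32      # toy stand-in for the 32 playable checkers squares

class Network:
    """Toy stand-in for the multilayer feedforward board evaluator."""
    def __init__(self, weights=None):
        self.weights = weights if weights is not None else \
            [random.gauss(0.0, 0.2) for _ in range(N_FEATURES)]

    def evaluate(self, board):
        # The real evaluator is a multilayer feedforward network whose only
        # explicit input feature is the piece differential; a linear scorer
        # is used here purely to keep the sketch runnable.
        return sum(w * x for w, x in zip(self.weights, board))

def minimax(board, depth, maximizing, net, moves_fn, apply_fn):
    # Depth-limited minimax in which leaf positions are scored by the
    # evolved network instead of a hand-crafted evaluation function.
    # `moves_fn` and `apply_fn` stand in for real checkers move generation.
    moves = moves_fn(board, maximizing)
    if depth == 0 or not moves:
        return net.evaluate(board)
    values = (minimax(apply_fn(board, m), depth - 1, not maximizing,
                      net, moves_fn, apply_fn) for m in moves)
    return max(values) if maximizing else min(values)

def mutate(parent):
    # Offspring arise from random Gaussian variation of all weights/biases.
    return Network([w + random.gauss(0.0, 0.05) for w in parent.weights])

def play_game(a, b):
    # Placeholder for a full minimax-searched checkers game; here a single
    # random "position" is scored by both sides and the higher score wins.
    board = [random.uniform(-1.0, 1.0) for _ in range(N_FEATURES)]
    sa, sb = a.evaluate(board), b.evaluate(board)
    return 0 if sa == sb else (1 if sa > sb else -1)

def evolve():
    population = [Network() for _ in range(2 * POP_SIZE)]
    for _ in range(GENERATIONS):
        # Each network meets several randomly drawn opponents; the only
        # feedback is the aggregated win/draw/loss outcome of each game.
        scores = {id(n): 0 for n in population}
        for net in population:
            for opp in random.sample(population, OPPONENTS):
                if opp is not net:
                    scores[id(net)] += play_game(net, opp)
        # Selection eliminates the weaker half; each survivor produces one
        # mutated offspring to restore the population size.
        population.sort(key=lambda n: scores[id(n)], reverse=True)
        parents = population[:POP_SIZE]
        population = parents + [mutate(p) for p in parents]
    return population[0]

if __name__ == "__main__":
    best = evolve()
```

Note how no expert knowledge enters the loop: networks are never told which moves were good, only how whole games ended, which is the contrast with knowledge-injected game programs that the abstract emphasizes.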