Variable Hidden Layer Sizing in Elman Recurrent Neuro-Evolution

  • Authors:
  • Khosrow Kaikhah; Ryan Garlick

  • Affiliations:
  • Department of Computer Science, Southwest Texas State University, San Marcos, TX 78666. kk02@swt.edu
  • Department of Computer Science, Southwest Texas State University, San Marcos, TX 78666. ryang@seas.smu.edu

  • Venue:
  • Applied Intelligence
  • Year:
  • 2000

Abstract

The relationship between the size of the hidden layer in a neural network and performance in a particular domain is currently an open research issue. Often, the number of neurons in the hidden layer is chosen empirically and subsequently fixed for the training of the network. Fixing the size of the hidden layer limits an inherent strength of neural networks: the ability to generalize experiences from one situation to another, to adapt to new situations, and to overcome the "brittleness" often associated with traditional artificial intelligence techniques. This paper proposes an evolutionary algorithm that searches for network sizes along with the weights and connections between neurons. This research builds upon the neuro-evolution tool SANE, developed by David Moriarty. SANE evolves neurons and networks simultaneously; it is modified in this work in several ways, including varying the hidden layer size and evolving Elman recurrent neural networks for non-Markovian tasks. These modifications allow the evolution of better-performing and more consistent networks, and do so faster and more efficiently. SANE, modified with variable network sizing, learns to play modified casino blackjack and develops a successful card-counting strategy. The contributions of this research are performance increases of up to 8.3% over fixed hidden layer size models, a reduction in hidden layer processing time of almost 10%, and a faster, more autonomous approach to scaling neuro-evolutionary techniques to larger and more difficult problems.
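
To make the approach described above concrete, the following is a minimal sketch of an Elman recurrent network whose hidden layer size can be changed by an evolutionary mutation. This is not the authors' SANE implementation; the class and function names, the weight initialization, and the grow/shrink operator are illustrative assumptions.

```python
# Sketch: an Elman network with an evolvable hidden layer size.
# NOT the paper's code; all names and choices here are assumptions.
import numpy as np

class ElmanNet:
    def __init__(self, n_in, n_hidden, n_out, rng):
        # The hidden layer receives the current input plus the context
        # units, which hold the previous hidden activation (Elman recurrence).
        self.W_ih = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_hh = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.W_ho = rng.normal(0.0, 0.5, (n_out, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, x):
        # The context units carry state across time steps, giving the
        # network the memory needed for non-Markovian tasks.
        h = np.tanh(self.W_ih @ x + self.W_hh @ self.context)
        self.context = h
        return np.tanh(self.W_ho @ h)

def resize_hidden(net, new_size, rng):
    # Mutation that grows or shrinks the hidden layer: build a network of
    # the new size and copy the overlapping weights, so layer size is
    # searched by evolution along with the weights themselves.
    n_in, n_out = net.W_ih.shape[1], net.W_ho.shape[0]
    child = ElmanNet(n_in, new_size, n_out, rng)
    k = min(new_size, net.context.size)
    child.W_ih[:k, :] = net.W_ih[:k, :]
    child.W_hh[:k, :k] = net.W_hh[:k, :k]
    child.W_ho[:, :k] = net.W_ho[:, :k]
    return child

# Usage: evaluate a 4-input, 2-output network, then mutate its size.
rng = np.random.default_rng(0)
net = ElmanNet(4, 8, 2, rng)
out = net.step(np.ones(4))           # one recurrent time step
bigger = resize_hidden(net, 9, rng)  # add one hidden neuron
```

In the actual system, size-changing operators would act within SANE's neuron-level population rather than on whole networks as shown here, but the sketch captures the core idea of evolving hidden layer size and weights together.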