Scaling, machine learning, and genetic neural nets

  • Authors:
  • Eric Mjolsness; David H. Sharp; Bradley K. Alpert

  • Affiliations:
  • Department of Computer Science, Yale University, New Haven, Connecticut 06520 USA; Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 USA; Department of Computer Science, Yale University, New Haven, Connecticut 06520 USA

  • Venue:
  • Advances in Applied Mathematics
  • Year:
  • 1989

Abstract

We consider neural nets whose connections are defined by growth rules taking the form of recursion relations. These are called genetic neural nets. Learning in these nets is achieved by simulated annealing optimization of the net over the space of recursion relation parameters. The method is tested on a previously defined continuous coding problem. Results of control experiments are presented so that the success of the method can be judged. Genetic neural nets implement the ideas of scaling and parsimony, features which allow generalization in machine learning.
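The core idea above can be sketched in code: a recursion relation with a few parameters "grows" a large weight matrix from a small seed, and simulated annealing searches the parameter space to fit a target. This is an illustrative toy only; the Kronecker-style growth rule, the objective, and the cooling schedule below are assumptions, not the paper's actual formulation.

```python
import math
import random

random.seed(0)

def grow_weights(params, depth=3):
    """Hypothetical growth rule: expand a 1x1 seed into a 2^depth x 2^depth
    weight matrix via the recursion W -> [[a*W, b*W], [c*W, d*W]]
    (a Kronecker-product-style rule; illustrative, not the paper's rule)."""
    a, b, c, d = params
    W = [[1.0]]
    for _ in range(depth):
        n = len(W)
        new = [[0.0] * (2 * n) for _ in range(2 * n)]
        for i in range(n):
            for j in range(n):
                new[i][j] = a * W[i][j]
                new[i][j + n] = b * W[i][j]
                new[i + n][j] = c * W[i][j]
                new[i + n][j + n] = d * W[i][j]
        W = new
    return W

def energy(params, target):
    """Toy objective: squared error between the grown weights and a target net."""
    W = grow_weights(params)
    return sum((W[i][j] - target[i][j]) ** 2
               for i in range(len(W)) for j in range(len(W)))

def anneal(target, steps=2000, t0=1.0):
    """Simulated annealing over the 4-dimensional recursion-parameter space."""
    params = [random.uniform(-1, 1) for _ in range(4)]
    e = energy(params, target)
    best, best_e = list(params), e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-4          # linear cooling schedule
        cand = [p + random.gauss(0, 0.1) for p in params]
        ce = energy(cand, target)
        # Metropolis rule: always accept improvements, sometimes accept worsening
        if ce < e or random.random() < math.exp((e - ce) / t):
            params, e = cand, ce
            if e < best_e:
                best, best_e = list(params), e
    return best, best_e

# Try to recover the parameters of a known growth rule from its grown net.
true_params = (0.5, -0.3, 0.2, 0.7)
target = grow_weights(true_params)
found, err = anneal(target)
print(err)
```

Note that the search runs over 4 recursion parameters, not the 64 individual weights of the grown 8x8 net; this parsimony is what the abstract argues enables scaling and generalization.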