Relating number of processing elements in a sparse distributed memory model to learning rate and generalization

  • Authors:
  • Richard M. Evans; Alvin J. Surkan

  • Affiliations:
  • Performance and Task Division, Defense Training and Performance Data Center, Orlando, Florida; Department of Computer Science, University of Nebraska, Lincoln, Nebraska

  • Venue:
  • APL '91: Proceedings of the International Conference on APL
  • Year:
  • 1991


Abstract

A simulated neural network was developed in APL on an 80386 microcomputer. The network was configured to associate task descriptions with 10 categories of military occupational specialties. The number of processing elements in the network was varied. Increasing the number of processing elements increased the speed of learning in the simulation. Generalization did not differ significantly across the numbers of processing elements tested, except at one intermediate number, where generalization was about 15 percent higher. Analysis of the performance of a trained network suggests that low-level natural language understanding is one form of text processing that promises to become an important application area for neural model-based computing.
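The abstract does not include the authors' APL implementation, but the sparse distributed memory model named in the title follows Kanerva's general scheme: a set of randomly addressed hard locations (the "processing elements"), each activated when a query falls within a Hamming radius of its address. The following is a minimal Python sketch of that general scheme under stated assumptions; the class, parameter names (`m` for the number of processing elements, `radius` for the activation radius), and parameter values are hypothetical, not taken from the paper.

```python
import random


class SparseDistributedMemory:
    """Minimal sketch of a Kanerva-style sparse distributed memory.

    Each of the m hard locations plays the role of one processing
    element; varying m is the knob the abstract describes varying.
    """

    def __init__(self, m, n, radius, seed=0):
        rng = random.Random(seed)
        self.n = n
        self.radius = radius
        # m random hard addresses, each a binary vector of length n.
        self.addresses = [[rng.randint(0, 1) for _ in range(n)]
                          for _ in range(m)]
        # One signed counter per bit per location.
        self.counters = [[0] * n for _ in range(m)]

    def _active(self, addr):
        # Locations whose Hamming distance to addr is within the radius.
        return [i for i, a in enumerate(self.addresses)
                if sum(x != y for x, y in zip(a, addr)) <= self.radius]

    def write(self, addr, data):
        # Increment counters for 1-bits, decrement for 0-bits,
        # in every activated location.
        for i in self._active(addr):
            for j, bit in enumerate(data):
                self.counters[i][j] += 1 if bit else -1

    def read(self, addr):
        # Sum counters over activated locations and threshold at zero.
        sums = [0] * self.n
        for i in self._active(addr):
            for j in range(self.n):
                sums[j] += self.counters[i][j]
        return [1 if s > 0 else 0 for s in sums]


sdm = SparseDistributedMemory(m=200, n=32, radius=13, seed=1)
pattern = [1, 0] * 16
sdm.write(pattern, pattern)   # autoassociative store
recovered = sdm.read(pattern)
```

Because a write touches every location within the radius, increasing `m` raises the number of locations updated per training example, which is one plausible mechanism for the faster learning the abstract reports with more processing elements.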