1994 Special Issue: Design and evolution of modular neural network architectures

  • Authors:
  • Bart L. M. Happel; Jacob M. J. Murre

  • Venue:
  • Neural Networks - Special issue: models of neurodynamics and behavior
  • Year:
  • 1994

Abstract

To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We suggest a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks. These results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist of several modules: standard subnetworks that serve as higher-order units with a distinct structure and function. The simulations rely on a particular network module called the categorizing and learning module. This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, computational learning theory, and hardware implementation, a framework for the design of such modular networks is presented. A number of small-scale simulation studies show how intermodule connectivity patterns implement "neural assemblies" that induce a particular category structure in the network. Learning and categorization improve because the induced categories are more compatible with the structure of the task domain. In addition to structural compatibility, two other design principles are proposed that underlie information processing in interactive activation networks: replication and recurrence. Because a general theory for relating network architectures to specific neural functions does not exist, we extend the biological metaphor of neural networks by applying genetic algorithms (a biocomputing method for search and optimization based on natural selection and evolution) to search for optimal modular network architectures for learning a visual categorization task. The best-performing network architectures appeared to reproduce some of the overall characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways. A potentially important result is that a genetically defined initial architecture can not only enhance learning and recognition performance, but also induce a system to better generalize its learned behaviour to instances never encountered before. This may explain why, for many vital learning tasks in organisms, only minimal exposure to relevant stimuli is necessary.
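
The evolutionary search described in the abstract can be illustrated with a small sketch. The code below is not the authors' implementation; it is a minimal, hypothetical example (the module count, toy task, fitness proxy, and genetic-algorithm parameters are all assumptions) of how a genetic algorithm might evolve a binary intermodule connectivity pattern for a simple categorization task.

```python
"""Minimal sketch (assumptions throughout): a genetic algorithm that searches
over intermodule connectivity patterns for a toy categorization task.
Genome: a binary module-to-module connectivity matrix.
Fitness: accuracy of a simple masked linear readout on synthetic data."""
import numpy as np

rng = np.random.default_rng(0)

N_MODULES = 6          # number of modules (illustrative assumption)
MODULE_SIZE = 8        # units per module (illustrative assumption)
POP_SIZE = 20
GENERATIONS = 30
MUTATION_RATE = 0.05

# Synthetic stand-in for a visual categorization task: two classes whose
# label depends only on the first module's inputs (assumption).
D = N_MODULES * MODULE_SIZE
X = rng.normal(size=(200, D))
y = (X[:, :MODULE_SIZE].sum(axis=1) > 0).astype(int)

def random_genome():
    """A genome is a binary intermodule connectivity matrix."""
    return rng.integers(0, 2, size=(N_MODULES, N_MODULES))

def fitness(genome):
    """Proxy fitness: accuracy of a least-squares readout that only sees
    modules receiving at least one incoming connection, minus a penalty
    for dense wiring (both choices are illustrative)."""
    active = genome.sum(axis=0) > 0            # modules with any input
    mask = np.repeat(active, MODULE_SIZE)
    if not mask.any():
        return 0.0
    Xm = X * mask                              # silence unconnected modules
    w, *_ = np.linalg.lstsq(Xm, y * 2 - 1, rcond=None)
    acc = ((Xm @ w > 0).astype(int) == y).mean()
    return acc - 0.01 * genome.sum()

def mutate(genome):
    """Flip each connectivity bit with a small probability."""
    flips = rng.random(genome.shape) < MUTATION_RATE
    return np.where(flips, 1 - genome, genome)

def crossover(a, b):
    """Uniform crossover on connectivity entries."""
    pick = rng.integers(0, 2, size=a.shape)
    return np.where(pick, a, b)

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(parents[rng.integers(len(parents))],
                                 parents[rng.integers(len(parents))]))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best fitness:", round(fitness(best), 3))
print("evolved connectivity:\n", best)
```

The sparsity penalty in the fitness function stands in for the hardware- and learning-efficiency arguments mentioned in the abstract; in the actual study, fitness would instead come from training and testing the modular CALM network defined by each genome.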