Original Contribution: CALM: Categorizing and learning module

  • Authors:
  • Jacob M. J. Murre; R. Hans Phaf; Gezinus Wolters

  • Venue:
  • Neural Networks
  • Year:
  • 1992

Abstract

A new procedure (CALM: Categorizing and Learning Module) is introduced for unsupervised learning in modular neural networks. The work described addresses a number of problems in connectionist modeling, such as lack of speed, lack of stability, inability to learn either with or without supervision, and the inability to both discriminate between and generalize over patterns. CALM is a single module that can be used to construct larger networks. A CALM module consists of pairs of excitatory Representation- and inhibitory Veto-nodes, and an Arousal-node. Because of the fixed internal wiring pattern of a module, the Arousal-node is sensitive to the novelty of the input pattern. The activation of the Arousal-node determines two psychologically motivated types of learning operating in the module: elaboration learning, which implies a high learning rate and the distribution of nonspecific, random activations in the module, and activation learning, which has only base rate learning without random activations. The learning rule used is a modified version of a rule described by Grossberg. The workings of CALM networks are illustrated in a number of simulations. It is shown that a CALM module quickly reaches a categorization, even with new patterns. Though categorization and learning are relatively fast compared to other models, CALM modules do not suffer from excessive plasticity. They are also shown to be capable of both discriminating between and generalizing over patterns. When presented with a pattern set exceeding the number of Representation-nodes, similar patterns are assigned to the same node. Multi-modular simulations showed that with supervised learning an average of 1.6 presentations sufficed to learn the EXOR function. Moreover, an unsupervised learning version of the McClelland and Rumelhart model successfully simulated a word superiority effect. 
It is concluded that the incorporation of psychologically and biologically plausible structural and functional characteristics, like modularity, unsupervised (competitive) learning, and a novelty dependent learning rate, may contribute to solving some of the problems often encountered in connectionist modeling.
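The interplay the abstract describes between categorization and a novelty-dependent learning rate can be caricatured in a few lines of code. The sketch below is only a loose illustration of the idea, not the paper's model: it collapses the Representation/Veto/Arousal dynamics into a single winner-take-all step, uses an ad hoc arousal proxy (small margin between the two most active nodes means a novel input), and applies a Grossberg-style "move weights toward the input" update whose rate grows with arousal. All parameter values, names, and simplifications are assumptions for illustration.

```python
import numpy as np

class ToyCALM:
    """Caricature of a CALM-like module: winner-take-all categorization
    with a novelty-dependent learning rate. Not the published equations."""

    def __init__(self, n_in, n_r, seed=0):
        rng = np.random.default_rng(seed)
        # One weight vector per Representation-node (random initialization).
        self.W = rng.uniform(0.2, 0.8, size=(n_r, n_in))
        self.base_rate = 0.05   # "activation learning": slow base rate
        self.elab_gain = 0.5    # extra rate under high arousal ("elaboration learning")

    def categorize(self, x):
        # Simplified R/V competition: the most strongly driven node wins.
        act = self.W @ x
        winner = int(np.argmax(act))
        # Arousal proxy: a narrow margin between the top two activations
        # stands in for the Arousal-node's sensitivity to novel patterns.
        top = np.sort(act)[::-1]
        arousal = 1.0 - (top[0] - top[1]) / (top[0] + 1e-9)
        return winner, arousal

    def learn(self, x):
        winner, arousal = self.categorize(x)
        # Novelty-dependent rate: fast for novel inputs, base rate otherwise.
        mu = self.base_rate + self.elab_gain * arousal
        # Grossberg-style instar update: pull the winner's weights toward x.
        self.W[winner] += mu * (x - self.W[winner])
        return winner, arousal
```

Repeated presentation of a pattern drives the winner's weights toward it, widening the activation margin, so the arousal proxy (and with it the learning rate) drops for familiar patterns while staying high for novel ones.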