Neurocomputing: foundations of research.
Review of neural networks for speech recognition. Neural Computation.
Modular construction of time-delay neural networks for speech recognition. Neural Computation.
Learning internal representations by error propagation. Parallel distributed processing: explorations in the microstructure of cognition, vol. 1.
Parallel distributed processing: explorations in the microstructure of cognition, vol. 2: psychological and biological models.
Certain aspects of the anatomy and physiology of the cerebral cortex. Parallel distributed processing.
The computer and the brain. Journal of Cognitive Neuroscience.
1994 Special Issue: Design and evolution of modular neural network architectures. Neural Networks (special issue: models of neurodynamics and behavior).
Evolutionary learning of modular neural networks with genetic programming. Applied Intelligence.
Self-organizing map network as an interactive clustering tool: an application to group technology. Decision Support Systems.
Visual affect recognition.
Self-organizing neural networks for signal recognition. ICANN'06: Proceedings of the 16th International Conference on Artificial Neural Networks, Part I.
Neural coding strategies and mechanisms of competition. Cognitive Systems Research.
A new procedure (CALM: Categorizing and Learning Module) is introduced for unsupervised learning in modular neural networks. The work described addresses a number of problems in connectionist modeling, such as lack of speed, lack of stability, the inability to learn either with or without supervision, and the inability to both discriminate between and generalize over patterns. CALM is a single module that can be used to construct larger networks. A CALM module consists of pairs of excitatory Representation-nodes and inhibitory Veto-nodes, together with an Arousal-node. Because of the fixed internal wiring pattern of a module, the Arousal-node is sensitive to the novelty of the input pattern.

The activation of the Arousal-node determines two psychologically motivated types of learning operating in the module: elaboration learning, which implies a high learning rate and the distribution of nonspecific, random activations in the module, and activation learning, which has only base-rate learning without random activations. The learning rule used is a modified version of a rule described by Grossberg.

The workings of CALM networks are illustrated in a number of simulations. It is shown that a CALM module quickly reaches a categorization, even with new patterns. Though categorization and learning are relatively fast compared to other models, CALM modules do not suffer from excessive plasticity. They are also shown to be capable of both discriminating between and generalizing over patterns. When presented with a pattern set exceeding the number of Representation-nodes, similar patterns are assigned to the same node. Multi-modular simulations showed that with supervised learning an average of 1.6 presentations sufficed to learn the EXOR function. Moreover, an unsupervised learning version of the McClelland and Rumelhart model successfully simulated a word superiority effect.
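The mechanism described above can be sketched in code. The following is a minimal, hypothetical illustration of a CALM-style module, based only on the abstract: paired excitatory Representation-nodes and inhibitory Veto-nodes compete, an Arousal-node tracks how unresolved the competition is (novelty), and the arousal level switches the module between fast "elaboration" learning with nonspecific random activations and slow base-rate "activation" learning. The equations are simplified stand-ins, not the authors' modified Grossberg rule; all names and constants here are illustrative assumptions.

```python
# Illustrative CALM-style module (simplified stand-in, not the original equations).
import numpy as np

rng = np.random.default_rng(0)

class CALMModule:
    def __init__(self, n_inputs, size, base_rate=0.05, high_rate=0.5):
        # Learnable input -> Representation-node weights; the internal
        # Representation/Veto/Arousal wiring below is fixed, as in the module.
        self.W = rng.uniform(0.0, 0.1, (size, n_inputs))
        self.base_rate = base_rate  # activation learning (low arousal)
        self.high_rate = high_rate  # elaboration learning (high arousal)

    def present(self, x):
        r = self.W @ x                   # excitation of Representation-nodes
        inhib = r.sum() - r              # each Veto-node suppresses the rival pairs
        act = np.clip(r - 0.5 * inhib, 0.0, None)
        total = act.sum()
        # Arousal-node: high while competition is unresolved (novel input),
        # near zero once a single Representation-node dominates.
        arousal = 1.0 if total == 0.0 else 1.0 - act.max() / total
        winner = int(np.argmax(act if total > 0.0 else r))
        # Novelty-dependent learning rate; elaboration learning also injects
        # nonspecific random activation into the winning node's weights.
        lr = self.base_rate + arousal * (self.high_rate - self.base_rate)
        noise = arousal * rng.normal(0.0, 0.01, self.W.shape[1])
        self.W[winner] += lr * (x - self.W[winner]) + noise
        return winner, arousal
```

Repeatedly presenting the same pattern moves the winner's weights toward it, so the competition resolves faster each time and arousal (and with it the learning rate and the random activations) decays toward the base rate, which is one way to read the claim that the modules categorize quickly without excessive plasticity.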
It is concluded that the incorporation of psychologically and biologically plausible structural and functional characteristics, like modularity, unsupervised (competitive) learning, and a novelty dependent learning rate, may contribute to solving some of the problems often encountered in connectionist modeling.