This paper presents and compares results for three types of connectionist networks on perceptual learning tasks: [A] multi-layered converging networks of neuron-like units, with each unit connected to a small randomly chosen subset of units in the adjacent layers, that learn by reweighting their links; [B] networks of neuron-like units structured into successively larger modules under brain-like topological constraints (such as layered converging-diverging hierarchies and local receptive fields) that learn by reweighting their links; [C] networks with brain-like structures that learn by generation-discovery, which involves the growth of new links and the recruiting of new units in addition to the reweighting of links. Preliminary empirical results from simulations of these networks on perceptual recognition tasks show significant improvements in learning from the use of brain-like structures (e.g., local receptive fields, global convergence) over networks that lack such structure; the use of generation in addition to reweighting of links yields further improvements.
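The three network types can be sketched in a few lines of NumPy. The abstract does not specify implementation details, so everything below is an illustrative assumption: the mask-based encoding of topology, the logistic units, the delta-rule update, and all function names are hypothetical, not the authors' method. Type [A] fixes a small random fan-in per unit, type [B] fixes a local receptive field, and type [C] is suggested by a `grow_link` step that recruits a new connection in addition to reweighting existing ones.

```python
import numpy as np

def random_sparse_mask(n_in, n_out, fan_in, rng):
    """Type [A] topology: each unit connects to a small random
    subset of units in the adjacent layer (illustrative)."""
    mask = np.zeros((n_out, n_in))
    for j in range(n_out):
        idx = rng.choice(n_in, size=fan_in, replace=False)
        mask[j, idx] = 1.0
    return mask

def local_receptive_mask(n_in, n_out, width):
    """Type [B] topology: each unit sees a contiguous local window,
    giving a converging hierarchy (window clipped at the right edge)."""
    mask = np.zeros((n_out, n_in))
    stride = n_in // n_out
    for j in range(n_out):
        start = j * stride
        mask[j, start:start + width] = 1.0
    return mask

class MaskedLayer:
    """A layer that learns only by reweighting its existing links;
    the binary mask fixes the topology."""
    def __init__(self, mask, rng):
        self.mask = mask
        self.w = rng.normal(scale=0.1, size=mask.shape) * mask

    def forward(self, x):
        self.x = x
        self.a = 1.0 / (1.0 + np.exp(-(self.w @ x)))  # logistic units
        return self.a

    def backward(self, grad_a, lr=0.5):
        # Delta-rule update restricted to existing links by the mask.
        delta = grad_a * self.a * (1.0 - self.a)
        grad_x = self.w.T @ delta
        self.w -= lr * np.outer(delta, self.x) * self.mask
        return grad_x

    def grow_link(self, j, i, rng):
        """Type [C] flavor: generation recruits a new link (in addition
        to reweighting), here by opening one masked-out connection."""
        if self.mask[j, i] == 0:
            self.mask[j, i] = 1.0
            self.w[j, i] = rng.normal(scale=0.1)
```

A usage sketch: build a type-[A] layer with `random_sparse_mask(8, 4, 3, rng)`, train it by repeated `forward`/`backward` calls, and compare against a layer built from `local_receptive_mask` to mimic the structured-versus-unstructured comparison the abstract reports.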