Generation, local receptive fields and global convergence improve perceptual learning in connectionist networks

  • Authors:
  • Vasant Honavar; Leonard Uhr

  • Affiliations:
  • Computer Sciences Department, University of Wisconsin-Madison (both authors)

  • Venue:
  • IJCAI'89 Proceedings of the 11th International Joint Conference on Artificial Intelligence - Volume 1
  • Year:
  • 1989

Abstract

This paper presents and compares results for three types of connectionist networks on perceptual learning tasks:

[A] Multi-layered converging networks of neuron-like units, with each unit connected to a small, randomly chosen subset of units in the adjacent layers, that learn by re-weighting their links;

[B] Networks of neuron-like units structured into successively larger modules under brain-like topological constraints (such as layered converging-diverging hierarchies and local receptive fields) that learn by re-weighting their links;

[C] Networks with brain-like structures that learn by generation-discovery, which involves the growth of links and the recruiting of units in addition to the re-weighting of links.

Preliminary empirical results from simulations of these networks on perceptual recognition tasks show significant improvements in learning for networks with brain-like structures (e.g., local receptive fields, global convergence) over networks that lack such structure; using generation in addition to re-weighting of links yields further improvements.
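To make the contrast among the three network types concrete, the following is a minimal NumPy sketch, not the paper's method: the 0/1 connectivity masks stand in for the [A] (random subset) and [B] (local receptive field, converging) topologies, a masked tanh/gradient step stands in for learning by re-weighting of links, and recruit_unit is a hypothetical stand-in for the generation step of [C]. All function names, the learning rule, and the parameters (field_size, fan_in, lr) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def local_receptive_mask(n_in, n_out, field_size):
    # Type [B]/[C] connectivity: each unit in the next layer sees only a
    # contiguous local window of the previous layer; with n_out < n_in
    # the layers converge toward the output, as in brain-like hierarchies.
    mask = np.zeros((n_out, n_in))
    centers = np.linspace(0, n_in - field_size, n_out).astype(int)
    for j, c in enumerate(centers):
        mask[j, c:c + field_size] = 1.0
    return mask

def random_mask(n_in, n_out, fan_in):
    # Type [A] connectivity: each unit links to a small randomly chosen
    # subset of units in the adjacent layer.
    mask = np.zeros((n_out, n_in))
    for j in range(n_out):
        mask[j, rng.choice(n_in, size=fan_in, replace=False)] = 1.0
    return mask

class MaskedLayer:
    # A layer that learns only by re-weighting its existing links;
    # the fixed 0/1 mask freezes the topology.
    def __init__(self, mask):
        self.mask = mask
        self.W = rng.normal(0.0, 0.1, mask.shape) * mask

    def forward(self, x):
        self.x, self.y = x, np.tanh((self.W * self.mask) @ x)
        return self.y

    def reweight(self, grad_y, lr=0.1):
        # Masked gradient step: only existing links change strength.
        grad_pre = grad_y * (1.0 - self.y ** 2)           # tanh derivative
        self.W -= lr * np.outer(grad_pre, self.x) * self.mask
        return (self.W * self.mask).T @ grad_pre          # grad w.r.t. input

def recruit_unit(layer, field_indices):
    # Hypothetical generation step for type [C]: grow a new unit whose
    # links cover a chosen receptive field, then continue re-weighting.
    new_mask = np.zeros((1, layer.mask.shape[1]))
    new_mask[0, field_indices] = 1.0
    layer.mask = np.vstack([layer.mask, new_mask])
    layer.W = np.vstack([layer.W,
                         rng.normal(0.0, 0.1, new_mask.shape) * new_mask])

Under this framing, comparing learning curves for layers built with random_mask versus local_receptive_mask, with and without occasional recruit_unit calls, mirrors the paper's [A]/[B]/[C] comparison at toy scale; the paper's actual network equations and generation-discovery procedure are not reproduced here.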