Segmenting state into entities and its implication for learning

  • Authors:
  • James Henderson

  • Affiliations:
  • University of Exeter, Exeter, United Kingdom

  • Venue:
  • Emergent neural computational architectures based on neuroscience
  • Year:
  • 2001

Abstract

Temporal synchrony of activation spikes has been proposed as the representational code by which the brain segments perceptual patterns into multiple visual objects or multiple auditory sources. In this chapter we look at the implications of this neuroscientific proposal for learning and computation in artificial neural networks. Previous work has defined an artificial neural network model, Simple Synchrony Networks, which uses temporal synchrony to represent and learn about multiple entities. These networks can store arbitrary amounts of information in their internal state by segmenting their representation of state into arbitrarily many entities. They can also generalize what they learn to larger internal states by learning generalizations about individual entities. These claims are empirically demonstrated through results on training a Simple Synchrony Network to do syntactic parsing of real natural language sentences.
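
The following is a minimal sketch, not Henderson's implementation, of the core idea the abstract describes: the internal state is segmented into one hidden vector per entity, and all entities are updated with the same shared weights, so the network's capacity grows with the number of entities while what is learned about one entity generalizes to any number of them. All class names, function names, and dimensions here are illustrative assumptions.

```python
import numpy as np

class EntitySegmentedState:
    """State segmented into per-entity hidden vectors with shared weights."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One set of weights shared across every entity: the source of
        # generalization over internal states of arbitrary size.
        self.W_in = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
        self.W_rec = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        self.hidden_dim = hidden_dim
        self.entities = []  # one hidden vector per tracked entity

    def add_entity(self):
        """Introduce a new entity; state capacity grows without new weights."""
        self.entities.append(np.zeros(self.hidden_dim))
        return len(self.entities) - 1

    def step(self, x):
        """Update every entity's hidden vector from the same input x using
        the shared weights (loosely analogous to processing each entity in
        its own synchrony phase)."""
        self.entities = [
            np.tanh(self.W_in @ x + self.W_rec @ h) for h in self.entities
        ]

# Usage: the same weights handle two entities or twenty.
state = EntitySegmentedState(input_dim=4, hidden_dim=8)
state.add_entity()
state.add_entity()
state.step(np.ones(4))
print(len(state.entities), state.entities[0].shape)  # 2 (8,)
```

In this sketch the per-entity vectors play the role that distinct synchrony phases play in the biological proposal; the design point it illustrates is that nothing in the learned parameters depends on how many entities are currently being tracked.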