Learning of position-invariant object representation across attention shifts

  • Authors:
  • Muhua Li; James J. Clark

  • Affiliations:
  • Centre for Intelligent Machines, McGill University, Montreal, Quebec, Canada (both authors)

  • Venue:
  • WAPCV'04: Proceedings of the Second International Workshop on Attention and Performance in Computational Vision
  • Year:
  • 2004


Abstract

Selective attention shifts can help neural networks learn invariance. We describe a method that produces a network whose output is invariant to the changes in visual input caused by attention shifts. Training of the network is controlled by signals associated with attention shifting. A temporal perceptual stability constraint drives the output of the network toward remaining constant across temporal sequences of attention shifts. We use a four-layer neural network model to perform position-invariant extraction of local features and temporal integration of attention-shift-invariant representations of objects. We present results on both simulated data and real images to demonstrate that the network can acquire position invariance across a sequence of attention shifts.
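
The temporal perceptual stability constraint described in the abstract is closely related to trace-rule learning, in which a running average of past outputs serves as the teaching signal. Below is a minimal sketch of such a rule, assuming a single linear layer and using circular input shifts as stand-ins for attention shifts; the paper's four-layer architecture, attention-control signals, and exact update equations are not reproduced here, and all names and parameter values in the sketch are illustrative.

```python
import numpy as np

# Sketch of a temporal-stability (trace) learning rule: the network's
# output is pushed to stay constant across a sequence of attention-shifted
# views of the same object. Single linear layer, for illustration only.

rng = np.random.default_rng(0)

n_inputs, n_outputs = 64, 8
W = rng.normal(scale=0.1, size=(n_outputs, n_inputs))

eta_trace = 0.2   # how quickly the output trace tracks the current output
lr = 0.01         # learning rate

def train_on_shift_sequence(views, W):
    """views: list of input vectors, each an attention-shifted view of one
    object. A running trace of past outputs acts as the teaching signal,
    so weights move toward responses that persist across shifts."""
    y_trace = np.zeros(n_outputs)
    for x in views:
        y = W @ x
        # Hebbian update against the trace: strengthens output components
        # that agree with responses to earlier views (temporal stability).
        W += lr * np.outer(y_trace, x)
        y_trace = (1 - eta_trace) * y_trace + eta_trace * y
        # Keep weight rows bounded with a simple normalization.
        W /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-8)
    return W

# One simulated attention-shift sequence: the same random pattern seen at
# several positions (circular shifts stand in for attention shifts).
base = rng.normal(size=n_inputs)
views = [np.roll(base, s) for s in (0, 3, 7, 12)]
W = train_on_shift_sequence(views, W)
```

After repeated sequences of this kind, outputs that happen to respond similarly to several shifted views of an object are reinforced, which is one simple way the stability constraint can yield position-invariant responses.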