On the road to invariant object recognition: How cortical area V2 transforms absolute to relative disparity during 3D vision

  • Authors:
  • Stephen Grossberg; Karthik Srinivasan; Arash Yazdanbakhsh

  • Venue:
  • Neural Networks
  • Year:
  • 2011

Abstract

Invariant recognition of objects depends on a hierarchy of cortical stages that build invariance gradually. Binocular disparity computations are a key part of this transformation. Cortical area V1 computes absolute disparity, which is the horizontal difference in retinal location of an image in the left and right foveas. Many cells in cortical area V2 compute relative disparity, which is the difference in absolute disparity of two visible features. Relative, but not absolute, disparity is invariant under both a disparity change across a scene and vergence eye movements. A neural network model is introduced which predicts that shunting lateral inhibition of disparity-sensitive layer 4 cells in V2 causes a peak shift in cell responses that transforms absolute disparity from V1 into relative disparity in V2. This inhibitory circuit has previously been implicated in contrast gain control, divisive normalization, selection of perceptual groupings, and attentional focusing. The model hereby links relative disparity to other visual functions and thereby suggests new ways to test its mechanistic basis. Other brain circuits are reviewed wherein lateral inhibition causes a peak shift that influences behavioral responses.
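The mechanism named in the abstract, shunting (divisive) lateral inhibition that produces a peak shift in disparity tuning, can be illustrated with a minimal sketch. The Python snippet below is not the paper's model: it assumes a generic one-dimensional layer of disparity-tuned cells, Gaussian excitation centered on a target's absolute disparity, Gaussian off-surround inhibition driven by a reference feature at a different absolute disparity, and illustrative parameter values. At steady state the shunting equation dx/dt = -A x + (B - x)E - xI gives x = BE / (A + E + I); the reference-driven divisive term suppresses cells near the reference more strongly, shifting the response peak away from it, and in this simplified setting the size of that shift depends only on the difference between the two absolute disparities, i.e., the relative disparity.

```python
import numpy as np

# Minimal sketch (not the paper's full model): a 1D layer of disparity-tuned
# cells receives Gaussian excitation centered on a target's absolute disparity
# and shunting off-surround inhibition from a reference feature at another
# absolute disparity. All parameters below are illustrative assumptions.

disparities = np.linspace(-1.0, 1.0, 201)   # preferred absolute disparities of the cells
sigma_e, sigma_i = 0.15, 0.40               # excitatory / inhibitory kernel widths
A, B = 1.0, 1.0                             # passive decay rate, excitatory saturation level

def gaussian(center, sigma):
    """Gaussian input profile over the layer, centered on a feature's disparity."""
    return np.exp(-(disparities - center) ** 2 / (2 * sigma ** 2))

def steady_state_response(target_disp, reference_disp):
    """Steady state of the shunting equation dx/dt = -A*x + (B - x)*E - x*I."""
    E = gaussian(target_disp, sigma_e)       # on-center excitation from the target
    I = gaussian(reference_disp, sigma_i)    # off-surround inhibition from the reference
    return B * E / (A + E + I)               # divisive (shunting) normalization

# Same relative disparity (target minus reference = 0.2) at two vergence states:
for target, reference in [(0.2, 0.0), (0.5, 0.3)]:
    x = steady_state_response(target, reference)
    peak = disparities[np.argmax(x)]
    print(f"target={target:+.2f}, reference={reference:+.2f}, "
          f"response peak at {peak:+.2f}, shift {peak - target:+.2f}")
```

Running the sketch shows the peak displaced away from the reference feature, and the displacement (peak minus target disparity) is identical for the two input pairs because they share the same relative disparity; how this peak shift is realized by layer 4 circuitry in V2 is the subject of the paper itself.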