Modeling the Dynamics of Feature Binding During Object-Selective Attention

  • Authors: Albert L. Rothenstein; John K. Tsotsos

  • Affiliation: Centre for Vision Research and Dept. of Computer Science & Engineering, York University, Toronto, Canada (both authors)

  • Venue: Attention in Cognitive Systems. Theories and Systems from an Interdisciplinary Viewpoint
  • Year: 2008

Abstract

We present a biologically plausible computational model for solving the visual feature binding problem. The binding problem appears to arise from the distributed nature of visual processing in the primate brain and the gradual loss of spatial information along the processing hierarchy. The model relies on the reentrant connections that are ubiquitous in the primate brain to recover spatial information, thereby allowing features represented in different parts of the brain to be integrated into a unitary conscious percept. We demonstrate the ability of the Selective Tuning model of visual attention [1] to recover spatial information, and on this basis we propose a general solution to the feature binding problem. The solution is used to simulate the results of a recent neurophysiology study on the binding of motion and color. The example demonstrates that the method can handle the difficult case of transparency.
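The core mechanism described in the abstract, recovering spatial information by tracing activity back down a processing hierarchy through reentrant connections, can be illustrated with a toy sketch. The following Python/NumPy code is not the authors' implementation of Selective Tuning; the 2x2 average pooling, the number of pyramid levels, and the saliency values are illustrative assumptions, chosen only to show how a hierarchical winner-take-all descent can localize, in the input, a feature whose position has been blurred away by the feedforward pass.

```python
import numpy as np

def feedforward(image, n_levels=3):
    """Build a simple pyramid by 2x2 average pooling (a stand-in for feature computation).
    Spatial detail is progressively lost toward the top of the pyramid."""
    levels = [image]
    for _ in range(n_levels):
        prev = levels[-1]
        h, w = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2
        pooled = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        levels.append(pooled)
    return levels

def top_down_wta(levels):
    """Trace the global winner back down the pyramid: at each level, restrict the
    search to the 2x2 block of afferents that fed the currently selected unit and
    keep only the strongest one (a hierarchical winner-take-all descent)."""
    top = levels[-1]
    r, c = np.unravel_index(np.argmax(top), top.shape)
    for level in reversed(levels[:-1]):
        block = level[2 * r:2 * r + 2, 2 * c:2 * c + 2]
        dr, dc = np.unravel_index(np.argmax(block), block.shape)
        r, c = 2 * r + dr, 2 * c + dc
    return r, c  # input-image location of the attended feature

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((16, 16)) * 0.1   # weak background activity
    img[11, 5] = 1.0                   # a salient feature whose location is lost after pooling
    print(top_down_wta(feedforward(img)))  # -> (11, 5): spatial information recovered
```

In this reading, the feedforward pass plays the role of the converging visual hierarchy, and the top-down descent plays the role of the reentrant signals that re-localize the winning feature so that features computed in separate maps can be bound at a common location.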