Biologically motivated local contextual modulation improves low-level visual feature representations

  • Authors:
  • Xun Shi; Neil D. B. Bruce; John K. Tsotsos

  • Affiliations:
  • Department of Computer Science & Engineering and Centre for Vision Research, York University, Toronto, Ontario, Canada (all authors)

  • Venue:
  • ICIAR'12: Proceedings of the 9th International Conference on Image Analysis and Recognition, Part I
  • Year:
  • 2012

Abstract

This paper describes a biologically motivated local context operator that improves low-level visual feature representations. The computation borrows from the primate visual system the idea that different visual features are computed at different speeds, so that faster features can positively influence slower ones through early recurrent modulation. This modulation improves the visual representation by suppressing responses to background pixels, cluttered scene parts, and image noise. The proposed local contextual computation is fundamentally different from existing approaches, which take a "whole scene" perspective. Context-modulated visual feature representations are tested within a variety of existing saliency algorithms. Using real images and videos, we quantitatively compare the output saliency representations of modulated and non-modulated architectures against human experimental data. The results clearly demonstrate that local contextual modulation has a positive and consistent impact on saliency computation.
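
The abstract does not give the operator's computational details, so the sketch below is only one plausible illustration of the mechanism it describes: a quickly computed feature channel recurrently gating slower feature maps so that low-contrast background, clutter, and noise are suppressed. The function name local_contextual_modulation, the choice of local luminance contrast as the "fast" channel, the Gaussian smoothing scale, and the multiplicative gain field are all assumptions for illustration, not the authors' formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contextual_modulation(image, feature_maps, sigma=4.0):
    """Hypothetical sketch: gate slower feature maps with a fast local-contrast
    channel, suppressing responses in flat (background/noise) regions."""
    image = np.asarray(image, dtype=float)
    # "Fast" channel: local standard deviation as a luminance-contrast proxy,
    # assumed available before the slower feature computations finish.
    local_mean = gaussian_filter(image, sigma)
    local_var = gaussian_filter((image - local_mean) ** 2, sigma)
    contrast = np.sqrt(np.maximum(local_var, 0.0))
    # Normalize to [0, 1] so the contrast map acts as a multiplicative gain:
    # low-contrast regions are pushed toward zero response.
    gain = contrast / (contrast.max() + 1e-8)
    # "Slow" channels (e.g., oriented edge maps) are modulated by the fast one.
    return [fmap * gain for fmap in feature_maps]
```

In a saliency pipeline, the gated maps returned here would simply replace the raw feature maps fed to an existing saliency algorithm, which matches how the paper evaluates modulated versus non-modulated architectures.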