Neural mechanisms for form and motion detection and integration: biology meets machine vision

  • Authors: Heiko Neumann, Florian Raudies
  • Affiliations: Institute for Neural Information Processing, Ulm University, Germany; Center for Computational Neuroscience and Neural Technology, Boston University
  • Venue: ECCV'12 Proceedings of the 12th European Conference on Computer Vision - Volume Part I
  • Year: 2012


Abstract

General-purpose vision systems, whether biological or technical, rely on the robust processing of visual data from the sensor array. Such systems need to adapt their processing capabilities to varying conditions, have to deal with noise, and also need to learn task-relevant representations. Here, we describe models of early and mid-level vision. These models are motivated by the layered and hierarchical processing of form and motion information in primate cortex. Core cortical processing principles are: (i) bottom-up processing to build representations of increasing feature specificity and spatial scale, (ii) selective amplification of bottom-up signals by feedback that utilizes spatial, temporal, or task-related context information, and (iii) automatic gain control via center-surround competitive interaction and activity normalization. We use these principles as a framework to design and develop bio-inspired models for form and motion processing. Our models replicate experimental findings and, furthermore, provide a functional explanation for psychophysical and physiological data. In addition, our models successfully process natural images and videos. We show mechanisms that group items into boundary representations or that estimate visual motion from opaque or transparent surfaces. Our framework suggests a basis for designing bio-inspired models that solve typical computer vision problems and enables the development of neural technology for vision.
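Principles (ii) and (iii) in the abstract can be illustrated with a minimal numerical sketch. The following is not the authors' implementation: the function names, the uniform surround kernel, and the parameters (`sigma`, `gain`, `surround_size`) are illustrative assumptions chosen to show modulatory feedback (feedback amplifies but cannot create activity on its own) and divisive center-surround normalization.

```python
import numpy as np

def modulatory_feedback(bottom_up, feedback, gain=2.0):
    """Principle (ii), sketched: feedback selectively amplifies
    bottom-up signals but never creates activity where the
    bottom-up response is zero (multiplicative modulation)."""
    return bottom_up * (1.0 + gain * feedback)

def divisive_normalization(responses, sigma=0.1, surround_size=3):
    """Principle (iii), sketched: each unit's response is divided by
    the pooled activity of its local surround, implementing automatic
    gain control via center-surround competition."""
    pad = surround_size // 2
    padded = np.pad(responses, pad, mode="edge")
    pooled = np.zeros_like(responses, dtype=float)
    h, w = responses.shape
    for i in range(h):
        for j in range(w):
            # Mean activity in the local neighborhood around (i, j).
            pooled[i, j] = padded[i:i + surround_size,
                                  j:j + surround_size].mean()
    # Strong surround activity divisively suppresses the center;
    # sigma prevents division by zero and sets the semi-saturation point.
    return responses / (sigma + pooled)
```

For a uniform input every unit is suppressed equally, so relative contrasts are preserved while absolute gain adapts to overall activity; an isolated strong response in a quiet surround survives normalization far better than the same response embedded in strong background activity.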