Stability and sensitivity of bottom-up visual attention for dynamic scene analysis

  • Authors:
  • Yukie Nagai

  • Affiliations:
  • Research Institute for Cognition and Robotics, Bielefeld University, Bielefeld, Germany

  • Venue:
  • IROS '09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009


Abstract

This paper presents an architecture that extends bottom-up visual attention for dynamic scene analysis. In dynamic scenes, particularly when learning actions from demonstrations, robots have to focus stably on the relevant movement by disregarding surrounding noise, while remaining sensitive to new relevant movements that might occur in the surroundings. To meet these contradictory requirements of stability and sensitivity in attention, this paper introduces biologically inspired mechanisms for retinal filtering and stochastic attention selection. The former reduces the complexity of peripheral signals by filtering the input image: it enhances bottom-up saliency in the fovea while allowing only prominent signals to be detected in the periphery. The latter allows robots to shift attention to a peripheral location that is less salient than the current focus but still salient, and thus likely relevant to the demonstrated action. Integrating these mechanisms with the computation of bottom-up saliency enables robots to extract important action sequences from task demonstrations. Experiments with a simulated scene and a natural scene show that the proposed model outperforms comparative models.
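The two mechanisms described in the abstract can be illustrated with a minimal sketch: an eccentricity-dependent blur that simplifies peripheral signals while keeping the fovea sharp, and a softmax-based sampling rule that occasionally selects a less salient but still salient peripheral location. Note that this is an assumed, simplified rendering for illustration only; the function names, parameters (fovea_radius, max_sigma, temperature), and the specific blending and sampling formulas are not taken from the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def retinal_filter(image, fovea, fovea_radius=30.0, max_sigma=4.0):
        """Blend a sharp image with a blurred copy, with blur weight growing
        with eccentricity, so only prominent peripheral signals survive.
        `image` is assumed to be a 2-D grayscale array; `fovea` is (row, col)."""
        h, w = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        ecc = np.hypot(ys - fovea[0], xs - fovea[1])          # distance from fovea center
        weight = np.clip((ecc - fovea_radius) / ecc.max(), 0.0, 1.0)
        blurred = gaussian_filter(image.astype(float), sigma=max_sigma)
        return (1.0 - weight) * image + weight * blurred      # sharp fovea, smoothed periphery

    def stochastic_attention(saliency, temperature=0.1, rng=None):
        """Sample the next attention point from a softmax over the saliency map,
        so a less salient (but still salient) location can occasionally win."""
        rng = rng or np.random.default_rng()
        flat = saliency.ravel()
        p = np.exp((flat - flat.max()) / temperature)
        p /= p.sum()
        idx = rng.choice(flat.size, p=p)
        return np.unravel_index(idx, saliency.shape)

In this sketch, lowering the temperature makes selection nearly deterministic (maximum saliency wins, favoring stability), while raising it increases the chance of shifting to peripheral candidates (favoring sensitivity); the paper's actual selection mechanism may differ.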