Motion features to enhance scene segmentation in active visual attention

  • Authors:
  • María T. López;Antonio Fernández-Caballero;Miguel A. Fernández;José Mira;Ana E. Delgado

  • Affiliations:
  • Departamento de Informática, Escuela Politécnica Superior, Universidad de Castilla-La Mancha, 02071 Albacete, Spain (M.T. López, A. Fernández-Caballero, M.A. Fernández); Departamento de Inteligencia Artificial, Facultad de Ciencias and E.T.S.I. Informática, Universidad Nacional de Educación a Distancia, 28040 Madrid, Spain (J. Mira, A.E. Delgado)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2006

Abstract

This paper introduces a new computational model for active visual attention. The method extracts motion and shape features from video image sequences and integrates these features to segment the input scene. The aim of the paper is to highlight the importance of the motion features used in the algorithms for refining and/or enhancing scene segmentation in the proposed method. These motion parameters are estimated at each pixel of the input image by means of the accumulative computation method, using the so-called permanency memories. The paper shows examples of how the "motion presence", "module of the velocity" and "angle of the velocity" motion features, all obtained from the accumulative computation method, are used to adjust different scene segmentation outputs of this dynamic visual attention method.
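As a rough illustration of the charge/discharge idea behind permanency memories, the sketch below updates a per-pixel permanency value from two consecutive grey-level frames: pixels whose inter-frame difference exceeds a threshold are recharged to a maximum value, and all others are discharged by a fixed amount. The function name, parameter names and numeric values are assumptions for illustration only, not the authors' implementation; the full model also derives velocity module and angle from how the stored charge evolves over time, which is not shown here.

```python
import numpy as np

def update_permanency(perm, prev_frame, curr_frame,
                      diff_threshold=20, charge=255, discharge=32):
    """One illustrative step of accumulative computation on a permanency memory.

    perm, prev_frame, curr_frame: 2-D uint8 arrays of equal shape.
    All parameter names and values are hypothetical placeholders.
    """
    # Pixels where the grey-level difference between frames is large are
    # taken as "motion present" for this sketch.
    moving = np.abs(curr_frame.astype(int) - prev_frame.astype(int)) > diff_threshold

    # Recharge moving pixels to the maximum; discharge the rest, saturating at 0.
    updated = np.where(moving, charge,
                       np.maximum(perm.astype(int) - discharge, 0))

    # "Motion presence" can then be read as updated > 0; in the paper's model,
    # velocity module and angle would be estimated from the charge profile.
    return updated.astype(np.uint8)
```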