Lateral interaction in accumulative computation: motion-based grouping method

  • Authors:
  • Antonio Fernández-Caballero; Jose Mira; Ana E. Delgado; Miguel A. Fernández; Maria T. López

  • Affiliations:
  • Universidad de Castilla-La Mancha, E.P.S.A., Albacete, Spain; Universidad Nacional de Educación a Distancia, E.T.S.I. Informática, Madrid, Spain; Universidad Nacional de Educación a Distancia, E.T.S.I. Informática, Madrid, Spain; Universidad de Castilla-La Mancha, E.P.S.A., Albacete, Spain; Universidad de Castilla-La Mancha, E.P.S.A., Albacete, Spain

  • Venue:
  • BVAI'05 Proceedings of the First International Conference on Brain, Vision, and Artificial Intelligence
  • Year:
  • 2005

Abstract

Techniques from image processing and computer vision are essential for analysing the motion of non-rigid objects. Lateral interaction in accumulative computation (LIAC) for extracting non-rigid blobs and shapes from an image sequence has recently been presented, together with its application to segmentation from motion. In this paper we present an architecture consisting of five layers, based on spatial and temporal coherence in visual motion analysis, with application to visual surveillance. The LIAC method, used in the general task of "spatio-temporal coherent shape building", consists of (a) spatial coherence for brightness-based image segmentation, (b) temporal coherence for motion-based pixel charge computation, (c) spatial coherence for charge-based pixel charge computation, (d) spatial coherence for charge-based blob fusion, and (e) spatial coherence for charge-based shape fusion. In our case, temporal coherence (in accumulative computation) is understood as a measure of frame-to-frame motion persistency at a pixel, whilst spatial coherence (in lateral interaction) is a measure obtained by comparing a pixel's accumulated charge with that of its neighbouring pixels.
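
As an illustration of the two coherence ideas in the abstract, the following Python/NumPy fragment sketches a single LIAC-style step: brightness is quantised into grey-level bands (spatial coherence for segmentation), each pixel's charge rises while motion persists and is discharged otherwise (temporal coherence in accumulative computation), and charges are then compared with those of neighbouring pixels in the same band so that moving regions group together (spatial coherence through lateral interaction). The band count, charge bounds, charge/discharge steps and the neighbourhood rule are assumptions chosen for illustration, not the exact formulation of the paper.

```python
import numpy as np

CHARGE_MIN, CHARGE_MAX = 0, 255    # assumed charge bounds (illustrative)
CHARGE_UP, CHARGE_DOWN = 64, 16    # assumed charge / discharge steps (illustrative)

def segment_bands(frame, n_bands=8):
    """Spatial coherence (a): quantise brightness into grey-level bands."""
    return (frame.astype(np.int32) * n_bands) // 256

def accumulate_charge(charge, bands, prev_bands):
    """Temporal coherence (b): a pixel whose band changed between frames is
    charged; a pixel that stayed in the same band is gradually discharged."""
    moved = bands != prev_bands
    return np.where(moved,
                    np.minimum(charge + CHARGE_UP, CHARGE_MAX),
                    np.maximum(charge - CHARGE_DOWN, CHARGE_MIN))

def lateral_interaction(charge, bands):
    """Spatial coherence (c)-(e): compare each pixel's charge with its
    4-neighbours in the same grey-level band and keep the neighbourhood
    maximum, so charged pixels of a moving object group into blobs/shapes.
    (np.roll wraps around at the image border; border handling is ignored
    here for brevity.)"""
    out = charge.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted_charge = np.roll(charge, (dy, dx), axis=(0, 1))
        shifted_bands = np.roll(bands, (dy, dx), axis=(0, 1))
        same_band = bands == shifted_bands
        out = np.where(same_band, np.maximum(out, shifted_charge), out)
    return out

def liac_step(charge, prev_bands, frame):
    """One frame of the simplified pipeline sketched above."""
    bands = segment_bands(frame)
    charge = accumulate_charge(charge, bands, prev_bands)
    charge = lateral_interaction(charge, bands)
    return charge, bands
```

In use, `liac_step` would be called once per frame, carrying `charge` and the previous frame's bands forward; thresholding the resulting charge map yields candidate moving blobs for the surveillance application described above.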