Temporal video indexing based on early vision using Laguerre filters

  • Authors:
  • Carlos Joel Rivero-Moreno;Stéphane Bres

  • Affiliations:
  • Lab. d'InfoRmatique en Images et Systèmes d'information (LIRIS, UMR 5205 CNRS), INSA de Lyon, Bât. Jules Verne, Villeurbanne, France (both authors)

  • Venue:
  • CAIP'05 Proceedings of the 11th international conference on Computer Analysis of Images and Patterns
  • Year:
  • 2005

Abstract

Visual information in videos spans both spatial and temporal extents. However, most video indexing techniques work in the spatial extent: spatial features are extracted from individual frames, and temporal information is then introduced by tracking their evolution over time in order to construct motion vectors that serve as temporal features. In this paper we present a novel approach to video indexing based on features extracted primarily from the temporal extent. The approach relies on the Laguerre filters of the Laguerre transform, a polynomial transform whose filters preserve the causality constraint in the temporal domain and model the early vision stages (V1 and MT) of the visual system responsible for extracting and representing visual motion (temporal events). The motion pathway is constructed by subsampling spatially low-pass-filtered frames (spatial integration) and then decomposing the local temporal signal at each spatial position. Results are encouraging for video indexing and retrieval with our model.
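
A minimal sketch of this pipeline is given below, assuming causal temporal filters built from Laguerre polynomials weighted by a decaying exponential, a Gaussian spatial low-pass followed by subsampling, and projection of each position's temporal signal onto the filters. The window length, subsampling factor, and decay parameter `alpha` are illustrative choices, not the paper's exact parameterization.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import eval_laguerre

def laguerre_filters(n_orders, length, alpha=0.1):
    """Causal temporal filters: Laguerre polynomials L_n(alpha*t)
    weighted by a decaying exponential, sampled for t >= 0.
    (Illustrative parameterization, not the paper's exact one.)"""
    t = np.arange(length)
    window = np.exp(-alpha * t / 2.0)
    filters = []
    for n in range(n_orders):
        h = eval_laguerre(n, alpha * t) * window
        h /= np.linalg.norm(h) + 1e-12          # normalize to unit energy
        filters.append(h)
    return np.stack(filters)                     # shape (n_orders, length)

def temporal_features(frames, n_orders=4, length=16, sigma=2.0, sub=8):
    """Motion-pathway sketch: spatial low-pass + subsampling of each frame,
    then projection of each position's temporal signal onto Laguerre filters."""
    # Spatial integration: Gaussian low-pass, then subsample each frame.
    lowpass = np.stack([gaussian_filter(f.astype(float), sigma)[::sub, ::sub]
                        for f in frames])        # (T, H', W')
    T = lowpass.shape[0]
    filt = laguerre_filters(n_orders, length)    # (n_orders, length)
    feats = []
    # Slide a causal temporal window and project it onto each filter.
    for t0 in range(0, T - length + 1, length):
        block = lowpass[t0:t0 + length]          # (length, H', W')
        coeffs = np.tensordot(filt, block, axes=([1], [0]))   # (n_orders, H', W')
        feats.append(coeffs.reshape(n_orders, -1).T)          # per-position vectors
    return np.concatenate(feats) if feats else np.empty((0, n_orders))

# Example: a random 64-frame video of 120x160 pixels
video = np.random.rand(64, 120, 160)
descriptors = temporal_features(video)
print(descriptors.shape)   # (num_positions * num_windows, n_orders)
```

The per-position coefficient vectors can then be aggregated (e.g. histogrammed) into a temporal signature used for indexing and retrieval; that aggregation step is not shown here.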