A novel framework for robust annotation and retrieval in video sequences

  • Authors:
  • Arasanathan Anjulan; Nishan Canagarajah

  • Affiliations:
  • Department of Electrical and Electronic Engineering, University of Bristol, Bristol, UK; Department of Electrical and Electronic Engineering, University of Bristol, Bristol, UK

  • Venue:
  • CIVR '06: Proceedings of the 5th International Conference on Image and Video Retrieval
  • Year:
  • 2006

Abstract

This paper describes a method for automatic video annotation and scene retrieval based on local region descriptors. A novel framework is proposed for combined video segmentation, content extraction and retrieval. A similarity measure based on local region features, previously proposed by the authors, is used for video segmentation. The local regions are tracked throughout a shot and stable features are extracted. The conventional key frame method is replaced with these stable local features to characterise different shots. Compared with previous video annotation approaches, the proposed method is highly robust to camera and object motion and can withstand severe illumination changes and spatial editing. We apply the proposed framework to shot cut detection and scene retrieval and demonstrate superior performance compared with existing methods. Furthermore, as segmentation and content extraction are performed in the same step, the overall computational complexity of the system is considerably reduced.
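
The sketch below illustrates the general idea of segmenting a video by comparing local features between consecutive frames and flagging a shot cut when the similarity measure collapses. It is a minimal illustration only: the SIFT detector, brute-force matcher, Lowe ratio test, and the threshold values are assumptions standing in for the paper's own local region descriptors and similarity measure, which the abstract does not specify in detail.

```python
# Illustrative sketch of local-feature-based shot cut detection.
# SIFT keypoints and the similarity/threshold choices below are assumptions,
# not the authors' exact region detector or similarity measure.
import cv2


def frame_features(frame, detector):
    """Detect local regions and compute descriptors for a single frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = detector.detectAndCompute(gray, None)
    return keypoints, descriptors


def frame_similarity(des_a, des_b, matcher, ratio=0.7):
    """Fraction of descriptors in frame A with a good match in frame B (Lowe ratio test)."""
    if des_a is None or des_b is None or len(des_a) == 0 or len(des_b) == 0:
        return 0.0
    good = 0
    for pair in matcher.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good / len(des_a)


def detect_shot_cuts(video_path, cut_threshold=0.2):
    """Return frame indices where inter-frame similarity drops below the threshold."""
    detector = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    capture = cv2.VideoCapture(video_path)
    cuts, prev_des, index = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        _, des = frame_features(frame, detector)
        if prev_des is not None and frame_similarity(prev_des, des, matcher) < cut_threshold:
            cuts.append(index)  # similarity collapsed: likely shot boundary
        prev_des = des
        index += 1
    capture.release()
    return cuts
```

In the framework described by the abstract, the features matched across frames within a shot would also be the ones retained as "stable" features for annotation and retrieval, so segmentation and content extraction share the same computation; the sketch above shows only the segmentation side of that idea.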