Object-Based Access to TV Rushes Video

  • Authors:
  • Alan F. Smeaton, Gareth J. F. Jones, Hyowon Lee, Noel E. O'Connor, Sorin Sav

  • Affiliations:
  • All authors: Centre for Digital Video Processing & Adaptive Information Cluster, Dublin City University, Glasnevin, Dublin 9, Ireland

  • Venue:
  • ECIR'06: Proceedings of the 28th European Conference on Advances in Information Retrieval
  • Year:
  • 2006


Abstract

Recent years have seen the development of different modalities for video retrieval. The most common of these are (1) using text from speech recognition or closed captions, (2) matching keyframes with image retrieval techniques based on features such as colour and texture [6], and (3) using semantic features such as "indoor", "outdoor" or "persons". Of these, text-based retrieval is the most mature and useful, while image-based retrieval using low-level features typically matches keyframes rather than whole shots. Automatic detection of video concepts is receiving much attention, and progress in this area will have a consequent impact on the quality of video retrieval. In practice it is the combination of these techniques that realises the most useful and effective video retrieval, as we have shown repeatedly in TRECVid [5].
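
To illustrate the kind of combination the abstract refers to, the sketch below shows a simple late-fusion scheme: per-shot scores from the text, keyframe and concept modalities are min-max normalised and merged with a weighted sum. This is a minimal, hypothetical Python example under assumed names and weights, not the authors' actual TRECVid implementation.

def fuse_scores(modality_scores, weights):
    """Combine per-shot relevance scores from several retrieval modalities.

    modality_scores: dict mapping modality name -> {shot_id: score}
    weights:         dict mapping modality name -> weight
    Scores are min-max normalised per modality before weighting,
    since raw score ranges differ between modalities.
    """
    fused = {}
    for modality, scores in modality_scores.items():
        if not scores:
            continue
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0
        w = weights.get(modality, 0.0)
        for shot_id, s in scores.items():
            fused[shot_id] = fused.get(shot_id, 0.0) + w * (s - lo) / span
    # Return shots ranked by fused score, highest first
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)


# Example usage with made-up scores for a handful of shots:
text_scores = {"shot_01": 2.3, "shot_02": 0.4, "shot_07": 1.8}    # ASR / closed-caption text retrieval
image_scores = {"shot_01": 0.6, "shot_05": 0.9, "shot_07": 0.7}   # colour/texture keyframe matching
concept_scores = {"shot_02": 0.8, "shot_07": 0.5}                 # e.g. "outdoor" detector confidence

ranking = fuse_scores(
    {"text": text_scores, "image": image_scores, "concept": concept_scores},
    weights={"text": 0.6, "image": 0.25, "concept": 0.15},
)
print(ranking)

The weights here are arbitrary; in practice they would be tuned per query or per collection, with text retrieval usually weighted most heavily, reflecting its maturity as noted in the abstract.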