Near-duplicate video retrieval: Current research and future trends

  • Authors and affiliations:
  • Jiajun Liu, The University of Queensland, Australia
  • Zi Huang, The University of Queensland, Australia
  • Hongyun Cai, The University of Queensland, Australia
  • Heng Tao Shen, The University of Queensland, Australia
  • Chong Wah Ngo, City University of Hong Kong, Hong Kong
  • Wei Wang, The University of New South Wales, Australia

  • Venue:
  • ACM Computing Surveys (CSUR)
  • Year:
  • 2013

Abstract

The exponential growth of online video, along with increasing user involvement in video-related activities, has been a constant phenomenon over the last decade. The time users spend capturing, editing, uploading, searching, and viewing videos has risen to an unprecedented level. The massive publishing and sharing of videos has produced a large amount of near-duplicate content. This creates an urgent demand for near-duplicate video retrieval, which plays a key role in tasks such as video search, video copyright protection, and video recommendation. Driven by its significance, near-duplicate video retrieval has recently attracted considerable attention. Recent works have brought steady progress to near-duplicate video retrieval and to related topics, including low-level feature extraction, signature generation, and high-dimensional indexing. In this survey of near-duplicate video retrieval, we comparatively examine existing variants of the definition of near-duplicate video, describe a generic retrieval framework, summarize state-of-the-art practices, and explore emerging trends in this research topic.
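The generic framework referenced in the abstract (low-level feature extraction, signature generation, and indexing/matching) can be pictured with a minimal sketch. The code below is not the survey's reference implementation; it assumes per-frame color histograms as the low-level feature, a mean-pooled global signature, and brute-force cosine similarity in place of a true high-dimensional index. All function names, parameters, and the synthetic frame data are illustrative only.

```python
# Minimal sketch of a generic near-duplicate video retrieval pipeline
# (illustrative assumption; not the survey's reference implementation).
import numpy as np

def frame_histogram(frame: np.ndarray, bins: int = 16) -> np.ndarray:
    """Low-level feature: a per-channel color histogram of one RGB frame."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(np.float64)
    return h / (h.sum() + 1e-12)          # normalize to a distribution

def video_signature(frames: list[np.ndarray]) -> np.ndarray:
    """Signature generation: mean-pool frame features into one global vector."""
    feats = np.stack([frame_histogram(f) for f in frames])
    sig = feats.mean(axis=0)
    return sig / (np.linalg.norm(sig) + 1e-12)

def query(db: dict[str, np.ndarray], sig: np.ndarray, top_k: int = 5):
    """Matching: brute-force cosine similarity (a stand-in for a real
    high-dimensional index)."""
    scores = {vid: float(s @ sig) for vid, s in db.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "videos": lists of random RGB frames standing in for decoded clips.
    db = {f"video_{i}": video_signature(
              [rng.integers(0, 256, (120, 160, 3)) for _ in range(8)])
          for i in range(3)}
    q = video_signature([rng.integers(0, 256, (120, 160, 3)) for _ in range(8)])
    print(query(db, q))
```

In practice, the brute-force matching step would be replaced by the high-dimensional indexing structures the survey discusses, and the global signature by whichever frame-level or video-level signature a given method defines.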