Towards fusion of collective knowledge and audio-visual content features for annotating broadcast video

  • Authors:
  • Fréderic Godin;Wesley De Neve;Rik Van de Walle

  • Affiliations:
  • Ghent University - iMinds, Ghent, Belgium;Ghent University - iMinds & KAIST, Ghent, Belgium;Ghent University - iMinds, Ghent, Belgium

  • Venue:
  • Proceedings of the 3rd ACM International Conference on Multimedia Retrieval (ICMR '13)
  • Year:
  • 2013


Abstract

Broadcasters produce vast collections of video content. However, the lack of fine-grained annotations makes it difficult to retrieve video fragments of interest from these collections. Indeed, manual annotation of video content is labour-intensive and time-consuming. Moreover, the applicability of algorithms for automatic annotation of video content is limited, given the many prerequisites that need to be fulfilled and the large number of concepts that cannot be reliably identified. At the same time, people are using social media to share their thoughts about the content they view on television. Therefore, in this Ph.D. research, we plan to investigate novel machine learning-based approaches towards the task of fine-grained annotation of broadcast video content, fusing the collective knowledge present in social media with the output of audio-visual content analysis algorithms.
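The abstract does not specify how the fusion would be performed. As an illustration only, one common scheme for combining heterogeneous evidence is late fusion, in which per-segment concept scores from a visual detector are merged with scores derived from time-aligned social-media posts via a weighted average. The sketch below is a hypothetical example of that scheme; the function name, concept labels, and weighting are assumptions, not details from the paper.

```python
# Hypothetical late-fusion sketch (an assumption, not the authors' method):
# merge two {concept: score} dicts for one video segment, weighting the
# visual-analysis side by alpha and the social-media side by (1 - alpha).

def fuse_scores(visual, social, alpha=0.6):
    """Weighted late fusion of visual and social concept scores."""
    concepts = set(visual) | set(social)  # union: keep concepts seen by either source
    return {
        c: alpha * visual.get(c, 0.0) + (1 - alpha) * social.get(c, 0.0)
        for c in concepts
    }

# Example scores for a single broadcast segment (made-up values).
visual_scores = {"goal": 0.7, "crowd": 0.9}     # from a hypothetical visual concept detector
social_scores = {"goal": 0.95, "penalty": 0.4}  # from posts aligned to the broadcast timeline

fused = fuse_scores(visual_scores, social_scores)
```

A concept supported by both sources ("goal") ends up with a higher fused score than one seen by only a single source, which is the intuition behind combining collective knowledge with content analysis.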