Video annotation using hierarchical Dirichlet process mixture model

  • Authors:
  • Roung-Shiunn Wu; Po-Chun Li

  • Affiliations:
  • Department of Information Management, National Chung Cheng University, Taiwan; Graduate Institute of Information Management, National Chung Cheng University, Taiwan

  • Venue:
  • Expert Systems with Applications: An International Journal
  • Year:
  • 2011

Abstract

Video annotation has become an important topic in supporting multimedia information retrieval. Video content analysis based on low-level features alone cannot bridge the gap between low-level features and high-level semantic concepts. In this study, we propose an approach that combines visual features extracted from the visual track of a video with keywords extracted from speech transcripts of the audio track. We construct a predictive model using a hierarchical Dirichlet process mixture model. In the hierarchical model, an additional layer is added to exploit the sharing of visual feature distributions among frames and to use the shared information to enhance model learning. At the top level, the visual features within groups are shared appropriately by imposing a prior correlation. At the bottom level, each visual feature and its associated annotation are modeled with mixture distributions. The learned predictive model allows us to compute a conditional likelihood over words, which is used to predict the most likely annotation words for a test sample. The model achieves higher annotation accuracy than a model without the hierarchy.
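To make the prediction scheme concrete, the following is a minimal sketch of the general idea rather than the paper's method: it substitutes a single-level, truncated Dirichlet process mixture (scikit-learn's BayesianGaussianMixture) for the full hierarchical DP, and the toy features, keyword vocabulary, and variable names are all assumptions for illustration. It clusters visual features, estimates per-component keyword distributions from the transcript words, and ranks annotation words for a new frame by the conditional likelihood p(word | features).

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins for the inputs described in the abstract (assumptions):
# X -- low-level visual features, one row per keyframe
# W -- binary keyword indicators from the speech transcript, one row per keyframe
vocab = ["goal", "crowd", "anchor", "weather"]
X = rng.normal(size=(200, 8))
W = rng.integers(0, 2, size=(200, len(vocab)))

# Truncated Dirichlet process mixture over visual features
# (a single-level stand-in for the paper's hierarchical DP mixture model).
dpmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    max_iter=500,
    random_state=0,
).fit(X)

# Soft component assignments (responsibilities) for the training keyframes.
resp = dpmm.predict_proba(X)            # shape: (n_frames, n_components)

# Per-component word distributions p(word | component), estimated by
# weighting each frame's keywords with its responsibilities, with smoothing.
word_given_comp = resp.T @ W + 1e-3
word_given_comp /= word_given_comp.sum(axis=1, keepdims=True)

def annotate(x_new, top_n=2):
    """Rank annotation words by the conditional likelihood p(word | x_new)."""
    comp_post = dpmm.predict_proba(x_new.reshape(1, -1))[0]
    word_scores = comp_post @ word_given_comp
    return [vocab[i] for i in np.argsort(word_scores)[::-1][:top_n]]

# Example: annotate an unseen keyframe's feature vector.
print(annotate(rng.normal(size=8)))
```

The hierarchical model in the paper goes further by sharing component (visual feature) distributions across frame groups through a top-level prior; the sketch above only captures the bottom-level step of coupling mixture components with annotation words and scoring candidate words by their conditional likelihood.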