Integrating multi-modal content analysis and hyperbolic visualization for large-scale news video retrieval and exploration

  • Authors:
  • H. Luo; J. Fan; S. Satoh; J. Yang; W. Ribarsky

  • Affiliations:
  • Software Engineering Institute, East China Normal University, Shanghai, China; Department of Computer Science, UNC-Charlotte, Charlotte, USA; National Institute of Informatics, Tokyo 101-8430, Japan; Department of Computer Science, UNC-Charlotte, Charlotte, USA; Department of Computer Science, UNC-Charlotte, Charlotte, USA

  • Venue:
  • Image Communication
  • Year:
  • 2008


Abstract

In this paper, we have developed a novel scheme for more effective analysis, retrieval, and exploration of large-scale news video collections by performing multi-modal video content analysis and synchronization. First, automatic keyword extraction is performed on news closed captions and audio channels to detect the most interesting news topics (i.e., keywords for news topic interpretation), and the associations among these topics (i.e., their contextual relationships) are determined from their co-occurrence probabilities. Second, visual semantic items, such as human faces, text captions, and video concepts, are extracted automatically using our semantic video analysis techniques, and the news topics are synchronized with the most relevant visual semantic items. In addition, an interestingness weight is assigned to each news topic to characterize its importance. Finally, a novel hyperbolic visualization scheme is incorporated to visualize large-scale news topics according to their associations and interestingness. With a better global overview of large-scale news video collections, users can specify their queries more precisely and explore the collections interactively. Our experiments on large-scale news video collections have yielded very positive results.
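The topic-association step described in the abstract — linking news topics by their co-occurrence probabilities and weighting each topic by its importance — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name is hypothetical, and using relative document frequency as the interestingness weight is an assumption made here for concreteness.

```python
from collections import Counter
from itertools import combinations

def topic_associations(documents):
    """Given a list of documents (each a list of extracted topic keywords,
    e.g., from closed captions), estimate per-topic interestingness weights
    and pairwise association strengths from co-occurrence probabilities.

    NOTE: illustrative sketch; the paper's exact weighting scheme may differ.
    """
    occur = Counter()      # documents in which each topic appears
    co_occur = Counter()   # documents in which each topic pair co-occurs
    for topics in documents:
        unique = set(topics)
        occur.update(unique)
        for pair in combinations(sorted(unique), 2):
            co_occur[pair] += 1

    n_docs = len(documents)
    # Interestingness: relative document frequency of each topic (assumed).
    interestingness = {t: occur[t] / n_docs for t in occur}
    # Association strength: empirical co-occurrence probability P(a, b).
    associations = {pair: c / n_docs for pair, c in co_occur.items()}
    return interestingness, associations
```

For example, over three news stories tagged `["election", "economy"]`, `["election", "economy", "sports"]`, and `["sports"]`, the pair `("economy", "election")` receives association strength 2/3, matching its co-occurrence in two of the three stories. The resulting weights and pairwise strengths would then drive node size and edge layout in a hyperbolic visualization of the topic graph.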