Multi-modal solution for unconstrained news story retrieval

  • Authors:
  • Ehsan Younessian; Deepu Rajan

  • Affiliations:
  • School of Computer Engineering, Nanyang Technological University, Singapore (both authors)

  • Venue:
  • MMM'12: Proceedings of the 18th International Conference on Advances in Multimedia Modeling
  • Year:
  • 2012

Abstract

We propose a multi-modal approach to retrieve associated news stories sharing the same main topic. In the textual domain, we utilize Automatic Speech Recognition (ASR) and refined Optical Character Recognition (OCR) transcripts, while in the visual domain we employ a Near Duplicate Keyframe detection method to identify stories with common visual cues. In addition, we adopt another visual representation, namely a semantic signature indicating the pre-defined semantic concepts present in the news story, to improve the discriminativeness of the visual modality. We propose a query-class weighting scheme to integrate the retrieval outcomes obtained from the visual modalities. Experimental results show the distinguishing power of the enhanced representations in the individual modalities and the superior performance of our fusion approach compared to existing strategies.
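
To make the fusion step concrete, the following is a minimal, illustrative sketch of query-class-dependent late fusion of per-modality similarity scores. The query-class names, modality weights, and score values here are hypothetical placeholders for exposition; the abstract does not specify the paper's actual weighting formulation.

```python
# Illustrative sketch only: generic query-class-dependent late fusion of
# per-modality similarity scores. Class names and weights are hypothetical.
from typing import Dict

# Hypothetical per-query-class weights for the modalities
# (textual transcripts, near-duplicate keyframes, semantic signature).
QUERY_CLASS_WEIGHTS: Dict[str, Dict[str, float]] = {
    "visual_rich": {"text": 0.2, "ndk": 0.5, "semantic": 0.3},
    "text_rich":   {"text": 0.6, "ndk": 0.2, "semantic": 0.2},
}

def fuse_scores(query_class: str, scores: Dict[str, float]) -> float:
    """Weighted sum of modality scores, with weights chosen by query class."""
    weights = QUERY_CLASS_WEIGHTS[query_class]
    return sum(weights[m] * scores.get(m, 0.0) for m in weights)

# Example: rank candidate stories for a query judged to be "visual_rich".
candidates = {
    "story_a": {"text": 0.40, "ndk": 0.80, "semantic": 0.55},
    "story_b": {"text": 0.70, "ndk": 0.10, "semantic": 0.30},
}
ranked = sorted(candidates,
                key=lambda s: fuse_scores("visual_rich", candidates[s]),
                reverse=True)
print(ranked)  # story_a ranks first: visual evidence dominates for this class
```

The design intent this sketch captures is that the relative trust placed in the textual and visual retrieval outcomes changes with the character of the query, rather than using one fixed set of fusion weights for all queries.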