Multimodal knowledge-based analysis in multimedia event detection

  • Authors:
  • Ehsan Younessian;Teruko Mitamura;Alexander Hauptmann

  • Affiliations:
  • Nanyang Technological University, Singapore;Carnegie Mellon University, Pittsburgh, PA;Carnegie Mellon University, Pittsburgh, PA

  • Venue:
  • Proceedings of the 2nd ACM International Conference on Multimedia Retrieval
  • Year:
  • 2012

Abstract

Multimedia Event Detection (MED) is a multimedia retrieval task whose goal is to find videos of a particular event in a large-scale Internet video archive, given example videos and text descriptions. We focus on multimodal knowledge-based analysis in MED, where we use meaningful semantic features such as Automatic Speech Recognition (ASR) transcripts, acoustic concept indexing (i.e., 42 acoustic concepts), and visual semantic indexing (i.e., 346 visual concepts) to characterize videos in the archive. We study two scenarios, in which we either do or do not use the provided example videos. In the former, we propose a novel Adaptive Semantic Similarity (ASS) measure to assess the textual similarity between the ASR transcripts of videos. We also incorporate acoustic concept indexing and classification to retrieve test videos, especially those with too few spoken words. In the latter 'ad-hoc' scenario, where no example videos are available, we use only the event kit description to retrieve test videos through their ASR transcripts and visual semantics. We also propose an event-specific fusion scheme to combine the textual and visual retrieval outputs. Our results show the effectiveness of the proposed ASS and acoustic concept indexing methods and their complementary roles. We also conduct a set of experiments to assess the proposed framework for the 'ad-hoc' scenario.
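The abstract does not spell out how the event-specific fusion scheme works. As a minimal sketch, assuming a score-level (late) fusion with a per-event mixing weight over normalized textual and visual retrieval scores, the combination might look like the following. All function names, event labels, weights, and scores here are illustrative placeholders, not values or code from the paper.

```python
import numpy as np

def late_fusion(text_scores, visual_scores, alpha=0.5):
    """Weighted score-level fusion of per-video retrieval scores.

    Each modality's scores are min-max normalized to [0, 1] before
    combining, so neither modality dominates due to scale differences.
    alpha weights the textual modality; (1 - alpha) weights the visual one.
    """
    def minmax(s):
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

    return alpha * minmax(text_scores) + (1 - alpha) * minmax(visual_scores)

# Hypothetical per-event weights: speech-heavy events might lean on
# ASR-transcript retrieval, while visually distinctive events might
# lean on visual semantic indexing.
event_alpha = {"birthday_party": 0.6, "changing_a_vehicle_tire": 0.3}

text_scores = [0.9, 0.1, 0.4]    # e.g., ASR-transcript retrieval scores
visual_scores = [0.2, 0.8, 0.5]  # e.g., visual concept retrieval scores
fused = late_fusion(text_scores, visual_scores,
                    alpha=event_alpha["birthday_party"])
print(fused)  # fused ranking scores for the three test videos
```

Choosing the weight per event, rather than globally, reflects the intuition stated in the abstract: the usefulness of speech versus visual evidence varies with the event type.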