Multimodal Video Indexing and Retrieval Using Directed Information

  • Authors:
  • Xu Chen; Alfred O. Hero, III; Silvio Savarese

  • Affiliations:
  • Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA (all authors)

  • Venue:
  • IEEE Transactions on Multimedia
  • Year:
  • 2012


Abstract

We propose a novel framework for multimodal video indexing and retrieval using shrinkage optimized directed information assessment (SODA) as the similarity measure. Directed information (DI) is a variant of classical mutual information that attempts to capture the direction of information flow that videos naturally possess. It is applied directly to the empirical probability distributions of audio and visual features over successive frames. We use RASTA-PLP features to represent audio and SIFT features to represent video. To fuse the two modalities, we compute the joint probability density functions of the audio and visual features. With SODA, we further estimate the DI between pairs of audio-visual modalities in a manner suited to high feature dimension $p$ and small sample size $n$ (large $p$, small $n$). We demonstrate the superiority of the SODA approach in video indexing, retrieval, and activity recognition over state-of-the-art methods such as hidden Markov models (HMM), support vector machines (SVM), cross-media indexing space (CMIS), and noncausal divergence measures such as mutual information (MI). We also demonstrate the success of SODA in audio and video localization and in indexing/retrieval of data with misaligned modalities.
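To give a feel for the directional quantity the abstract describes, the sketch below computes a first-order plug-in estimate of information flow between two discrete sequences, namely I(X_{t-1}; Y_t | Y_{t-1}), a common one-step surrogate for the directed information rate. This is only an illustration of the idea: the function name, the discrete toy sequences, and the first-order truncation are assumptions of this sketch, and it omits the shrinkage regularization that SODA applies to stabilize the empirical distributions in the large-$p$, small-$n$ regime.

```python
import random
from collections import Counter
from math import log2

def transfer_info(x, y):
    """Plug-in estimate of I(X_{t-1}; Y_t | Y_{t-1}) in bits: a first-order
    surrogate for the directed information rate from sequence x to sequence y.
    (Illustrative helper, not the paper's SODA estimator.)"""
    n = len(x) - 1
    triple, x_pair, y_pair, cond = Counter(), Counter(), Counter(), Counter()
    for t in range(1, len(x)):
        triple[(y[t - 1], x[t - 1], y[t])] += 1  # joint (y_prev, x_prev, y_now)
        x_pair[(y[t - 1], x[t - 1])] += 1        # marginal (y_prev, x_prev)
        y_pair[(y[t - 1], y[t])] += 1            # marginal (y_prev, y_now)
        cond[y[t - 1]] += 1                      # conditioning variable y_prev
    di = 0.0
    for (yp, xp, yt), c in triple.items():
        # conditional mutual information term:
        # p(yp,xp,yt) * log[ p(yt | yp, xp) / p(yt | yp) ]
        di += (c / n) * log2(c * cond[yp] / (x_pair[(yp, xp)] * y_pair[(yp, yt)]))
    return di

# Toy example: y is a one-step-delayed copy of x, so information flows x -> y.
rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(500)]
y = [0] + x[:-1]
print(transfer_info(x, y) > transfer_info(y, x))  # the forward direction dominates
```

Because the estimate is asymmetric, comparing `transfer_info(x, y)` against `transfer_info(y, x)` reveals which modality drives the other, which is the property that makes DI a sensible similarity measure for causally structured audio-visual streams.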