Realistic Human Action Recognition with Audio Context

  • Authors:
  • Qiuxia Wu, Zhiyong Wang, Feiqi Deng, David Dagan Feng


  • Venue:
  • DICTA '10 Proceedings of the 2010 International Conference on Digital Image Computing: Techniques and Applications
  • Year:
  • 2010

Abstract

Recognizing human actions in realistic scenes has emerged as a challenging topic due to various aspects such as dynamic backgrounds. In this paper, we present a novel approach that takes audio context into account for better action recognition performance, since audio can provide strong evidence for certain actions (e.g., phone-ringing for answer-phone). First, classifiers are established for the visual and audio modalities, respectively. Specifically, a bag-of-visual-words model is employed to represent human actions in the visual modality, a number of audio features are extracted for the audio modality, and Support Vector Machines (SVMs) are employed for classification. Then, a decision fusion scheme is utilized to fuse the classification results from the two modalities. Since audio context is not always helpful, two simple yet effective decision rules are developed for selective fusion. Experimental results on the Hollywood Human Actions (HOHA) dataset demonstrate that the proposed approach achieves better recognition performance than integrating scene context. Therefore, our work provides strong motivation to further explore how audio context influences realistic human action recognition.
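The selective decision fusion described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the function names, the fusion weight `w_audio`, and the confidence threshold used as the selective rule are all assumptions; the abstract does not specify the two decision rules in detail.

```python
import numpy as np

def selective_fusion(p_visual, p_audio, w_audio=0.5, conf_threshold=0.6):
    """Fuse per-class probability scores from two modality classifiers.

    p_visual, p_audio: 1-D arrays of class probabilities (e.g., from
    per-modality SVMs with probability outputs).
    Hypothetical selective rule: if the audio classifier's top score is
    below conf_threshold, audio context is deemed unhelpful and ignored.
    """
    p_visual = np.asarray(p_visual, dtype=float)
    p_audio = np.asarray(p_audio, dtype=float)
    if p_audio.max() < conf_threshold:
        return p_visual                        # audio too uncertain: visual only
    fused = (1.0 - w_audio) * p_visual + w_audio * p_audio
    return fused / fused.sum()                 # renormalize to a distribution

# Example: audio strongly indicates class 1 (say, "answer-phone"),
# tipping the fused decision toward it.
p_vis = [0.45, 0.35, 0.20]
p_aud = [0.10, 0.80, 0.10]
print(np.argmax(selective_fusion(p_vis, p_aud)))  # prints 1
```

A weighted late-fusion scheme like this keeps the two modalities decoupled, so the audio branch can be dropped per-sample whenever its evidence is weak, matching the paper's observation that audio context is not always helpful.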