Integrating Audio Visual Data for Human Action Detection

  • Authors:
  • Lili Nurliyana Abdullah; Shahrul Azman Mohd Noah

  • Affiliations:
  • -;-

  • Venue:
  • CGIV '08 Proceedings of the 2008 Fifth International Conference on Computer Graphics, Imaging and Visualisation
  • Year:
  • 2008


Abstract

This paper presents a method that integrates audio and visual information for action scene analysis in movies. The approach is top-down: action scenes are determined and extracted from video by analyzing both the audio and the video data. We directly model the hierarchy and shared structure of human behaviours, and we present a Hidden Markov model-based framework for the problem of activity recognition. The proposed framework recognizes actions by measuring human action-based information from video, and has the following characteristics: it handles both visual and auditory information; it captures both spatial and temporal characteristics; and the extracted features are natural, in the sense that they are closely related to human perceptual processing. Action identification is implemented by extracting syntactic properties of a video, such as edge features, colour distribution, audio, and motion vectors. We present a two-layer hierarchical module for action recognition. The first layer performs supervised learning to recognize the individual actions of participants using low-level visual features. The second layer models actions, using the output of the first layer as observations, and fuses them with high-level audio features. The two layers use Hidden Markov model-based approaches for action recognition and for clustering, respectively. The proposed technique characterizes scenes by integrating cues obtained from both the video and audio tracks. Using joint audio and visual information can significantly improve the accuracy of action detection over using audio or visual information alone, because multimodal features can resolve ambiguities that are present in a single modality. In addition, we model the features in multidimensional form.
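The two-layer idea described in the abstract can be sketched with discrete HMMs: a first Viterbi decode maps quantized low-level visual symbols to action labels, and a second HMM takes joint (action, audio) symbols as observations to label scenes. This is a minimal toy illustration, not the paper's implementation: all transition/emission matrices, the quantization of features into symbols, and the joint-symbol encoding are invented here for demonstration.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete HMM (log domain)."""
    T, n = len(obs), len(start_p)
    logp = np.empty((T, n))
    back = np.zeros((T, n), dtype=int)
    with np.errstate(divide="ignore"):            # allow log(0) -> -inf
        ls, lt, le = np.log(start_p), np.log(trans_p), np.log(emit_p)
    logp[0] = ls + le[:, obs[0]]
    for t in range(1, T):
        for s in range(n):
            scores = logp[t - 1] + lt[:, s]
            back[t, s] = int(np.argmax(scores))
            logp[t, s] = scores[back[t, s]] + le[s, obs[t]]
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# ---- Layer 1: visual HMM (toy parameters, not from the paper) ----
# Observations are quantized low-level visual symbols
# (e.g. binned motion-vector magnitude or edge density).
vis_start = np.array([0.5, 0.5])
vis_trans = np.array([[0.6, 0.4], [0.4, 0.6]])
vis_emit  = np.array([[0.9, 0.1], [0.1, 0.9]])    # rows: action states
visual_symbols = [0, 0, 1, 1]
actions = viterbi(visual_symbols, vis_start, vis_trans, vis_emit)

# ---- Layer 2: fuse layer-1 actions with quantized audio labels ----
# One joint observation symbol per frame encodes the (action, audio) pair.
n_audio = 2
audio_symbols = [0, 0, 1, 1]                      # e.g. quiet vs. loud
joint_obs = [a * n_audio + au for a, au in zip(actions, audio_symbols)]

# Scene-level HMM over the 4 joint symbols (again, toy parameters).
sc_start = np.array([0.5, 0.5])
sc_trans = np.array([[0.7, 0.3], [0.3, 0.7]])
sc_emit  = np.array([[0.4, 0.4, 0.1, 0.1],        # state 0: non-action scene
                     [0.1, 0.1, 0.4, 0.4]])       # state 1: action scene
scenes = viterbi(joint_obs, sc_start, sc_trans, sc_emit)
print("actions:", actions, "scenes:", scenes)
```

The joint-symbol encoding is the simplest form of fusion; it mirrors the abstract's point that the second layer treats the first layer's output, together with audio cues, as its observation sequence.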