A Neural Network Based Framework for Audio Scene Analysis in Audio Sensor Networks

  • Authors:
  • Qi Li; Huadong Ma; Dong Zhao

  • Affiliations:
  • Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, Beijing, China 100876 (all authors)

  • Venue:
  • PCM '09 Proceedings of the 10th Pacific Rim Conference on Multimedia: Advances in Multimedia Information Processing
  • Year:
  • 2009

Abstract

In recent years, audio sensor networks have attracted much attention. One of their most important applications is audio scene analysis. In this paper, we present a neural network based framework for analyzing audio scenes in audio sensor networks. In the proposed framework, basic audio events are modeled and detected by Hidden Markov Models (HMMs) on the audio sensor nodes. The cluster head collects the sensory information from its cluster, and a neural network based approach is then used to discover the high-level semantic content of the audio context. With this approach, human knowledge and machine learning are effectively combined in the semantic inference: the model parameters are learned by statistical learning and then modified manually based on prior knowledge. We deploy the proposed framework on an audio sensor network and conduct a series of experiments to evaluate its performance. The experimental results show that our method performs well in complex real-world situations.
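The abstract describes a two-tier pipeline: per-node basic event detection (via HMMs), aggregation at the cluster head, and a neural network that infers the high-level scene, with learned weights manually adjusted using prior knowledge. The following is a minimal illustrative sketch of that pipeline structure only; the event names, scene labels, network shape, and weights are all hypothetical assumptions, not the authors' actual models.

```python
import numpy as np

# Assumed basic events and scene labels -- purely illustrative.
EVENTS = ["speech", "footsteps", "door", "alarm"]
SCENES = ["meeting", "corridor", "emergency"]

def node_detect(posteriors):
    """Each sensor node reports its most likely basic event.
    (Stand-in for the paper's per-node HMM detectors.)"""
    return int(np.argmax(posteriors))

def cluster_aggregate(event_ids, n_events=len(EVENTS)):
    """Cluster head builds a normalized event histogram from node reports."""
    hist = np.bincount(event_ids, minlength=n_events).astype(float)
    return hist / max(hist.sum(), 1.0)

def nn_infer(x, W1, b1, W2, b2):
    """Tiny feed-forward network: event histogram -> scene index."""
    h = np.tanh(x @ W1 + b1)
    return int(np.argmax(h @ W2 + b2))

# Randomly initialized weights stand in for statistically learned ones.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

# Mirroring the "learn, then modify using prior knowledge" idea:
# manually strengthen the influence of the 'alarm' event (hypothetical tweak).
W1[3, :] += 0.5

# Simulated node reports -> cluster-head aggregation -> scene inference.
reports = [node_detect(rng.random(4)) for _ in range(5)]
context = cluster_aggregate(np.array(reports))
scene = nn_infer(context, W1, b1, W2, b2)
print(SCENES[scene])
```

The sketch only conveys the division of labor (detection on nodes, inference at the cluster head); the paper's actual HMM features, network topology, and training procedure are not specified in the abstract.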