Acoustic fall detection using Gaussian mixture models and GMM supervectors

  • Authors:
  • Xiaodan Zhuang, Jing Huang, Gerasimos Potamianos, Mark Hasegawa-Johnson

  • Affiliations:
  • Dept. of ECE, University of Illinois at Urbana-Champaign, USA (Zhuang, Hasegawa-Johnson); IBM T.J. Watson Research Center, Yorktown Heights, New York, USA (Huang, Potamianos)

  • Venue:
  • ICASSP '09: Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
  • Year:
  • 2009

Abstract

We present a system that detects human falls in the home environment, distinguishing them from competing noise, using only the audio signal from a single far-field microphone. The proposed system models each fall or noise segment by a Gaussian mixture model (GMM) supervector, so that the Euclidean distance between supervectors measures the pairwise difference between audio segments. A support vector machine built on a kernel between GMM supervectors is employed to classify audio segments into falls and various types of noise. Experiments on a dataset of human falls, collected as part of the Netcarity project, show that the method improves the fall classification F-score to 67%, from 59% for a baseline GMM classifier. The approach also effectively addresses the more difficult fall detection problem, where audio segment boundaries are unknown: we employ it to reclassify confusable segments produced by a dynamic programming segmentation scheme based on traditional GMMs, and this post-processing improves a fall detection accuracy metric by 5% relative.
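The pipeline described in the abstract (a universal background GMM, mean-only MAP adaptation of each segment into a supervector, and an SVM over those supervectors) can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: it assumes scikit-learn, frame-level acoustic features (e.g. MFCCs) computed elsewhere, and hypothetical helper names (train_ubm, supervector).

    # Illustrative sketch of GMM-supervector extraction and SVM classification.
    # Assumes scikit-learn; acoustic feature extraction is outside this snippet.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def train_ubm(background_frames, n_components=32, seed=0):
        """Fit a universal background GMM on pooled acoustic frames (T x D)."""
        ubm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=seed)
        ubm.fit(background_frames)
        return ubm

    def supervector(ubm, segment_frames, relevance=16.0):
        """Mean-only relevance-MAP adaptation of the UBM, stacked into one vector."""
        post = ubm.predict_proba(segment_frames)        # (T, K) responsibilities
        n_k = post.sum(axis=0)                          # soft counts per component
        f_k = post.T @ segment_frames                   # first-order statistics (K, D)
        alpha = (n_k / (n_k + relevance))[:, None]
        adapted = (alpha * (f_k / np.maximum(n_k[:, None], 1e-8))
                   + (1.0 - alpha) * ubm.means_)
        # Scale by weights and standard deviations so that Euclidean distance
        # between supervectors reflects component-wise divergence between models.
        scale = np.sqrt(ubm.weights_)[:, None] / np.sqrt(ubm.covariances_)
        return (scale * adapted).ravel()

    # Usage (shapes illustrative): y holds labels, 1 = fall, 0 = noise.
    # ubm = train_ubm(np.vstack(all_training_frames))
    # X_sv = np.vstack([supervector(ubm, seg) for seg in training_segments])
    # clf = SVC(kernel="linear").fit(X_sv, y)
    # pred = clf.predict(supervector(ubm, test_segment)[None, :])

A linear kernel on these scaled supervectors corresponds to the Euclidean geometry mentioned in the abstract; the exact kernel used in the paper (and the reclassification of confusable segments from the dynamic programming stage) is not specified here and would follow the same supervector-plus-SVM pattern.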