An integrated framework for human activity recognition

  • Authors:
  • Hong Cao; Minh Nhut Nguyen; Clifton Phua; Shonali Krishnaswamy; Xiao-Li Li

  • Affiliations:
  • Institute for Infocomm Research, A*STAR (all authors); Monash University (Shonali Krishnaswamy)

  • Venue:
  • Proceedings of the 2012 ACM Conference on Ubiquitous Computing
  • Year:
  • 2012

Abstract

This poster presents an integrated framework that enables standard non-sequential machine learning tools to perform accurate multi-modal activity recognition. The framework combines simple pre- and post-classification strategies: before classification, it corrects class imbalance in the training data using structure-preserving oversampling; after classification, it exploits the sequential nature of sensory data by smoothing the predicted label sequence and by classifier fusion. Through evaluation on the recent, publicly available OPPORTUNITY activity datasets, which comprise a large amount of multi-dimensional, continuous-valued sensory data, we show that the proposed strategies improve performance over common techniques such as One Nearest Neighbor (1NN) and Support Vector Machines (SVM). Our framework also outperforms sequential probabilistic models such as Conditional Random Fields (CRF) and Hidden Markov Models (HMM), both when they are used directly and when they are used as meta-learners.
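The abstract does not specify how the post-classification smoothing is implemented; a common choice for exploiting the temporal continuity of activities is a sliding-window majority vote over the predicted label sequence. The sketch below illustrates that idea (the function name, window size, and labels are illustrative assumptions, not the authors' code):

```python
from collections import Counter

def smooth_labels(pred, window=5):
    """Sliding-window majority-vote smoothing of a predicted label sequence.

    Replaces each prediction with the most frequent label inside a centred
    window, removing isolated misclassifications that contradict the
    temporal continuity of human activities. Illustrative sketch only.
    """
    half = window // 2
    smoothed = []
    for i in range(len(pred)):
        # Clip the window at the sequence boundaries.
        lo, hi = max(0, i - half), min(len(pred), i + half + 1)
        smoothed.append(Counter(pred[lo:hi]).most_common(1)[0][0])
    return smoothed

# An isolated "sit" spike inside a "walk" segment is voted away.
preds = ["walk", "walk", "sit", "walk", "walk", "walk", "sit", "sit", "sit", "sit"]
print(smooth_labels(preds, window=5))
```

Because each position is recomputed from the original predictions (not the partially smoothed ones), the result does not depend on traversal order; larger windows suppress longer spurious runs at the cost of blurring genuine activity transitions.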