Aggregating low-level features for human action recognition

  • Authors:
  • Kyle Parrigan; Richard Souvenir

  • Affiliations:
  • Department of Computer Science, University of North Carolina at Charlotte; Department of Computer Science, University of North Carolina at Charlotte

  • Venue:
  • ISVC '10: Proceedings of the 6th International Conference on Advances in Visual Computing - Volume Part I
  • Year:
  • 2010

Abstract

Recent methods for human action recognition have achieved strong results by using increasingly complex, computationally intensive models and algorithms. At the same time, there is growing interest in automated video analysis techniques that can be deployed on resource-constrained distributed smart camera networks. In this paper, we introduce a multi-stage method for recognizing human actions (e.g., kicking, sitting, waving) that uses the motion patterns of easy-to-compute, low-level image features. Our method is designed for use on resource-constrained devices and can be optimized for real-time performance. In single-view and multi-view experiments, our method achieves 78% and 84% accuracy, respectively, on a publicly available data set.
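To make the general idea concrete, the sketch below illustrates one way to aggregate easy-to-compute, low-level motion features into a per-clip descriptor and classify it. This is not the authors' pipeline; the frame-difference feature, the 4x4 pooling grid, and the nearest-neighbor classifier are assumptions chosen only for illustration.

    # Minimal sketch, assuming grayscale clips as (T, H, W) arrays.
    # Not the paper's method: feature, grid size, and classifier are illustrative.
    import numpy as np

    def clip_descriptor(frames, grid=(4, 4)):
        """Pool per-pixel motion energy (absolute frame differences)
        over a coarse spatial grid, then average over time."""
        frames = np.asarray(frames, dtype=np.float32)   # (T, H, W)
        motion = np.abs(np.diff(frames, axis=0))        # (T-1, H, W)
        t, h, w = motion.shape
        gh, gw = grid
        cells = motion[:, :h - h % gh, :w - w % gw]
        cells = cells.reshape(t, gh, h // gh, gw, w // gw)
        pooled = cells.mean(axis=(2, 4))                # (T-1, gh, gw)
        return pooled.mean(axis=0).ravel()              # fixed-length vector

    def nearest_neighbor_predict(train_X, train_y, query):
        """Classify a clip descriptor with a 1-nearest-neighbor rule."""
        dists = np.linalg.norm(train_X - query, axis=1)
        return train_y[np.argmin(dists)]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy data: 10 training clips of 16 frames at 32x32, two action labels.
        train_clips = rng.random((10, 16, 32, 32))
        train_y = np.array([0, 1] * 5)
        train_X = np.stack([clip_descriptor(c) for c in train_clips])
        query = clip_descriptor(rng.random((16, 32, 32)))
        print("predicted action label:", nearest_neighbor_predict(train_X, train_y, query))

Descriptors like this are cheap to compute per frame and produce small fixed-length vectors, which is why low-level feature aggregation is attractive for resource-constrained smart camera deployments.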