Encoding Actions via Quantized Vocabulary of Averaged Silhouettes

  • Authors:
  • Liang Wang; Christopher Leckie

  • Venue:
  • ICPR '10 Proceedings of the 2010 20th International Conference on Pattern Recognition
  • Year:
  • 2010

Abstract

Human action recognition from video clips has received increasing attention in recent years. This paper proposes a simple yet effective method for action recognition. The method encodes human actions using a quantized vocabulary of averaged silhouettes, which are derived from space-time windowed shapes and implicitly capture local temporal motion as well as global body shape. Experimental results on the publicly available Weizmann dataset demonstrate that, despite its simplicity, our method is effective for recognizing actions and is comparable to other state-of-the-art methods.
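
The abstract describes the pipeline only at a high level. The Python sketch below is a rough, hedged illustration of the general idea: average binary silhouette frames over sliding space-time windows, quantize the averaged silhouettes into a codeword vocabulary with k-means, and describe a clip as a histogram of codeword occurrences. The window size, codebook size, use of k-means, and the histogram representation are assumptions made for illustration, not details confirmed by the paper, and all function names are hypothetical.

```python
# Minimal sketch of an averaged-silhouette / quantized-vocabulary pipeline.
# Window size, codebook size, k-means quantization, and the histogram
# descriptor are assumptions for illustration, not the paper's exact method.
import numpy as np
from sklearn.cluster import KMeans


def averaged_silhouettes(frames, window=10, step=5):
    """Average binary silhouettes (T x H x W) over sliding temporal windows.

    Returns an array of shape (num_windows, H * W); each row is a flattened
    averaged silhouette that blends local motion within the window with the
    global body shape.
    """
    T = frames.shape[0]
    feats = []
    for start in range(0, T - window + 1, step):
        avg = frames[start:start + window].mean(axis=0)
        feats.append(avg.reshape(-1))
    return np.array(feats)


def build_vocabulary(all_feats, k=64, seed=0):
    """Quantize averaged silhouettes into a k-word codebook via k-means."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(all_feats)


def encode_clip(frames, vocab, window=10, step=5):
    """Represent a clip as a normalized histogram of codeword occurrences."""
    feats = averaged_silhouettes(frames, window, step)
    words = vocab.predict(feats)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for extracted binary silhouette sequences.
    training_clips = [rng.integers(0, 2, size=(40, 32, 24)) for _ in range(5)]
    train_feats = np.vstack([averaged_silhouettes(c) for c in training_clips])
    vocab = build_vocabulary(train_feats, k=16)

    test_clip = rng.integers(0, 2, size=(40, 32, 24))
    print(encode_clip(test_clip, vocab))  # 16-bin action descriptor
```

In a real setting, the resulting histogram descriptors would be compared with a simple classifier (for example, nearest neighbor) to assign an action label; that choice is likewise an assumption here, since the classification step is not specified in this excerpt.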