Combining skeletal pose with local motion for human activity recognition

  • Authors:
  • Ran Xu, Priyanshu Agarwal, Suren Kumar, Venkat N. Krovi, and Jason J. Corso

  • Affiliations:
  • Ran Xu and Jason J. Corso: Computer Science and Engineering, State University of New York at Buffalo, NY
  • Priyanshu Agarwal, Suren Kumar, and Venkat N. Krovi: Mechanical and Aerospace Engineering, State University of New York at Buffalo, NY

  • Venue:
  • AMDO'12 Proceedings of the 7th international conference on Articulated Motion and Deformable Objects
  • Year:
  • 2012

Abstract

Recent work in human activity recognition has focused on bottom-up approaches that rely on spatiotemporal features, both dense and sparse. In contrast, articulated motion, which naturally incorporates explicit human action information, has not been heavily studied; a fact likely due to the inherent challenge in modeling and inferring articulated human motion from video. However, recent developments in data-driven human pose estimation have made such modeling plausible. In this paper, we extend these developments with a new middle-level representation called dynamic pose that couples the local motion information directly and independently with human skeletal pose, and present an appropriate distance function on the dynamic poses. We demonstrate the representational power of dynamic pose over raw skeletal pose in an activity recognition setting, using simple codebook matching and support vector machines as the classifier. Our results conclusively demonstrate that dynamic pose is a more powerful representation of human action than skeletal pose.
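The abstract does not spell out the dynamic-pose construction, but the core idea, coupling skeletal pose with local motion, can be illustrated with a minimal sketch. Here we assume a pose is a flattened vector of joint coordinates, local motion is approximated by the frame-to-frame joint displacement, a weight `alpha` balances the two parts, and the distance function is plain Euclidean distance over the combined vector; the function names and `alpha` are illustrative choices, not the paper's actual formulation.

```python
import numpy as np

def dynamic_pose(pose_t, pose_prev, alpha=0.5):
    # Hypothetical formulation: couple the current skeletal pose with
    # its local motion, here approximated as the frame-to-frame joint
    # displacement, weighted by alpha.
    motion = pose_t - pose_prev
    return np.concatenate([pose_t, alpha * motion])

def dynamic_pose_distance(a, b):
    # Assumed stand-in for the paper's distance on dynamic poses:
    # Euclidean distance over the combined pose+motion vector.
    return float(np.linalg.norm(a - b))

# Toy example: 3 joints in 2-D, flattened to length-6 pose vectors.
still  = dynamic_pose(np.ones(6), np.ones(6))    # same pose, no motion
moving = dynamic_pose(np.ones(6), np.zeros(6))   # same pose, with motion
```

The toy example shows the point the abstract makes: `still` and `moving` share an identical skeletal pose, so a raw-pose distance between them is zero, yet their dynamic-pose distance is positive because the motion components differ. In a full pipeline, such vectors would be quantized against a learned codebook and the resulting histograms fed to an SVM, as the abstract describes.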