Viewpoint manifolds for action recognition

  • Authors:
  • Richard Souvenir; Kyle Parrigan

  • Affiliations:
  • Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC (both authors)

  • Venue:
  • Journal on Image and Video Processing - Special issue on video-based modeling, analysis, and recognition of human motion
  • Year:
  • 2009

Abstract

Action recognition from video is a problem with many important applications in human motion analysis. In real-world settings, the viewpoint of the camera cannot always be fixed relative to the subject, so view-invariant action recognition methods are needed. Previous view-invariant methods either use multiple cameras in both the training and testing phases of action recognition or require storing many examples of a single action from multiple viewpoints. In this paper, we present a framework for learning a compact representation of primitive actions (e.g., walk, punch, kick, sit) that can be applied to video from a single camera for simultaneous action recognition and viewpoint estimation. Using our method, which models the low-dimensional structure of these actions with respect to viewpoint, we achieve recognition rates on a publicly available dataset that were previously attained only with multiple simultaneous views.
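
To make the general idea concrete, below is a minimal sketch (not the authors' implementation) of the approach the abstract describes: a low-dimensional viewpoint manifold is learned for each primitive action, and a single-view query descriptor is matched against every manifold to jointly recognize the action and estimate the viewpoint. The descriptor function, the choice of Isomap as the manifold learner, the synthetic data, and all parameter values are illustrative assumptions.

```python
# Hedged sketch of per-action viewpoint manifolds for joint action recognition
# and viewpoint estimation from a single view. All names and data are
# hypothetical; the real method would use descriptors computed from video.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
ACTIONS = ["walk", "punch", "kick", "sit"]
VIEWPOINTS = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)  # camera azimuths (rad)

def make_descriptors(action_id, viewpoints, dim=32):
    """Stand-in for real per-frame action descriptors (e.g., silhouette-based
    features); each action traces a noisy 1-D curve in descriptor space as the
    viewpoint varies. The per-action basis is fixed so train/test samples of
    the same action lie on the same underlying curve."""
    t = np.asarray(viewpoints, dtype=float)[:, None]
    basis = np.random.default_rng(action_id).standard_normal((3, dim))
    X = np.hstack([np.cos(t), np.sin(t), 0.3 * t]) @ basis
    return X + 0.05 * rng.standard_normal(X.shape)

# Training: learn one low-dimensional viewpoint manifold per primitive action.
models = {}
for a_id, name in enumerate(ACTIONS):
    X = make_descriptors(a_id, VIEWPOINTS)
    iso = Isomap(n_neighbors=8, n_components=1).fit(X)   # 1-D manifold ~ viewpoint
    nn = NearestNeighbors(n_neighbors=1).fit(X)
    models[name] = {"iso": iso, "nn": nn, "emb": iso.embedding_, "views": VIEWPOINTS}

def recognize(query):
    """Return (action label, viewpoint estimate) for a single-view descriptor."""
    # Action: the manifold whose training samples lie closest to the query.
    scores = {name: m["nn"].kneighbors(query[None, :])[0][0, 0]
              for name, m in models.items()}
    label = min(scores, key=scores.get)
    # Viewpoint: embed the query on the winning manifold and read off the
    # viewpoint of the nearest embedded training sample.
    m = models[label]
    q_emb = m["iso"].transform(query[None, :])
    idx = np.argmin(np.abs(m["emb"][:, 0] - q_emb[0, 0]))
    return label, m["views"][idx]

query = make_descriptors(ACTIONS.index("kick"), [1.3])[0]
print(recognize(query))   # expected: ('kick', viewpoint near 1.3 rad)
```

In this sketch the compact per-action model is simply the Isomap embedding of descriptors sampled across viewpoints; recognition reduces to a nearest-manifold test and viewpoint estimation to reading off the manifold coordinate, which mirrors the "simultaneous recognition and viewpoint estimation" framing of the abstract without reproducing the paper's specific descriptors or learning procedure.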