Tracking human motion and actions for interactive robots

  • Authors:
  • Odest Chadwicke Jenkins — Brown University, Providence, RI
  • German González — Computer Vision Lab., EPFL, Lausanne, Switzerland
  • Matthew Maverick Loper — Brown University, Providence, RI

  • Venue:
  • Proceedings of the ACM/IEEE international conference on Human-robot interaction
  • Year:
  • 2007


Abstract

A method is presented for kinematic pose estimation and action recognition from monocular robot vision through the use of dynamical human motion vocabularies. We propose using dynamical motion vocabularies to bridge the decision making of observed humans with information from robot sensing. Our motion vocabulary comprises learned primitives that structure the action space for decision making and describe human movement dynamics. Given image observations over time, each primitive infers pose independently, using its prediction density over movement dynamics within a particle filter. Pose estimates from a set of primitives inferring in parallel are arbitrated to estimate the action being performed. The efficacy of our approach is demonstrated through tracking and action recognition over extended motion trials. Results demonstrate the robustness of the algorithm to unsegmented multi-action movement, movement speed, and camera viewpoint.
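The abstract's core mechanism — each motion primitive running its own particle filter over pose, with the primitives' aggregate likelihoods arbitrated to recognize the action — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the primitive names, the linear drift dynamics, and the Gaussian observation likelihood are all assumptions standing in for the learned movement dynamics and vision-based likelihood the paper uses.

```python
import numpy as np

rng = np.random.default_rng(0)

class PrimitiveFilter:
    """Particle filter for one motion primitive.

    The drift vector is a stand-in for a learned dynamics model; the real
    system would use each primitive's prediction density over pose space.
    """
    def __init__(self, name, drift, n_particles=200, dim=2):
        self.name = name
        self.drift = np.asarray(drift, dtype=float)
        self.particles = rng.normal(0.0, 1.0, size=(n_particles, dim))

    def step(self, observation, noise=0.1):
        # Predict: propagate particles with this primitive's dynamics.
        self.particles += self.drift + rng.normal(0.0, noise, self.particles.shape)
        # Update: weight by an (assumed Gaussian) observation likelihood.
        d2 = np.sum((self.particles - observation) ** 2, axis=1)
        lik = np.exp(-0.5 * d2 / noise ** 2)
        total = lik.sum()
        if total > 0:
            weights = lik / total
        else:
            weights = np.full(len(lik), 1.0 / len(lik))
        # Resample proportionally to the weights.
        idx = rng.choice(len(self.particles), size=len(self.particles), p=weights)
        self.particles = self.particles[idx]
        # The aggregate likelihood is later used for action arbitration.
        return total

    def pose_estimate(self):
        # Mean of the particle set as a point estimate of pose.
        return self.particles.mean(axis=0)

def arbitrate(filters, observation):
    """Run every primitive's filter in parallel on the same observation and
    take the primitive with the highest aggregate likelihood as the action."""
    scores = {f.name: f.step(observation) for f in filters}
    return max(scores, key=scores.get)
```

As a usage sketch, one would instantiate a filter per vocabulary primitive (e.g. hypothetical "wave" and "punch" primitives), feed each incoming image-derived observation to `arbitrate`, and read pose from the winning primitive's `pose_estimate()`.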