Recognizing multiple human activities and tracking full-body pose in unconstrained environments

  • Authors:
  • Loren Arthur Schwarz, Diana Mateus, Nassir Navab

  • Affiliations:
  • Computer Aided Medical Procedures (CAMP), Department of Informatics, Technische Universität München (TUM), Boltzmannstr. 3, 85748 Garching, Germany

  • Venue:
  • Pattern Recognition
  • Year:
  • 2012


Abstract

Visual observations, such as camera images, are hard to obtain for long-term human motion analysis in unconstrained environments. In this paper, we present a method for human full-body pose tracking and activity recognition from measurements of a few body-worn inertial orientation sensors. The sensors make our approach insensitive to illumination and occlusions and permit a person to move freely. Since the data provided by inertial sensors is sparse, noisy, and often ambiguous, we use a generative prior model of feasible human poses and movements to constrain the tracking problem. Our model consists of several low-dimensional, activity-specific manifold embeddings that significantly restrict the search space for pose tracking. Using a particle filter, our method continuously explores multiple pose hypotheses in the embedding space. An efficient activity switching mechanism governs the distribution of particles across the activity-specific manifold embeddings. Selecting the pose hypothesis that best explains incoming sensor observations simultaneously allows us to classify the activity a person is performing and to estimate the full-body pose. We also derive an effective measure of predictive confidence that enables detecting anomalous movements. Experiments on a multi-person data set containing several activities show that our method can seamlessly detect activity switches and accurately reconstruct full-body poses from the data of only six wearable inertial sensors.
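The tracking loop the abstract describes — particles living in activity-specific low-dimensional embeddings, a switching mechanism redistributing them across activities, and weights computed against incoming sensor observations — can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the linear "lift" from latent space to pose space and the linear sensor model are hypothetical stand-ins for the learned manifold mappings and the inertial observation model, and all dimensions and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the learned components: one low-dimensional
# embedding per activity, a linear lift from latent space to full-body pose,
# and a linear pose-to-sensor model (the paper learns these mappings).
N_ACT, LATENT, POSE, OBS = 3, 2, 30, 6          # activities and dimensions
lift = rng.normal(size=(N_ACT, POSE, LATENT))   # per-activity latent -> pose
sensor = rng.normal(size=(OBS, POSE))           # pose -> sensor readings

def step_filter(acts, zs, obs, switch_prob=0.05, step=0.1, noise=0.5):
    """One predict/weight/resample cycle.
    acts: (n,) activity label per particle; zs: (n, LATENT) latent coords."""
    n = len(acts)
    # Predict: diffuse each particle in its activity's latent space; with
    # small probability, switch a particle to a random activity and re-draw
    # its latent coordinate (a crude activity-switching mechanism).
    switch = rng.random(n) < switch_prob
    acts = np.where(switch, rng.integers(0, N_ACT, n), acts)
    zs = np.where(switch[:, None],
                  rng.normal(size=zs.shape),
                  zs + step * rng.normal(size=zs.shape))
    # Weight: Gaussian likelihood of the sensor observation under the pose
    # hypothesis obtained by lifting each latent point to pose space.
    poses = np.einsum('npl,nl->np', lift[acts], zs)
    logw = -0.5 * np.sum((poses @ sensor.T - obs) ** 2, axis=1) / noise**2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample particles proportionally to their weights.
    idx = rng.choice(n, size=n, p=w)
    return acts[idx], zs[idx], w

# Toy run: the true motion lies on activity 1's manifold, so the particle
# population should migrate to that embedding and localize the latent pose.
n = 500
acts = rng.integers(0, N_ACT, n)
zs = rng.normal(size=(n, LATENT))
obs = (lift[1] @ np.array([0.3, -0.7])) @ sensor.T
for _ in range(30):
    acts, zs, w = step_filter(acts, zs, obs)
counts = np.bincount(acts, minlength=N_ACT)
```

In this sketch, selecting the dominant activity among the particles classifies the movement, while the weighted latent points yield the pose estimate — mirroring how a single hypothesis selection serves both tasks in the paper.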