We propose a method for full-body human pose tracking from the measurements of wearable inertial sensors. Since the data provided by such sensors are sparse, noisy, and often ambiguous, we use a compound prior model of feasible human poses to constrain the tracking problem. Our model consists of several low-dimensional, activity-specific motion models and an efficient, sampling-based activity switching mechanism; we further restrict the search space for pose tracking by means of manifold learning. Combined with the portability of wearable sensors, our method allows us to track full-body human motion in unconstrained environments. In fact, we are able to simultaneously classify the activity a person is performing and estimate the full-body pose. Experiments on movement sequences containing different activities show that our method seamlessly detects activity switches and precisely reconstructs full-body pose from the data of only six wearable inertial sensors.
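The combination of a discrete activity label with a continuous low-dimensional pose state is naturally handled by a mixed-state particle filter. The sketch below illustrates the general idea only: each particle carries an activity label and a latent pose; particles occasionally propose an activity switch, are propagated by activity-specific dynamics, and are weighted by how well a sensor observation matches an activity-specific decoder. All names, the linear toy models, and the parameter values are illustrative assumptions, not the paper's actual learned models.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PART, D_LAT, D_OBS = 200, 2, 6   # particles, latent pose dim, sensor dim
P_SWITCH = 0.05                    # prob. of proposing an activity switch

def rot(theta):
    """2-D rotation matrix used as toy latent dynamics."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Two toy "activities": activity-specific latent dynamics A and a linear
# decoder C mapping latent pose to the 6 sensor channels (both hypothetical,
# standing in for models learned from motion-capture data).
MODELS = {
    0: {"A": rot(0.3), "C": rng.standard_normal((D_OBS, D_LAT))},
    1: {"A": rot(-0.3), "C": rng.standard_normal((D_OBS, D_LAT))},
}

def step(particles, acts, obs, noise=0.5):
    """One predict/update/resample cycle of the mixed-state filter."""
    # 1) Sampling-based activity switching on the discrete state.
    flip = rng.random(N_PART) < P_SWITCH
    acts = np.where(flip, 1 - acts, acts)
    # 2) Propagate latent pose with the activity-specific dynamics.
    new = np.empty_like(particles)
    for a, m in MODELS.items():
        idx = acts == a
        new[idx] = particles[idx] @ m["A"].T \
            + 0.1 * rng.standard_normal((idx.sum(), D_LAT))
    # 3) Weight by the likelihood of the sparse sensor observation
    #    (log-domain shift avoids underflow when all errors are large).
    sq = np.empty(N_PART)
    for a, m in MODELS.items():
        idx = acts == a
        err = obs - new[idx] @ m["C"].T
        sq[idx] = (err ** 2).sum(axis=1)
    w = np.exp(-(sq - sq.min()) / (2.0 * noise ** 2))
    w /= w.sum()
    # 4) Resample proportionally to the weights.
    keep = rng.choice(N_PART, size=N_PART, p=w)
    return new[keep], acts[keep]

# Usage: a synthetic sensor stream generated from activity 0. The particle
# cloud should concentrate on the generating activity.
x = np.array([2.0, 0.0])
particles = rng.standard_normal((N_PART, D_LAT))
acts = rng.integers(0, 2, N_PART)
for _ in range(30):
    x = MODELS[0]["A"] @ x
    obs = MODELS[0]["C"] @ x + 0.1 * rng.standard_normal(D_OBS)
    particles, acts = step(particles, acts, obs)

print("classified activity:", np.bincount(acts, minlength=2).argmax())
```

The majority activity label among the surviving particles serves as the activity classification, while the particle cloud itself approximates the posterior over the latent pose, mirroring the paper's simultaneous classify-and-estimate behavior.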