Despite recent advances in 3-D motion capture, the problem of simultaneously tracking human posture and position in an unconstrained environment remains open. Optical systems provide both types of information, but are confined to a restricted capture area. Inertial sensing alleviates this restriction, but at the expense of capturing only relative (postural), not absolute (positional), information. In this paper, we propose an algorithm that combines the complementary strengths of these systems to track both position and posture in challenging environments. Offline, we combine an optical (Kinect) and an inertial (Orient-4) sensing platform to learn a mapping from posture variations to translations, which we encode as a translation manifold. Online, the optical source is removed, and the learned mapping is used to infer positions from the postures computed by the inertial sensors. We first evaluate our approach in simulation, on motion sequences with ground-truth positions for error estimation. The method is then deployed on physical sensing platforms to track human subjects. The proposed algorithm yields a lower average cumulative error than comparable position tracking methods, such as double integration of accelerometer data, on both simulated and real sensory data, and across a variety of motions and capture settings.
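To make the offline/online split concrete, the following Python sketch illustrates one way such a pipeline could be structured. It is not the authors' code: the abstract does not specify how the translation manifold is encoded, so k-nearest-neighbour regression stands in for it here, and all function and array names (fit_translation_model, track_position, postures, kinect_positions) are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# --- Offline phase: optical and inertial sources available ---
# postures: (T, D) array of per-frame posture features from the inertial suit
# kinect_positions: (T, 3) array of root positions from the optical (Kinect) source
def fit_translation_model(postures, kinect_positions, k=5):
    # Per-frame translations (position deltas) are the regression targets;
    # the delta from frame t to t+1 is paired with the posture at frame t.
    deltas = np.diff(kinect_positions, axis=0)          # (T-1, 3)
    model = KNeighborsRegressor(n_neighbors=k)          # stand-in for the manifold
    model.fit(postures[:-1], deltas)
    return model

# --- Online phase: optical source removed, postures only ---
def track_position(model, posture_stream, start=np.zeros(3)):
    position = start.astype(float).copy()
    trajectory = [position.copy()]
    for posture in posture_stream:
        # Infer the per-frame translation from the current posture and accumulate.
        position += model.predict(posture[None, :])[0]
        trajectory.append(position.copy())
    return np.array(trajectory)
```

One motivation for regressing per-frame translations, rather than double-integrating accelerometer data as in the baseline the abstract compares against, is that prediction errors accumulate additively per step instead of being amplified twice through integration of biased acceleration estimates.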