Human motion can be understood on many levels. The most basic level is the notion that humans are collections of things with predictable visual appearance. Next is the notion that humans exist in a physical universe; as a consequence, a large part of human motion can be modeled and predicted with the laws of physics. Finally, there is the notion that humans utilize muscles to actively shape purposeful motion. We employ a recursive framework for the real-time tracking of human motion that enables pixel-level, probabilistic processes to take advantage of the contextual knowledge encoded in higher-level models, including models of dynamic constraints on human motion. We show that models of purposeful action arise naturally from this framework and, further, that those models can be used to improve the perception of human motion. Results demonstrate the automatic discovery of features in this new feature space.
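The recursive framework described above can be illustrated with a minimal sketch: a Kalman filter in which a constant-velocity dynamic model (a stand-in for the higher-level physical model) constrains noisy pixel-level position measurements. This is not the paper's actual implementation; the function names, state layout, and noise parameters are illustrative assumptions.

```python
import numpy as np

def make_tracker(dt=1.0, q=1e-2, r=1.0):
    """Hypothetical recursive tracker: a constant-velocity Kalman filter.

    The dynamic model F plays the role of the higher-level physical prior;
    the update step fuses it with pixel-level measurements. Parameter
    values (q, r) are illustrative, not from the paper.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)  # constant-velocity dynamics
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # observe position only
    Q = q * np.eye(4)                            # process noise covariance
    R = r * np.eye(2)                            # measurement noise covariance
    x = np.zeros(4)                              # state: [px, py, vx, vy]
    P = np.eye(4)                                # state covariance

    def step(z):
        nonlocal x, P
        # Predict: propagate the dynamic (physical) model forward in time.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: fuse the pixel-level measurement z = [px, py].
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x.copy()

    return step
```

Feeding the tracker a sequence of noisy positions yields state estimates whose velocity components are inferred from the dynamic model rather than measured directly, which is the sense in which higher-level models add contextual knowledge to low-level observations.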