Simultaneous Localization and Recognition of Dynamic Hand Gestures
WACV/MOTION '05 Proceedings of the IEEE Workshop on Motion and Video Computing, Volume 2
Real-time hand tracking using a mean shift embedded particle filter
Pattern Recognition
Multimodal human-computer interaction: A survey
Computer Vision and Image Understanding
BoostMap: An Embedding Method for Efficient Nearest Neighbor Retrieval
IEEE Transactions on Pattern Analysis and Machine Intelligence
Nearest neighbor search methods for handshape recognition
Proceedings of the 1st international conference on PErvasive Technologies Related to Assistive Environments
AMDO '08 Proceedings of the 5th international conference on Articulated Motion and Deformable Objects
A database-based framework for gesture recognition
Personal and Ubiquitous Computing
Vision-based infotainment user determination by hand recognition for driver assistance
IEEE Transactions on Intelligent Transportation Systems
Vision-based hand-gesture applications
Communications of the ACM
Multimodal human computer interaction: a survey
ICCV'05 Proceedings of the 2005 international conference on Computer Vision in Human-Computer Interaction
Accurate and efficient gesture spotting via pruning and subgesture reasoning
ICCV'05 Proceedings of the 2005 international conference on Computer Vision in Human-Computer Interaction
Experiments with computer vision methods for hand detection
Proceedings of the 4th International Conference on PErvasive Technologies Related to Assistive Environments
Scale space based grammar for hand detection
IWICPAS'06 Proceedings of the 2006 Advances in Machine Vision, Image Processing, and Pattern Analysis international conference on Intelligent Computing in Pattern Analysis/Synthesis
ADCONS'11 Proceedings of the 2011 international conference on Advanced Computing, Networking and Security
A method for hand detection using internal features and active boosting-based learning
Proceedings of the Fourth Symposium on Information and Communication Technology
In gesture and sign language video sequences, hand motion tends to be rapid, and hands frequently pass in front of each other or in front of the face. Hand location is therefore often ambiguous, and naive color-based hand tracking is insufficient. To improve tracking accuracy, some methods employ a prediction-update framework, but such methods require careful initialization of model parameters and tend to drift and lose track over extended sequences. This paper proposes a temporal filtering framework for hand tracking that can initialize and reset itself without human intervention. In each frame, simple features such as color and motion residue are exploited to identify multiple candidate hand locations; the temporal filter then uses the Viterbi algorithm to select among the candidates from frame to frame. The resulting tracking system can automatically identify video trajectories of unambiguous hand motion, and can detect frames where tracking becomes ambiguous because of occlusions or overlaps. Experiments on video sequences several hundred frames long demonstrate the system's ability to track hands robustly, to detect and handle tracking ambiguities, and to extract the trajectories of unambiguous hand motion.
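The abstract's core idea, selecting one hand candidate per frame with the Viterbi algorithm, can be sketched as below. This is a minimal illustration, not the paper's implementation: the appearance scores and the Gaussian motion-smoothness prior (parameter `sigma`) are assumptions standing in for the paper's color and motion-residue features.

```python
def viterbi_track(candidates, scores, sigma=20.0):
    """Select one candidate hand location per frame via the Viterbi algorithm.

    candidates: list over frames of [(x, y), ...] candidate locations.
    scores: matching list of per-candidate appearance scores (higher is better).
    Returns the index of the chosen candidate in each frame.
    """
    T = len(candidates)
    # best[t][j]: best cumulative score of a path ending at candidate j of frame t
    best = [list(scores[0])]
    back = [[0] * len(candidates[0])]
    for t in range(1, T):
        row, ptr = [], []
        for xj, yj in candidates[t]:
            # Transition cost: penalize large inter-frame jumps
            # (illustrative Gaussian motion prior, an assumption here).
            def trans(i):
                xi, yi = candidates[t - 1][i]
                d2 = (xj - xi) ** 2 + (yj - yi) ** 2
                return best[t - 1][i] - d2 / (2 * sigma ** 2)
            i_best = max(range(len(candidates[t - 1])), key=trans)
            row.append(trans(i_best) + scores[t][len(row)])
            ptr.append(i_best)
        best.append(row)
        back.append(ptr)
    # Backtrack the highest-scoring path from the last frame.
    j = max(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for t in range(T - 1, 0, -1):
        j = back[t][j]
        path.append(j)
    path.reverse()
    return path
```

With equal appearance scores, the smoothness prior alone picks the coherent trajectory: given candidates `[[(0, 0), (100, 100)], [(1, 1), (100, 0)], [(2, 2), (0, 100)]]`, the selected path follows the slowly moving candidate in every frame. Frames where the best and second-best paths score nearly the same are exactly where the paper's system would flag tracking as ambiguous.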