A method for the extraction of mid-level semantics from sign language videos is proposed, employing high-level domain knowledge. The semantics concern the labelling of the depicted head and hands, as well as of occlusion events, which are essential for interpretation and for higher-level semantic indexing. A Bayesian network is employed to probabilistically bridge the gap between the high-level knowledge about valid spatiotemporal configurations of the human body and the low-level feature extractor. The approach is applied here to sign language videos, but it can be generalised to any case where semantically rich information can be derived from gesture.
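The labelling step described above can be illustrated with a minimal sketch: given the centroid of a detected skin-coloured blob, a posterior over the labels head / left hand / right hand is computed by combining assumed Gaussian likelihoods over normalised image coordinates. All names, means, and variances below are illustrative assumptions, not the paper's actual network structure or parameters.

```python
# Hypothetical sketch of probabilistic blob labelling (head / left hand /
# right hand). The Gaussian position models below are assumptions standing
# in for the high-level body-configuration knowledge in the Bayesian network.
import math

# Assumed position models in normalised image coordinates (x, y in [0, 1]):
# the head tends to sit high and centred, the hands lower and to the sides.
PARAMS = {
    "head":       {"mean": (0.50, 0.20), "std": (0.15, 0.10)},
    "left_hand":  {"mean": (0.30, 0.60), "std": (0.20, 0.20)},
    "right_hand": {"mean": (0.70, 0.60), "std": (0.20, 0.20)},
}

def gaussian(v, mean, std):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((v - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def label_posterior(x, y):
    """Posterior P(label | blob centroid), assuming a uniform prior over
    labels and independent Gaussian likelihoods per coordinate."""
    scores = {
        lab: gaussian(x, p["mean"][0], p["std"][0]) * gaussian(y, p["mean"][1], p["std"][1])
        for lab, p in PARAMS.items()
    }
    total = sum(scores.values())
    return {lab: s / total for lab, s in scores.items()}

if __name__ == "__main__":
    # A blob near the top centre of the frame is most plausibly the head.
    posterior = label_posterior(0.52, 0.22)
    print(max(posterior, key=posterior.get))  # prints "head"
```

In the actual approach the Bayesian network would additionally condition on temporal context and occlusion events, so that, for example, two merged blobs can still be assigned consistent labels; this sketch only shows the static position cue.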