In this paper I review current and past approaches to hand gesture recognition and comprehension in human-computer interaction. I point out properties of natural coverbal gestures in human communication and identify challenges for gesture comprehension systems in three areas. The first challenge is to derive the meaning of a gesture, given that its semantics is defined along three semiotic dimensions, each of which has to be addressed differently. The second challenge is the spatial composition of gestures in imagistic spaces. Finally, the third challenge, a technical one, is the development of an integrated processing model for speech and gesture.