A State-Based Approach to the Representation and Recognition of Gesture
IEEE Transactions on Pattern Analysis and Machine Intelligence
An HMM-Based Threshold Model Approach for Gesture Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
Hidden Markov models for modeling and recognizing gesture under variation
Hidden Markov models
Prosody in Speech Understanding Systems
Designing a human-centered, multimodal GIS interface to support emergency management
Proceedings of the 10th ACM international symposium on Advances in geographic information systems
Visual Speech: A Physiological or Behavioural Biometric?
AVBPA '01 Proceedings of the Third International Conference on Audio- and Video-Based Biometric Person Authentication
Gesture Recognition of the Upper Limbs - From Signal to Symbol
Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction
Velocity Profile Based Recognition of Dynamic Gestures with Discrete Hidden Markov Models
Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction
Video-Based Sign Language Recognition Using Hidden Markov Models
Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction
High Performance Real-Time Gesture Recognition Using Hidden Markov Models
Proceedings of the International Gesture Workshop on Gesture and Sign Language in Human-Computer Interaction
Toward Natural Gesture/Speech Control of a Large Display
EHCI '01 Proceedings of the 8th IFIP International Conference on Engineering for Human-Computer Interaction
Reliable Tracking of Human Arm Dynamics by Multiple Cue Integration and Constraint Fusion
CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
FG '00 Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition 2000
Multimodal Interaction During Multiparty Dialogues: Initial Results
ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
Hand Gesture Symmetric Behavior Detection and Analysis in Natural Conversation
ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
A Real-Time Framework for Natural Multimodal Interaction with Large Screen Displays
ICMI '02 Proceedings of the 4th IEEE International Conference on Multimodal Interfaces
Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality
Proceedings of the 5th international conference on Multimodal interfaces
Toward a theory of organized multimodal integration patterns during human-computer interaction
Proceedings of the 5th international conference on Multimodal interfaces
Modeling prosodic differences for speaker and language recognition
Improving continuous gesture recognition with spoken prosody
CVPR'03 Proceedings of the 2003 IEEE computer society conference on Computer vision and pattern recognition
Exploratory study of lexical patterns in multimodal cues
MMUI '05 Proceedings of the 2005 NICTA-HCSNet Multimodal User Interaction Workshop - Volume 57
Potential speech features for cognitive load measurement
OZCHI '07 Proceedings of the 19th Australasian conference on Computer-Human Interaction: Entertaining User Interfaces
Using language complexity to measure cognitive load for adaptive interaction design
Proceedings of the 15th international conference on Intelligent user interfaces
Exploiting speech-gesture correlation in multimodal interaction
HCI'07 Proceedings of the 12th international conference on Human-computer interaction: intelligent multimodal interaction environments
Towards automatic cognitive load measurement from speech analysis
HCI'07 Proceedings of the 12th international conference on Human-computer interaction: interaction design and usability
Integrating semantics into multimodal interaction patterns
MLMI'07 Proceedings of the 4th international conference on Machine learning for multimodal interaction
Gesture interaction in cooperation scenarios
CRIWG'09 Proceedings of the 15th international conference on Groupware: design, implementation, and use
Improving gestural communication in virtual characters
AMDO'12 Proceedings of the 7th international conference on Articulated Motion and Deformable Objects
Multimodal behavior and interaction as indicators of cognitive load
ACM Transactions on Interactive Intelligent Systems (TiiS) - Special issue on highlights of the decade in interactive intelligent systems
Context-based conversational hand gesture classification in narrative interaction
Proceedings of the 15th ACM on International conference on multimodal interaction
Gesture vs. gesticulation: a test protocol
HCI'13 Proceedings of the 15th international conference on Human-Computer Interaction: interaction modalities and techniques - Volume Part IV
Although gesture recognition has been studied extensively, the communicative, affective, and biometric utility of natural gesticulation remains relatively unexplored, largely because spontaneous gestures are difficult to model. While lexical information in speech provides additional cues for disambiguating gestures, it does not cover the rich paralinguistic domain. This paper offers initial findings, drawn from a large corpus of natural monologues, on the prosodic structuring between frequent beat-like strokes and concurrent speech. Using a set of audio-visual features in an HMM-based formulation, we improve the discrimination between visually similar gestures whose articulatory strokes serve different communicative functions. The analysis is based on the temporal alignment of detected vocal perturbations with the concurrent hand movement. As a supplementary result, we show that recognized articulatory strokes can be used to quantify gesturing behavior.
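The HMM-based formulation mentioned above can be illustrated with a minimal sketch: each gesture class gets its own discrete HMM, and an observation sequence of quantized audio-visual features is assigned to the class whose model yields the highest forward-algorithm likelihood. All model names, parameters, and the binary feature alphabet below are invented for illustration and are not taken from the paper.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm. Probabilities are kept in
    linear space for brevity; real implementations should scale or
    work in log-space to avoid underflow on long sequences."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[j] * trans[j][i] for j in range(n)) * emit[i][o]
                 for i in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Pick the gesture class whose HMM assigns the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two toy 2-state HMMs over a binary feature (e.g. "a vocal perturbation
# co-occurs with a hand-motion peak" vs. not). Each model is a tuple of
# (start probabilities, transition matrix, emission matrix); the numbers
# are hypothetical, not trained parameters.
models = {
    "beat":    ([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]], [[0.9, 0.1], [0.2, 0.8]]),
    "deictic": ([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], [[0.3, 0.7], [0.6, 0.4]]),
}
obs = [0, 0, 1, 0, 0]  # a short quantized feature sequence
print(classify(obs, models))  # → beat
```

In practice the observations would be real-valued audio-visual feature vectors (so continuous-density HMMs or vector quantization would replace the discrete emission tables), and the model parameters would be estimated from labeled data with Baum-Welch rather than set by hand.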