Although large displays allow several users to work together and to move freely in a room, their associated interfaces are usually limited to contact devices that must generally be shared. This paper describes a novel interface called SHIVA (Several-Humans Interface with Vision and Audio) that allows several users to interact remotely with a very large display using both speech and gesture. The head and both hands of two users are tracked in real time by a stereo-vision-based system. From the positions of these body parts, the direction pointed at by each user is computed, and selection gestures performed with the other hand are recognized. The pointing gesture is fused with the n-best results from speech recognition, taking the application context into account. The system is tested on a chess game with two users playing on a very large display.
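The fusion step described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm; it is a hypothetical scoring scheme in which each candidate target from the speech recognizer's n-best list is weighted by the pointing estimate and filtered by the application context (here, the legal squares of the chess game). All function names, variables, and scores are assumptions for illustration.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# fuse a pointing-direction estimate with a speech n-best list,
# using the application context to filter candidates.

def fuse(pointing_scores, speech_nbest, legal_squares):
    """Pick the most likely target square from combined evidence.

    pointing_scores: dict square -> probability from gesture tracking
    speech_nbest:    list of (square, confidence) from the recognizer
    legal_squares:   set of squares valid in the current game state
    """
    best_square, best_score = None, 0.0
    for square, speech_conf in speech_nbest:
        if square not in legal_squares:  # application context prunes hypotheses
            continue
        # Simple product fusion of the two modalities' confidences.
        score = speech_conf * pointing_scores.get(square, 0.0)
        if score > best_score:
            best_square, best_score = square, score
    return best_square

# Example: the user points near e4 while the recognizer's top hypothesis
# is the acoustically similar "b4"; pointing disambiguates the command.
pointing = {"e4": 0.7, "d4": 0.2, "b4": 0.1}
nbest = [("b4", 0.5), ("e4", 0.4)]
print(fuse(pointing, nbest, {"e4", "d4", "b4"}))  # → e4
```

The design point this sketch captures is that neither modality alone is reliable: speech confuses similar-sounding square names, and pointing alone is imprecise at large-display distances, but their combination constrained by game context resolves the ambiguity.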