Nowadays there are many situations in which people need to interact with a personal computer without being able to use traditional input devices such as a keyboard or a mouse. In recent years, various alternatives to these classical input devices, as well as novel interaction paradigms, have been proposed. In particular, multimodal interaction has been proposed to overcome the limitations of each input channel taken alone. In this paper we propose a multimodal system based on the integration of speech- and gaze-based input for interaction with a real desktop environment. A grammar is generated in real time to restrict the vocal vocabulary according to the fixated screen area. A disambiguation method is applied to inherently ambiguous vocal commands, and the tests performed show its effectiveness.
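The abstract does not give implementation details, but the two mechanisms it describes can be illustrated with a minimal Python sketch: restricting the speech grammar to the widgets near the current gaze fixation, and resolving an ambiguous command (e.g. "open this") to the fixated target. All names here (Widget, grammar_for_fixation, disambiguate, the 150-pixel radius) are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Widget:
    name: str       # speakable label, e.g. "recycle bin"
    x: float        # widget centre, screen coordinates
    y: float
    actions: tuple  # verbs the widget supports, e.g. ("open", "delete")

def grammar_for_fixation(widgets, gaze_x, gaze_y, radius=150.0):
    """Restrict the vocal vocabulary to widgets near the fixation point."""
    nearby = [w for w in widgets
              if hypot(w.x - gaze_x, w.y - gaze_y) <= radius]
    # One speakable phrase per (verb, widget) pair, e.g. "open recycle bin".
    return {f"{verb} {w.name}": w for w in nearby for verb in w.actions}

def disambiguate(utterance, widgets, gaze_x, gaze_y):
    """Resolve an ambiguous command like 'open this' via gaze proximity."""
    verb = utterance.split()[0]
    candidates = [w for w in widgets if verb in w.actions]
    if not candidates:
        return None
    # Choose the candidate widget closest to the current fixation.
    return min(candidates, key=lambda w: hypot(w.x - gaze_x, w.y - gaze_y))

# Example: with the fixation near both icons, "open this" resolves to the
# closer one (the recycle bin).
widgets = [Widget("recycle bin", 120, 640, ("open", "empty")),
           Widget("report.doc", 150, 660, ("open", "delete"))]
grammar = grammar_for_fixation(widgets, gaze_x=130, gaze_y=650)
target = disambiguate("open this", widgets, gaze_x=130, gaze_y=650)
```

In this reading, regenerating the grammar at each fixation keeps the recognizer's search space small, and gaze proximity supplies the referent that the spoken deictic command leaves unspecified.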