This work explores how to use gaze and speech commands simultaneously to select an object on the screen. Multimodal systems have long been a key means of reducing the recognition errors of individual components, but multimodal integration introduces errors of its own. The present study classifies these multimodal errors, analyzes their causes, and proposes solutions for eliminating them. The goal is to gain insight into multimodal integration errors and to develop an error-self-recoverable multimodal architecture, so that error-prone recognition technologies can perform more stably and robustly within a multimodal system.
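The core idea of letting one modality recover another's recognition errors can be sketched as follows. This is a minimal illustrative fusion rule, not the paper's actual architecture; the object names, confidence scores, and floor value are all assumptions made for the example.

```python
# Hypothetical sketch of mutual disambiguation between gaze and speech.
# Each modality contributes an n-best list of candidate objects with
# confidence scores; the fused score is the product of the per-modality
# scores, with a small floor so a single-modality miss is recoverable.

def fuse(gaze_hypotheses, speech_hypotheses, floor=0.05):
    """Combine per-modality n-best lists (object_id -> confidence).

    Returns the best object and the full joint-score table.
    """
    objects = set(gaze_hypotheses) | set(speech_hypotheses)
    joint = {
        obj: gaze_hypotheses.get(obj, floor) * speech_hypotheses.get(obj, floor)
        for obj in objects
    }
    return max(joint, key=joint.get), joint

# Illustrative scenario: gaze fixates near two icons, while the speech
# recognizer mishears the command and ranks the wrong object first.
gaze = {"folder_icon": 0.6, "trash_icon": 0.4}
speech = {"folder_icon": 0.3, "printer_icon": 0.7}

best, scores = fuse(gaze, speech)
# folder: 0.6*0.3 = 0.18; printer: 0.05*0.7 = 0.035; trash: 0.4*0.05 = 0.02
# -> "folder_icon" wins despite speech ranking "printer_icon" first.
```

The design choice here is that agreement between modalities outweighs a high score from either modality alone, which is what allows the fused system to correct an individual recognizer's error.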