Eyesight and speech are two channels that humans naturally use to communicate with each other. However, both eye tracking and speech recognition technologies remain far from perfect. This work explored how to integrate two (or more) error-prone sources of information about a user's selection of objects in a visual interface. The implemented system combined a commercial speech recognition system with gaze tracking in order to improve recognition results. In addition, we employed a new measure of the rate of mutual disambiguation for the multimodal system and conducted an experimental evaluation.
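The integration described above can be sketched as a simple late-fusion scheme: each candidate object receives a score from the speech recognizer's n-best list and a score from recent gaze fixations, and the fused ranking can recover the correct referent even when the speech-only top hypothesis is wrong (mutual disambiguation). The function name, score ranges, and weighting below are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical late-fusion sketch: combine speech n-best scores with
# gaze-based salience to pick the most likely selected object.
# The 0.7/0.3 weighting is an arbitrary illustrative choice.

def fuse(speech_nbest, gaze_salience, speech_weight=0.7):
    """Rank candidate objects by a weighted sum of speech and gaze evidence.

    speech_nbest: dict of object name -> speech recognition score (0..1)
    gaze_salience: dict of object name -> fraction of recent fixation time
    """
    candidates = set(speech_nbest) | set(gaze_salience)
    scored = {
        obj: speech_weight * speech_nbest.get(obj, 0.0)
             + (1 - speech_weight) * gaze_salience.get(obj, 0.0)
        for obj in candidates
    }
    return max(scored, key=scored.get)

# Mutual disambiguation: the acoustically confusable pair "vase"/"base"
# is resolved by where the user was looking.
speech = {"vase": 0.45, "base": 0.40}   # speech alone would pick "vase"
gaze = {"base": 0.80, "vase": 0.10}     # user was fixating the base
print(fuse(speech, gaze))               # -> base
```

The mutual-disambiguation rate mentioned in the abstract can then be measured as the fraction of trials in which the fused decision is correct while the best unimodal (speech-only) hypothesis is not.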