Conversation-and-Control
Controlling graphical user interfaces (GUIs) by speech is slow, but it is valuable for people whose disabilities limit their use of mouse and keyboard. We present conversation-and-control, a new approach to using speech as an input modality for GUIs that enables direct manipulation of widget functions through spoken commands. The approach is based on a command language that provides a unique command for each specific widget function. To manage the interaction, we propose a mixed-initiative dialog model that can be generated from widget properties. By using heuristics to infer the meaning of a recognition result and asking clarification questions when needed, our approach avoids outright rejection of uncertain recognition results. We hypothesized that conversation-and-control allows for shorter task completion times than conventional command-and-control approaches because it reduces the average number of commands required. The results of a user experiment, which we present and discuss, indicate that our approach reduces task completion time by 16.8%.
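The interaction loop described above — match a recognition result against widget-derived commands, execute when the match is unique, and ask a clarification question when it is ambiguous — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the widget names, commands, and the word-subset matching heuristic are all hypothetical stand-ins.

```python
from dataclasses import dataclass, field

@dataclass
class Widget:
    name: str
    commands: list = field(default_factory=list)  # spoken commands for this widget's functions

# Hypothetical widgets; in the paper's approach, commands are derived from widget properties.
WIDGETS = [
    Widget("font size", ["increase font size", "decrease font size"]),
    Widget("bold", ["toggle bold"]),
]

def match_commands(utterance: str):
    """Heuristic inference: a command matches if it contains every word of the
    (possibly partial or noisy) recognition result."""
    words = set(utterance.lower().split())
    return [cmd for w in WIDGETS for cmd in w.commands
            if words <= set(cmd.split())]

def interpret(utterance: str) -> str:
    hits = match_commands(utterance)
    if len(hits) == 1:
        return f"execute: {hits[0]}"          # unique match: manipulate the widget directly
    if len(hits) > 1:
        options = " or ".join(hits)
        return f"clarify: did you mean {options}?"  # mixed initiative: system asks back
    return "reject: no matching command"       # only reject when nothing is inferable
```

For example, `interpret("font size")` matches two commands and triggers a clarification question instead of a rejection, which is the mechanism the approach uses to cut down the average number of commands per task.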