Despite reported recognition accuracy rates of 98%, speech recognition technologies have yet to be widely adopted by computer users. For those who must use speech-based solutions hands-free, such as individuals with physical impairments that interfere with the use of traditional input devices like the mouse, the considerable time required to complete basic navigation tasks presents a significant barrier to adoption. A previous study proposed several solutions for improving navigation efficiency. In the current study, a longitudinal experiment was conducted to investigate both the process by which users learn hands-free, speech-based navigation in the context of large-vocabulary continuous dictation tasks and the efficacy of the proposed solutions. Because initial interactions strongly influence whether users adopt speech-based solutions, the study focused on these critical first interactions of individuals with no prior experience using speech-based dictation. Our results confirm the efficacy of the previously proposed solutions while providing valuable insights into the strategies users employ when issuing speech-based navigation commands, as well as the design decisions that can influence these patterns.