Our research addresses the problem of error correction in speech user interfaces. Previous work hypothesized that switching modality could speed up interactive correction of recognition errors (so-called multimodal error correction). We present a user study that compares, on a dictation task, multimodal error correction with conventional interactive correction, such as speaking again, choosing from a list, and keyboard input. Results show that multimodal correction is faster than conventional correction without keyboard input, but slower than correction by typing for users with good typing skills. Furthermore, while users initially prefer speech, they learn to avoid ineffective correction modalities with experience. To extrapolate results from this user study we developed a performance model of multimodal interaction that predicts input speed including time needed for error correction. We apply the model to estimate the impact of recognition technology improvements on correction speeds and the influence of recognition accuracy and correction method on the productivity of dictation systems. Our model is a first step towards formalizing multimodal (recognition-based) interaction.
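The kind of performance model described above can be sketched as a simple throughput calculation: effective input speed is raw dictation speed discounted by the time spent correcting recognition errors. The function below is a minimal illustration of this idea, not the paper's actual model; all parameter values are hypothetical.

```python
def effective_wpm(dictation_wpm, word_error_rate, correction_time_s):
    """Effective throughput of a dictation system with interactive correction.

    dictation_wpm:     raw recognition/dictation speed (words per minute)
    word_error_rate:   fraction of words misrecognized (0..1)
    correction_time_s: average time to correct one error (seconds)
    """
    entry_time_per_word = 60.0 / dictation_wpm              # seconds to dictate one word
    correction_per_word = word_error_rate * correction_time_s  # expected correction overhead
    return 60.0 / (entry_time_per_word + correction_per_word)

# Hypothetical figures: 100 wpm dictation, 10% word error rate,
# 8 s to correct each error via some correction modality.
print(round(effective_wpm(100, 0.10, 8.0), 1))
```

Such a model makes the abstract's trade-off explicit: improving recognition accuracy (lowering the error rate) and speeding up the correction method both raise effective productivity, and the model lets one estimate which improvement matters more for a given operating point.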