In this paper we present the results of a trial of a platform that supports multimodal applications on 3G cellular phones. A novel aspect of our work is multimodal interaction that supports both natural-language speech and text data entry. The focus of our research is understanding how users behave in response to errors encountered whilst interacting with mobile applications that provide a multimodal user interface. We observed an increased propensity to alternate between UI modes when users encountered errors, but also increased task termination rates, even though an alternative mode was available to complete the task. Given their multiplicity of user interfaces, multimodal solutions offer a novel way to improve user response to errors. By analyzing how users respond to errors in multimodal applications, we aim to develop a capability for error self-recovery that reduces the termination rate upon error. This paper presents the components of our platform and the key results of our trial.
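The error self-recovery idea described above, switching the user to the alternative input mode after repeated recognition errors rather than letting the task end in termination, can be illustrated with a minimal sketch. The class name, mode labels, and error threshold below are hypothetical and not part of the platform described in the paper; they simply show one way such a fallback policy could be structured.

```python
class ModalityFallback:
    """Hypothetical sketch: after repeated recognition errors in one
    input mode, suggest switching to the alternative mode instead of
    letting the user abandon (terminate) the task."""

    def __init__(self, modes=("speech", "text"), max_errors=2):
        self.modes = list(modes)      # available UI modes, speech first
        self.active = 0               # index of the mode currently in use
        self.errors = 0               # consecutive errors in the active mode
        self.max_errors = max_errors  # threshold before suggesting a switch

    @property
    def mode(self):
        return self.modes[self.active]

    def on_input(self, recognized):
        """Process one input event and return the next dialog action."""
        if recognized:
            self.errors = 0
            return "proceed"
        self.errors += 1
        if self.errors >= self.max_errors and len(self.modes) > 1:
            # Error self-recovery: offer the alternative mode rather
            # than leaving the user to retry (and possibly give up).
            self.active = (self.active + 1) % len(self.modes)
            self.errors = 0
            return f"switch_to_{self.mode}"
        return "retry"
```

For example, two consecutive speech recognition failures would yield `"retry"` and then `"switch_to_text"`, at which point the interface would prompt text entry, mirroring the mode-alternation behavior users exhibited in the trial.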