This paper evaluates the performance of a multimodal interface under exerted conditions in a natural field setting. The subjects in the present study engaged in a strenuous activity while multimodally performing map-based tasks on handheld computing devices. This activity made the users breathe heavily and become fatigued over the course of the study. We found that the performance of both the speech and gesture recognizers degraded as a function of exertion, while the overall multimodal success rate remained stable. This stability is accounted for by the mutual disambiguation of modalities, which increases significantly with exertion. The system performed better for subjects with a greater level of physical fitness, as measured by their running speed: their multimodal performance was more stable, and their speech and gesture recognition degraded later than that of less fit subjects. The findings presented in this paper have significant implications for the design of multimodal interfaces targeted at highly mobile and exerted users in field environments.
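The mutual-disambiguation effect described above can be illustrated with a minimal sketch: each recognizer produces a scored n-best list, and fusion keeps only semantically compatible speech/gesture pairs, so a cross-modal constraint can override an erroneous top-1 hypothesis in one modality. The hypothesis lists, scores, and compatibility table below are invented for illustration and do not come from the paper's actual system.

```python
from itertools import product

# Hypothetical n-best lists; each hypothesis is (interpretation, recognizer score).
# Under exertion, the speech recognizer's top-1 result ("pan map") is an error.
speech_nbest = [("pan map", 0.42), ("place marker", 0.38), ("zoom in", 0.20)]
gesture_nbest = [("point", 0.55), ("stroke", 0.45)]

# Hypothetical compatibility table: which spoken commands make semantic sense
# with which gesture types (a stand-in for a real multimodal integration step).
compatible = {
    ("place marker", "point"),
    ("zoom in", "point"),
    ("pan map", "stroke"),
}

def fuse(speech, gesture):
    """Rank joint hypotheses by combined score, discarding incompatible pairs."""
    joint = [
        (s, g, s_score * g_score)
        for (s, s_score), (g, g_score) in product(speech, gesture)
        if (s, g) in compatible
    ]
    joint.sort(key=lambda t: t[2], reverse=True)
    return joint

best = fuse(speech_nbest, gesture_nbest)[0]
# The fused top hypothesis is ("place marker", "point"): the deictic gesture
# rules out "pan map", recovering the correct command from rank 2.
```

This is why overall multimodal success can stay stable even as each individual recognizer degrades: an error in one modality is often incompatible with every plausible hypothesis from the other, and fusion falls through to a lower-ranked but jointly consistent interpretation.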