In this paper, we propose two new objective metrics, relative modality efficiency and multimodal synergy, that can provide valuable information and identify usability problems during the evaluation of multimodal systems. Relative modality efficiency, when compared with modality usage, can reveal suboptimal use of modalities due to poor interface design or information asymmetries. Multimodal synergy measures the added value of efficiently combining multiple input modalities, and can serve as a single measure of the quality of modality fusion and fission in a multimodal system. The proposed metrics are used to evaluate two multimodal systems that combine pen/speech and mouse/keyboard modalities, respectively. The results provide considerable insight into multimodal interface usability issues and demonstrate how multimodal systems should adapt to maximize modality synergy, resulting in efficient, natural, and intelligent multimodal interfaces.
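To make the comparison of relative modality efficiency against modality usage concrete, here is a minimal sketch. The paper's exact formulas are not reproduced here; this assumes efficiency of a modality is proportional to the inverse of its mean time per successful input, normalized across modalities, and that usage is each modality's share of observed inputs. All names and numbers below are hypothetical illustrations, not the authors' definitions.

```python
def relative_efficiency(mean_time_per_input):
    """Normalized inverse-time efficiency per modality (assumed definition)."""
    inverse = {m: 1.0 / t for m, t in mean_time_per_input.items()}
    total = sum(inverse.values())
    return {m: v / total for m, v in inverse.items()}

def usage_share(input_counts):
    """Fraction of all observed inputs made through each modality."""
    total = sum(input_counts.values())
    return {m: c / total for m, c in input_counts.items()}

# Made-up session data: speech is faster per input but used less often.
times = {"speech": 2.0, "pen": 4.0}   # seconds per successful input
counts = {"speech": 30, "pen": 70}    # observed inputs per session

eff = relative_efficiency(times)  # speech ≈ 0.67, pen ≈ 0.33
use = usage_share(counts)         # speech = 0.30, pen = 0.70
```

Under these assumed numbers, speech accounts for roughly two thirds of the relative efficiency but only 30% of usage; such a gap between efficiency and usage is the kind of signal the metric is meant to surface, e.g. poor discoverability of the faster modality.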