Visual display, pointing, and natural language: the power of multimodal interaction
AVI '98 Proceedings of the working conference on Advanced visual interfaces
This paper empirically investigates how humans refer to objects in space when interacting with a multimodal system that understands written natural language and pointing with the mouse. We verified that user expertise plays an important role in the use of multimodal systems: experienced users produced 84% multimodal inputs, whereas inexperienced users produced only 30%. Moreover, experienced users were able to exploit multimodality efficiently, shortening the written input and transferring part of the referential meaning to the pointing gesture. The results also showed the importance of the system layout: when very short labels (one character) are available, users strongly adopt a redundant reference strategy, i.e. they refer to the object linguistically and point at it as well. Based on these findings, some guidelines for future multimodal systems are suggested.
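To make the reference strategies discussed above concrete, the following is a minimal sketch (not the paper's implementation) of how a multimodal input handler might resolve a referent from a typed command plus an optional mouse-pointing act. All names (Pointing, resolve_reference, objects_by_label) are hypothetical and only illustrate the three strategies mentioned in the abstract: purely linguistic (a label in the text), purely deictic (a demonstrative plus pointing), and redundant (label and pointing together).

```python
# Hypothetical sketch of deictic reference resolution in a multimodal input handler.
import re
from dataclasses import dataclass
from typing import Optional

DEICTIC_WORDS = {"this", "that", "it", "here"}

@dataclass
class Pointing:
    """A mouse click captured alongside the written command."""
    target_id: str      # object under the cursor at click time
    timestamp: float

def resolve_reference(text: str,
                      pointing: Optional[Pointing],
                      objects_by_label: dict[str, str]) -> Optional[str]:
    """Return the id of the referenced object, or None if unresolved."""
    tokens = re.findall(r"\w+", text.lower())

    # 1. Linguistic reference: a known (possibly one-character) label in the text.
    labelled = [objects_by_label[t] for t in tokens if t in objects_by_label]

    # 2. Deictic reference: a demonstrative word plus a pointing act.
    deictic = any(t in DEICTIC_WORDS for t in tokens) and pointing is not None

    if labelled and pointing is not None:
        # 3. Redundant reference: label and pointing should agree; accept only
        #    when the pointed-at object confirms the label (otherwise the system
        #    would ask the user to disambiguate, not shown here).
        return pointing.target_id if pointing.target_id == labelled[0] else None
    if labelled:
        return labelled[0]
    if deictic:
        return pointing.target_id
    return None

# Example: "delete A" typed while clicking the object labelled "A"
# resolves redundantly to the same object id.
print(resolve_reference("delete A", Pointing("obj-17", 0.0), {"a": "obj-17"}))
```

The split between the three branches mirrors the behaviours reported above: experienced users tend to shorten the text and rely on the pointing branch, while very short labels encourage the redundant branch.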