Few systems combine Embodied Conversational Agents (ECAs) with multimodal input. This research models the behavior of adults and children during multimodal interaction with ECAs. In a Wizard-of-Oz setup, users were video-recorded while interacting with 2D ECAs in a game scenario using speech and pen as input modes. We found that frequent social cues and natural human-human syntax condition the verbal interaction of both groups with ECAs. Multimodality accounted for 21% of inputs and was used to integrate conversational and social aspects (via speech) into task-oriented actions (via pen). Closer examination of the temporal and semantic integration of modalities showed that speech and gesture most often overlapped and produced complementary or redundant messages; children also tended to produce concurrent multimodal inputs as a way of doing several things at once. Design implications of these results for multimodal bidirectional ECAs and game systems are discussed.