Integration and synchronization of input modes during multimodal human-computer interaction. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems.
Toward a theory of organized multimodal integration patterns during human-computer interaction. Proceedings of the 5th International Conference on Multimodal Interfaces.
When do we interact multimodally? Cognitive load and multimodal communication patterns. Proceedings of the 6th International Conference on Multimodal Interfaces.
Multimodal fusion: a new hybrid strategy for dialogue systems. Proceedings of the 8th International Conference on Multimodal Interfaces.
Multimodal interactive maps: designing for human performance. Human-Computer Interaction.
This is a substantial extension of a previous paper presented at LREC 2006 [6]. It describes the motivation, collection and format of the MIMUS corpus, as well as an in-depth, issue-focused analysis of the data. MIMUS [8] is the result of multimodal WoZ experiments conducted at the University of Seville as part of the TALK project. The main objective of the MIMUS corpus was to gather information about different users and their performance with, preferences regarding and usage of a multimodal, multilingual natural dialogue system in the Smart Home scenario in Spanish. The focus group is composed of wheelchair-bound users, chosen because of their special motivation to use this kind of technology and their specific needs. Throughout this article, the WoZ platform, experiments, methodology, annotation schemes and tools, and all relevant data will be discussed, along with the results of the in-depth analysis of these data. The corpus comprises a set of three related experiments. Due to the limited scope of this article, only some results from the first two experiments (1A and 1B) will be discussed. The article focuses on subjects' preferences, multimodal behavioural patterns and willingness to use this kind of technology.