Techniques for information fusion are at the heart of multimodal system design. To develop new user-adaptive approaches to multimodal fusion, the present research investigated the stability and underlying causes of the major individual differences that have been documented between users in their multimodal integration patterns. Longitudinal data were collected from 25 adults as they interacted with a map system over six weeks. Analyses of 1,100 multimodal constructions revealed that every participant had a dominant integration pattern, either simultaneous or sequential, which was 95-96% consistent and remained stable over time. In addition, coherent behavioral and linguistic differences were identified between these two groups. Whereas performance speed was comparable, sequential integrators made only half as many errors and excelled during new or complex tasks. Sequential integrators also articulated more precisely (e.g., with fewer disfluencies), although their speech rate was no slower. Finally, sequential integrators more often adopted terse, direct command-style language, with a smaller and less varied vocabulary, which appeared focused on achieving error-free communication. These distinct interaction patterns are interpreted as deriving from fundamental differences in reflective-impulsive cognitive style. Implications of these findings are discussed for the design of adaptive multimodal systems with substantially improved performance characteristics.