Multimodal interfaces are designed with a focus on flexibility, although very few are currently capable of adapting to major sources of user, task, or environmental variation. The development of adaptive multimodal processing techniques will require empirical guidance from quantitative modeling of key aspects of individual differences, especially as users engage in different types of tasks in different usage contexts. In the present study, data were collected from fifteen 66- to 86-year-old healthy seniors as they interacted with a map-based flood management system using multimodal speech and pen input. A comprehensive analysis of multimodal integration patterns revealed that seniors were classifiable as either simultaneous or sequential integrators, as children and adults are. Seniors also demonstrated early predictability and a high degree of consistency in their dominant integration pattern, although greater individual differences in multimodal integration were generally evident in this population. Perhaps surprisingly, during sequential constructions seniors' intermodal lags were no longer in average or maximum duration than those of younger adults, although both of these groups had longer maximum lags than children. An analysis of seniors' performance did, however, reveal lengthy latencies before initiating a task, as well as high rates of self-talk and task-critical errors while completing spatial tasks, and all of these behaviors were magnified as task difficulty increased. Results of this research have implications for the design of adaptive processing strategies appropriate for applications aimed at seniors, especially for the development of temporal thresholds used during multimodal fusion. The long-term goal of this research is the design of high-performance multimodal systems that adapt to a full spectrum of diverse users, supporting tailored and robust future systems.
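The abstract's point about user-adaptive temporal thresholds can be made concrete with a minimal sketch. The code below is not the authors' implementation; it assumes a hypothetical fusion component that records the onset and offset times of paired speech and pen inputs, labels the user's dominant integration pattern (simultaneous when the two signals overlap, sequential when one follows the other after a lag), and widens the window used to group the two inputs into a single multimodal command for sequential integrators with long lags. All names and numeric defaults (IntegrationProfile, fusion_window_ms, the window constants) are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: user-adaptive temporal threshold for speech + pen fusion.
# Simultaneous integrators overlap the two modes; sequential integrators leave a
# measurable lag between them, so the fusion window must be wider for them.

SIMULTANEOUS_WINDOW_MS = 1000   # assumed default window for overlapping input
SEQUENTIAL_BASE_MS = 2000       # assumed base window for sequential integrators
LAG_MARGIN = 1.5                # safety factor over the user's observed maximum lag


@dataclass
class IntegrationProfile:
    """Running record of one user's observed intermodal lags, in milliseconds."""
    lags_ms: list = field(default_factory=list)

    def observe(self, speech_onset_ms: float, pen_onset_ms: float,
                speech_end_ms: float, pen_end_ms: float) -> None:
        # Lag is the gap between the end of the first signal and the start of
        # the second; overlapping signals yield a non-positive lag.
        if speech_onset_ms <= pen_onset_ms:
            lag = pen_onset_ms - speech_end_ms
        else:
            lag = speech_onset_ms - pen_end_ms
        self.lags_ms.append(lag)

    def dominant_pattern(self) -> str:
        # Label the user by whichever pattern they produce most often.
        sequential = sum(1 for lag in self.lags_ms if lag > 0)
        return "sequential" if sequential > len(self.lags_ms) / 2 else "simultaneous"

    def fusion_window_ms(self) -> float:
        """Temporal threshold for grouping speech and pen into one command."""
        if not self.lags_ms or self.dominant_pattern() == "simultaneous":
            return SIMULTANEOUS_WINDOW_MS
        positive = [lag for lag in self.lags_ms if lag > 0]
        # Scale the window to the user's own maximum observed lag, so that a
        # user with long lags is not split into two unimodal commands.
        return max(SEQUENTIAL_BASE_MS, LAG_MARGIN * max(positive))


if __name__ == "__main__":
    profile = IntegrationProfile()
    # Three constructions from a hypothetical sequential integrator:
    profile.observe(0, 1800, 1200, 2500)   # pen starts 600 ms after speech ends
    profile.observe(0, 2100, 1500, 2900)   # 600 ms lag
    profile.observe(0, 1300, 1100, 2000)   # 200 ms lag
    print(profile.dominant_pattern())      # -> sequential
    print(profile.fusion_window_ms())      # -> 2000
```

The design choice sketched here, keeping a per-user profile and deriving the fusion threshold from that user's own lag distribution, is one plausible way to act on the abstract's finding that integration patterns are predictable early and highly consistent within an individual.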