When do we interact multimodally?: cognitive load and multimodal communication patterns
Proceedings of the 6th international conference on Multimodal interfaces
We report a user experiment on multimodal interaction (speech, hand position, and hand shape) that examined two major relationships: between the level of cognitive load experienced by users and the resulting multimodal interaction patterns, and between the semantics of the information being conveyed and those patterns. We found that as cognitive load increases, users' multimodal productions tend to become semantically more complementary and less redundant across modalities. This finding validates cognitive load theory as a theoretical framework for understanding why particular kinds of multimodal productions occur. Moreover, the results indicate a significant relationship between the temporal multimodal integration pattern (seven patterns in this experiment) and the semantics of the command being issued by the user (four types of commands), shedding new light on previous research findings that assign a single temporal integration pattern to a given subject regardless of the communication taking place.