Integration and synchronization of input modes during multimodal human-computer interaction
Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems
QuickSet: multimodal interaction for distributed applications
MULTIMEDIA '97 Proceedings of the fifth ACM international conference on Multimedia
Single display groupware: a model for co-present collaboration
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
A unification-based parser for relational grammar
ACL '93 Proceedings of the 31st annual meeting on Association for Computational Linguistics
Multimodal interactive maps: designing for human performance
Human-Computer Interaction
Exploiting prosodic structuring of coverbal gesticulation
Proceedings of the 6th international conference on Multimodal interfaces
Proceedings of the working conference on Advanced visual interfaces
Proceedings of the 8th international conference on Multimodal interfaces
How pairs interact over a multimodal digital table
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Multimodal multiplayer tabletop gaming
Computers in Entertainment (CIE) - Interactive TV
An analytical framework for the evaluation of collaborative design around an interactive tabletop
Proceedings of the 11th International Conference of the NZ Chapter of the ACM Special Interest Group on Human-Computer Interaction
Speak up your mind: using speech to capture innovative ideas on interactive surfaces
Proceedings of the 10th Brazilian Symposium on Human Factors in Computing Systems and the 5th Latin American Conference on Human-Computer Interaction
Groups of people collaborating on a task often incorporate the objects in their shared environment into their discussion. With this comes physical reference to these 3-D objects, including gesture, gaze, haptics, and possibly other modalities, over and above the speech we commonly associate with human-human communication. From a technological perspective, this style of communication not only challenges researchers to create multimodal systems capable of integrating input from various modalities, but also to do so well enough that the system supports, but does not interfere with, the collaborators' primary goal: their own human-human interaction. This paper offers a first step towards building such multimodal systems for supporting face-to-face collaborative work by providing both qualitative and quantitative analyses of multiparty multimodal dialogues in a field setting.