Multimodal floor control shift detection
Proceedings of the 2009 International Conference on Multimodal Interfaces
The participant who controls the floor in human-to-human communication bears the burden of moving the communication process along. Control of the floor can change through a number of mechanisms, including interruptions and delegation of the floor. This paper investigates floor control in multiparty meetings that are recorded on both audio and video; hence, we are able to analyze patterns not only in speech (e.g., discourse markers) but also in visual cues (e.g., eye gaze exchanges) that are commonly involved in floor control changes. Identifying who controls the floor provides an important focus for information retrieval and summarization of meetings. Moreover, without knowing who controls the floor, it is impossible to identify important events such as challenges for the floor. In this paper, we analyze multimodal cues related to floor control in two different meetings, each involving five participants.