Eye gaze is an important means of controlling interaction and coordinating the participants' turns smoothly. We studied how eye gaze correlates with spoken interaction, focusing in particular on the combined effect of the speech signal and gazing in predicting turn-taking possibilities. It is well known that mutual gaze is important in the coordination of turn taking in two-party dialogues, and in this article we investigate whether this also holds for three-party conversations, where different features may be used for managing turn taking than in two-party dialogues. We collected casual conversational data and used an eye tracker to systematically observe a participant's gaze during the interactions. By studying the combined effect of speech and gaze on turn taking, we aimed to answer our main questions: How well can eye gaze help in predicting turn taking? What is the role of eye gaze when the speaker holds the turn? Is the role of eye gaze as important in three-party dialogues as in two-party ones? We used Support Vector Machines (SVMs) to classify turn-taking events with respect to speech and gaze features, so as to estimate how well these features signal a change of speaker or a continuation by the same speaker. The results confirm the earlier hypothesis that eye gaze significantly helps in predicting the partner's turn-taking activity, and they also support our hypothesis that the speaker is a prominent coordinator of the interaction space. Such a turn-taking model could be used in interactive applications to improve a system's conversational performance.
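The classification setup described above can be sketched with a small, hypothetical example: an SVM trained on combined speech and gaze features to predict whether an utterance boundary leads to a speaker change or a continuation. The feature names (`mutual_gaze`, `pause_duration`, `speaker_gaze_at_listener`) and the toy data are illustrative assumptions, not the authors' actual corpus or feature set.

```python
# Minimal sketch of SVM-based turn-taking classification, assuming
# hand-picked gaze/speech features; labels: 1 = speaker change, 0 = hold.
import numpy as np
from sklearn.svm import SVC

# Each row: [mutual_gaze (0/1), pause_duration_s, speaker_gaze_at_listener (0/1)]
X = np.array([
    [1, 0.80, 1],   # mutual gaze, long pause  -> turn change
    [1, 0.60, 1],
    [1, 0.90, 0],
    [0, 0.10, 0],   # no mutual gaze, short pause -> same speaker continues
    [0, 0.20, 1],
    [0, 0.05, 0],
])
y = np.array([1, 1, 1, 0, 0, 0])

clf = SVC(kernel="linear")
clf.fit(X, y)

# Classify a new utterance boundary: mutual gaze with a long pause.
prediction = clf.predict([[1, 0.70, 1]])[0]
print(prediction)
```

In a real system the features would be extracted from time-aligned eye-tracker and speech-signal data at each candidate turn boundary, and the classifier's output could drive the dialogue manager's decision to take or yield the turn.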