We study how synchronized gaze, gesture, and speech rendered by an embodied conversational agent can influence the flow of conversation in multiparty settings. We begin by reviewing a computational framework for turn-taking that provides the foundation for tracking and communicating intentions to hold, release, or take the conversational floor. We then present implementation aspects of this model in an embodied conversational agent. Empirical results with this model in a shared-task setting indicate that the verbal and non-verbal cues produced by the agent can effectively shape multiparty conversational dynamics. In addition, we identify and discuss several context variables that impact the turn-allocation process.
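The floor-control intentions named in the abstract (hold, release, take) can be illustrated with a minimal sketch. This is not the paper's actual model, which also conditions on gaze, gesture, and speech cues from all participants; the function and parameter names here are hypothetical, chosen only to show the basic mapping from floor status and desire to an intention:

```python
from enum import Enum, auto

class FloorAction(Enum):
    """Floor-control intentions mentioned in the abstract."""
    HOLD = auto()     # keep the floor while continuing to speak
    RELEASE = auto()  # yield the floor to another participant
    TAKE = auto()     # claim the floor from the current speaker
    NULL = auto()     # no change: remain a listener

def floor_action(has_floor: bool, wants_floor: bool) -> FloorAction:
    """Map the agent's current floor status and desire to an intention.

    Illustrative only: a real turn-taking model would derive both
    inputs from multimodal observations of the participants.
    """
    if has_floor:
        return FloorAction.HOLD if wants_floor else FloorAction.RELEASE
    return FloorAction.TAKE if wants_floor else FloorAction.NULL
```

In a full system, the chosen intention would then be communicated through the agent's synchronized gaze, gesture, and speech behaviors.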