Many dialogue systems built over the years address some subset of the complex factors that shape the behavior of participants in face-to-face conversation. The Ymir Turntaking Model (YTTM) is a broad computational model of conversational skills that has been in development for over a decade, continuously growing in the number of factors it addresses. In past work we have shown how it handles realtime dialogue, communicative gesture, perception of turntaking signals (e.g. prosody, gaze, manual gesture), dialogue planning, learning of multimodal turn signals, and dynamic adaptation to human speaking style. The YTTM's architectural principles prescribe smaller architectural granularity than most other models and allow non-destructive additive expansion. In this paper we show how the YTTM accommodates multi-party dialogue. The extension has been implemented in a virtual environment; we present data for up to 12 simulated participants engaged in realtime cooperative dialogue. The system includes dynamically adjustable parameters for impatience, willingness to give the turn, and eagerness to speak.
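The three adjustable parameters named above can be illustrated with a minimal sketch. This is not the YTTM implementation; it is a hypothetical toy simulation showing how per-agent impatience, willingness to give the turn, and eagerness to speak might jointly drive who holds the floor in a multi-party exchange. All names (`Agent`, `simulate`, the 0.01 impatience scaling) are illustrative assumptions, not taken from the paper.

```python
import random
from dataclasses import dataclass

@dataclass
class Agent:
    """Hypothetical dialogue participant with the three tunable parameters."""
    name: str
    impatience: float           # how quickly desire to interrupt grows with turn length
    willingness_to_give: float  # probability of yielding when another agent bids
    eagerness: float            # baseline probability of bidding for the turn

def simulate(agents, steps=50, seed=7):
    """Run a toy multi-party turn-taking loop and return the sequence of speakers."""
    rng = random.Random(seed)
    speaker = agents[0]
    turn_len = 0
    turns = [speaker.name]
    for _ in range(steps):
        turn_len += 1
        # Listeners bid for the turn: eagerness plus impatience scaled by turn length.
        bidders = [a for a in agents if a is not speaker
                   and rng.random() < a.eagerness + a.impatience * turn_len * 0.01]
        # The current speaker yields probabilistically, per their willingness parameter.
        if bidders and rng.random() < speaker.willingness_to_give:
            speaker = rng.choice(bidders)
            turn_len = 0
            turns.append(speaker.name)
    return turns

# Example: three agents with identical settings; raising one agent's eagerness
# or impatience would shift the floor distribution toward that agent.
participants = [Agent(f"a{i}", impatience=0.2, willingness_to_give=0.5, eagerness=0.1)
                for i in range(3)]
history = simulate(participants)
```

Because impatience is weighted by turn length, long turns attract more bids over time, which loosely mirrors the dynamic adaptation the abstract describes.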