Giving synthetic agents human-like realtime turn-taking skills is a challenging task. Attempts have been made to construct such skills manually, with systematic categorization of silences, prosody, and other candidate turn-giving signals, and to use corpus analysis to produce static decision trees for this purpose. However, because general-purpose turn-taking skills vary between individuals and cultures, a system that can learn them on the job would be best. We are exploring ways to use machine learning to have an agent learn proper turn-taking during interaction. We have implemented a talking agent that continuously adjusts its turn-taking behavior to its interlocutors based on realtime analysis of the other party's prosody. Initial results from experiments on collaborative, content-free dialogue show that, for a given subset of turn-taking conditions, our modular reinforcement learning techniques allow the system to learn to take turns in an efficient, human-like manner.
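The abstract does not specify the learning architecture, but the general idea of modular reinforcement learning over prosodic cues can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: two independent Q-learning modules (one keyed on discretized silence duration, one on discretized pitch slope) each learn their own value table, and the agent acts greedily on the summed Q-values. All state labels, the toy reward scheme, and the training loop are invented for illustration.

```python
import random
from collections import defaultdict

# Hypothetical sketch: modular Q-learning for turn-taking decisions.
# Each module sees only one discretized prosodic feature; action values
# are combined across modules by summation (one simple modular-RL scheme).

ACTIONS = ("WAIT", "TAKE_TURN")

class QModule:
    """Q-learning over one feature, with 1/n step sizes (sample averaging)."""
    def __init__(self, gamma=0.9):
        self.q = defaultdict(float)   # (feature_value, action) -> estimate
        self.n = defaultdict(int)     # visit counts for step-size decay
        self.gamma = gamma

    def update(self, s, a, r, s_next):
        self.n[(s, a)] += 1
        alpha = 1.0 / self.n[(s, a)]
        best_next = max(self.q[(s_next, a2)] for a2 in ACTIONS)
        self.q[(s, a)] += alpha * (r + self.gamma * best_next - self.q[(s, a)])

class TurnTakingAgent:
    """Combines per-feature modules; epsilon-greedy over summed Q-values."""
    def __init__(self, epsilon=0.3):
        self.modules = {"silence": QModule(), "prosody": QModule()}
        self.epsilon = epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        def total(a):
            return sum(m.q[(state[name], a)] for name, m in self.modules.items())
        return max(ACTIONS, key=total)

    def learn(self, state, action, reward, next_state):
        for name, m in self.modules.items():
            m.update(state[name], action, reward, next_state[name])

# Toy training loop: a turn-end is signalled by long silence + falling pitch.
# Interrupting is penalized less than missing a turn-end (invented rewards).
random.seed(7)
agent = TurnTakingAgent()
STATES = [{"silence": s, "prosody": p}
          for s in ("short", "long") for p in ("rising", "falling")]
for _ in range(5000):
    s = random.choice(STATES)
    a = agent.act(s)
    turn_end = (s["silence"] == "long" and s["prosody"] == "falling")
    if a == "TAKE_TURN":
        r = 1.0 if turn_end else -0.5   # took the floor / interrupted
    else:
        r = -1.0 if turn_end else 0.5   # awkward silence / polite waiting
    agent.learn(s, a, r, random.choice(STATES))

agent.epsilon = 0.0  # act greedily after training
```

After training, the agent takes the turn on a long silence with falling pitch and waits on a short silence with rising pitch; the modular decomposition keeps each table tiny, which is one reason such schemes can adapt online during interaction.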