Dialogue control in social interface agents
INTERACT '93 and CHI '93 Conference Companion on Human Factors in Computing Systems
Communicative humanoids: a computational model of psychosocial dialogue skills
A Granular Architecture for Dynamic Realtime Dialogue
IVA '08 Proceedings of the 8th international conference on Intelligent Virtual Agents
Learning Smooth, Human-Like Turntaking in Realtime Dialogue
IVA '08 Proceedings of the 8th international conference on Intelligent Virtual Agents
Predicting Listener Backchannels: A Probabilistic Multimodal Approach
IVA '08 Proceedings of the 8th international conference on Intelligent Virtual Agents
A multiparty multimodal architecture for realtime turntaking
IVA'10 Proceedings of the 10th international conference on Intelligent virtual agents
How turn-taking strategies influence users' impressions of an agent
IVA'10 Proceedings of the 10th international conference on Intelligent virtual agents
Evaluating multimodal human-robot interaction: a case study of an early humanoid prototype
Proceedings of the 7th International Conference on Methods and Techniques in Behavioral Research
Where to sit? the study and implementation of seat selection in public places
IVA'11 Proceedings of the 11th international conference on Intelligent virtual agents
How agents' turn-taking strategies influence impressions and response behaviors
Presence: Teleoperators and Virtual Environments
Avatar and Dialog Turn-Yielding Phenomena
International Journal of Technology and Human Interaction
Proceedings of the 6th workshop on Eye gaze in intelligent human machine interaction: gaze in multimodal interaction
Several challenges remain in the effort to build software capable of conducting realtime dialogue with people. Part of the problem has been a lack of realtime flexibility, especially with regard to turntaking. We have built a system that can adapt its turntaking behavior in natural dialogue, learning to minimize unwanted interruptions and "awkward silences". The system learns this dynamically during the interaction, in fewer than 30 turns, without special training sessions. Here we describe the system and its performance when interacting with people in the role of an interviewer. A prior evaluation of the system comprised 10 interactions with a single artificial agent (a non-learning version of itself); the new data consists of 10 interaction sessions with 10 different humans. Results show performance close to that of a human in natural, polite dialogue, with 20% of turn transitions taking place in under 300 msecs and 60% in under 500 msecs. The system works in real-world settings, achieving robust learning in spite of noisy data. The modularity of the architecture gives it significant potential for extensions beyond the interview scenario described here.
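The abstract does not specify the learning mechanism, but the stated goal (trading off interruptions against awkward silences by adapting during the interaction itself) can be illustrated with a minimal sketch. The class below, including its names, the single silence-threshold parameter, and the fixed-step update rule, is an illustrative assumption, not the paper's actual algorithm:

```python
class TurnTakingAdapter:
    """Hedged sketch of online turn-taking adaptation.

    The agent waits `threshold_ms` of silence before taking the turn.
    After each turn transition it nudges that threshold based on the
    observed outcome: interruptions mean it spoke too soon, awkward
    silences mean it waited too long. All parameters are illustrative.
    """

    def __init__(self, threshold_ms=800.0, step_ms=50.0,
                 min_ms=200.0, max_ms=2000.0):
        self.threshold_ms = threshold_ms  # current silence threshold
        self.step_ms = step_ms            # adaptation step per observation
        self.min_ms = min_ms              # never take the turn faster than this
        self.max_ms = max_ms              # never wait longer than this

    def observe(self, outcome):
        """Update the threshold from one turn-transition outcome.

        outcome: 'interruption'    -> agent overlapped the user; wait longer
                 'awkward_silence' -> agent waited too long; respond sooner
        Returns the updated threshold in milliseconds.
        """
        if outcome == "interruption":
            self.threshold_ms = min(self.max_ms,
                                    self.threshold_ms + self.step_ms)
        elif outcome == "awkward_silence":
            self.threshold_ms = max(self.min_ms,
                                    self.threshold_ms - self.step_ms)
        return self.threshold_ms
```

With steady feedback of this kind, the threshold converges within a few dozen observations, which is consistent in spirit with the abstract's claim of learning in fewer than 30 turns, though the real system operates on richer multimodal evidence than a single timer.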