Natural behavior of a listening agent. Lecture Notes in Computer Science.
The Behavior Markup Language: Recent Developments and Challenges. IVA '07: Proceedings of the 7th International Conference on Intelligent Virtual Agents.
Creating Rapport with Virtual Agents. IVA '07: Proceedings of the 7th International Conference on Intelligent Virtual Agents.
A probabilistic multimodal approach for predicting listener backchannels. Autonomous Agents and Multi-Agent Systems.
Modeling embodied feedback with virtual humans. ZiF '06: Proceedings of Embodied Communication in Humans and Machines, 2nd ZiF Research Group International Conference on Modeling Communication with Robots and Virtual Humans.
Latent mixture of discriminative experts for multimodal prediction modeling. COLING '10: Proceedings of the 23rd International Conference on Computational Linguistics.
Learning and evaluating response prediction models using parallel listener consensus. International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction.
The multiLis corpus - dealing with individual differences in nonverbal listening behavior. Proceedings of the Third COST 2102 International Training School Conference on Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces: Theoretical and Practical Issues.
Backchannels: quantity, type and timing matters. IVA '11: Proceedings of the 10th International Conference on Intelligent Virtual Agents.
In this paper we present our design for generating listening behavior for embodied conversational agents. It uses a corpus-based prediction model to predict the timing of backchannels. The design iterates on a previous one (Huang et al. [5]), to which we propose improvements in robustness and personalization. For robustness, we propose a variable threshold, determined at run-time, that regulates the number of backchannels the system produces. For personalization, we propose a character specification interface in which the typical type of head nods the agent should display can be specified, together with ways to generate slight variations at runtime.
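The run-time variable threshold described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the prediction model emits a backchannel probability per frame, and the class name, target rate, window length, and adjustment step are all hypothetical.

```python
from collections import deque

class AdaptiveBackchannelGate:
    """Illustrative sketch of a run-time variable threshold: the gate raises
    or lowers its threshold so the observed backchannel rate tracks a target.
    All parameter values are assumptions, not taken from the paper."""

    def __init__(self, target_per_minute=6.0, fps=30, window_s=60,
                 step=0.01, init_threshold=0.5):
        self.target = target_per_minute          # desired backchannels/minute
        self.fps = fps                           # prediction frames per second
        self.window = deque(maxlen=fps * window_s)  # recent trigger history
        self.threshold = init_threshold
        self.step = step

    def update(self, probability):
        """Feed one frame's predicted backchannel probability.
        Returns True when a backchannel should be displayed."""
        trigger = probability >= self.threshold
        self.window.append(1 if trigger else 0)
        # observed rate over the sliding window, in backchannels per minute
        minutes = len(self.window) / (self.fps * 60)
        rate = sum(self.window) / minutes if minutes > 0 else 0.0
        # nudge the threshold so the produced rate drifts toward the target
        if rate > self.target:
            self.threshold = min(0.99, self.threshold + self.step)
        elif rate < self.target:
            self.threshold = max(0.01, self.threshold - self.step)
        return trigger
```

Feeding a stream of consistently high probabilities drives the threshold up, so the agent does not nod on every frame; a quiet stream drives it down, keeping the overall backchannel rate near the target.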