Embodiment in conversational interfaces: Rea
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
Communicative humanoids: a computational model of psychosocial dialogue skills
Creating Rapport with Virtual Agents
IVA '07 Proceedings of the 7th International Conference on Intelligent Virtual Agents
Searching for Prototypical Facial Feedback Signals
IVA '07 Proceedings of the 7th International Conference on Intelligent Virtual Agents
Greta: an interactive expressive ECA system
Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems - Volume 2
A probabilistic multimodal approach for predicting listener backchannels
Autonomous Agents and Multi-Agent Systems
Modeling embodied feedback with virtual humans
ZiF'06 Proceedings of the Embodied communication in humans and machines, 2nd ZiF research group international conference on Modeling communication with robots and virtual humans
The SEMAINE API: towards a standards-based framework for building emotion-oriented systems
Advances in Human-Computer Interaction - Special issue on emotion-aware natural interaction
Towards conversational agents that attend to and adapt to communicative user feedback
IVA '11 Proceedings of the 10th International Conference on Intelligent Virtual Agents
Towards more comprehensive listening behavior: beyond the bobble head
IVA '11 Proceedings of the 10th International Conference on Intelligent Virtual Agents
Backchannels: quantity, type and timing matters
IVA '11 Proceedings of the 10th International Conference on Intelligent Virtual Agents
Cultural study on speech duration and perception of virtual agent's nodding
IVA '12 Proceedings of the 12th International Conference on Intelligent Virtual Agents
One of the most desirable characteristics of an Embodied Conversational Agent (ECA) is the capability to interact with users in a human-like manner. While listening to a user, an ECA should be able to provide backchannel signals through the visual and acoustic modalities. In this work we propose an improvement of our previous system for generating multimodal backchannel signals in these two modalities. A perceptual study has been performed to understand how users interpret context-free multimodal backchannels.