Embodied conversational agents should be able to provide feedback on what a human interlocutor is saying. We are compiling a list of facial feedback expressions that signal attention and interest, grounding, and attitude. Because expressions must serve many functions at once and most of the component signals are ambiguous, it is important to better understand the many-to-many mappings between displays and functions. As a probe into this semantic space, we asked people to label several dynamic expressions. We compare simple signals with combined signals to determine whether a combination of signals can carry a meaning of its own, i.e., whether the meaning attached to the combination differs from the meanings of the single signals. Results show that in some cases a combination of signals alters the perceived meaning of the backchannel.
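The many-to-many mapping between displays and functions described above can be sketched as a simple lookup structure. This is only an illustration: the signal names and function labels below are invented for the example and are not taken from the study; the point is that a combination of signals may map to a label that neither component signal carries alone.

```python
# Hypothetical many-to-many mapping between facial displays and
# communicative functions. Signal names and labels are invented for
# illustration; they are not the study's actual label set.
display_functions = {
    ("smile",): {"attitude:positive", "grounding:acknowledge"},
    ("nod",): {"grounding:acknowledge", "attention:continue"},
    # A combined signal with an emergent meaning of its own:
    ("nod", "smile"): {"attitude:agreement"},
}

def functions_of(*signals):
    """Return candidate functions for a signal combination (order-insensitive)."""
    return display_functions.get(tuple(sorted(signals)), set())

combined = functions_of("nod", "smile")
singles = functions_of("smile") | functions_of("nod")
# Labels carried only by the combination, not by either single signal:
print(combined - singles)  # → {'attitude:agreement'}
```

Under this toy encoding, the combination "nod + smile" is perceived as agreement even though neither a nod nor a smile alone carries that label, mirroring the abstract's finding that combining signals can alter the perceived meaning of a backchannel.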