Computational models that attempt to predict when a virtual human should backchannel are often based on the analysis of recordings of face-to-face conversations between humans. Building a model from such a corpus raises the problem that people differ in how they behave: the data provides examples of the responses of a single person in a particular context, but in the same context another person might not have responded at all. Conversely, the corpus will contain contexts in which the recorded listener did not produce a backchannel response but another person would have. Listeners can differ in the amount, timing, and type of backchannels they provide to the speaker because of individual differences related, for instance, to personality, gender, or culture. To gain more insight into this variation, we have collected data in which we record the behaviors of three listeners interacting with one speaker. All listeners believe they are having a one-on-one conversation with the speaker, while the speaker actually sees only one of them. The context, in this case the speaker's actions, is therefore the same for all three listeners, and each responds to it individually. In this way we have obtained data containing both cases in which different persons show similar behaviors and cases in which they behave differently. With the recordings from this data collection study we can start building our model of backchannel behavior for virtual humans that takes into account similarities and differences between persons.