Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data
ICML '01 Proceedings of the Eighteenth International Conference on Machine Learning
A shallow model of backchannel continuers in spoken dialogue
EACL '03 Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics - Volume 1
Creating Rapport with Virtual Agents
IVA '07 Proceedings of the 7th international conference on Intelligent Virtual Agents
A probabilistic multimodal approach for predicting listener backchannels
Autonomous Agents and Multi-Agent Systems
Modeling embodied feedback with virtual humans
ZiF '06 Proceedings of the 2nd ZiF Research Group International Conference on Embodied Communication in Humans and Machines: Modeling Communication with Robots and Virtual Humans
Parasocial consensus sampling: combining multiple perspectives to learn virtual human behavior
Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1
Integrating backchannel prediction models into embodied conversational agents
IVA'12 Proceedings of the 12th international conference on Intelligent Virtual Agents
Speaker-adaptive multimodal prediction model for listener responses
Proceedings of the 15th ACM International Conference on Multimodal Interaction
Traditionally, listener response prediction models are learned from pre-recorded dyadic interactions. Because of individual differences in behavior, these recordings do not capture the complete ground truth: where the recorded listener did not respond to an opportunity provided by the speaker, another listener might have responded, or vice versa. In this paper, we introduce the concept of parallel listener consensus, in which the listener responses from multiple parallel interactions are combined to better capture differences and similarities between individuals. We show how parallel listener consensus can be used both for learning and for evaluating probabilistic prediction models of listener responses. To improve learning performance, the parallel consensus helps identify better negative samples and reduces outliers among the positive samples. We propose a new error measurement called fConsensus, which exploits the parallel consensus to better define the concepts of exactness (mislabels) and completeness (missed labels) for prediction models. We present a series of experiments using the MultiLis Corpus, in which three listeners were each led to believe they were having a one-on-one conversation with a speaker, while in fact all three were recorded in parallel interacting with the same speaker. We show that using parallel listener consensus can improve learning performance and provide better evaluation criteria for predictive models.
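The sample-selection idea in the abstract can be sketched concretely. The following is a minimal illustration, not the paper's actual method or the MultiLis corpus format: all function names and data are hypothetical, and listener responses are simplified to sets of integer time-frame indices at which a listener produced a backchannel.

```python
# Hypothetical sketch of parallel listener consensus (illustrative names,
# not the MultiLis corpus API). Each listener's responses are a set of
# time-frame indices at which that listener gave a backchannel.

def consensus_counts(listener_responses, num_frames):
    """Count, per frame, how many of the parallel listeners responded."""
    counts = [0] * num_frames
    for responses in listener_responses:
        for frame in responses:
            counts[frame] += 1
    return counts

def split_training_samples(counts, min_agreement=2):
    """Use the consensus to select training samples:
    - positives: frames where at least `min_agreement` listeners responded,
      which reduces single-listener outliers in the positive samples;
    - negatives: frames where no listener responded, a stronger notion of
      'no response opportunity' than one listener's silence alone.
    """
    positives = [f for f, c in enumerate(counts) if c >= min_agreement]
    negatives = [f for f, c in enumerate(counts) if c == 0]
    return positives, negatives

# Three listeners recorded in parallel with the same speaker (toy data):
counts = consensus_counts([{3, 7, 12}, {3, 12}, {7, 12, 18}], num_frames=20)
pos, neg = split_training_samples(counts)
# Frame 18, where only one listener responded, ends up in neither set.
```

Under this toy labeling, frames where listeners disagree (here, frame 18) are simply excluded from training rather than counted as positive or negative, which is one plausible reading of how the consensus reduces outliers.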