Like humans, conversational computer systems should not listen silently to their input and only then respond. Instead, they should reinforce the speaker-listener link by attending actively and giving feedback on an utterance while perceiving it. Most existing systems produce direct feedback responses to decisive (e.g., prosodic) cues. We present a framework that conceives of feedback as a more complex system, arising from the interplay of conventionalized responses to eliciting speaker events and the multimodal behavior that signals how the listener's internal states evolve. We describe a model for producing such incremental feedback, based on multi-layered processes for perceiving, understanding, and evaluating input.
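The layered process the abstract describes can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' implementation: the `ListenerState` fields, the `lexicon` check standing in for understanding, and the feedback mapping are all assumptions made for illustration. Input is consumed one increment (word) at a time, each stage updates the listener's internal state, and a multimodal feedback act is selected while the "speaker" is still producing the utterance.

```python
# Toy sketch of incremental listener feedback (hypothetical, for illustration):
# perceiving / understanding / evaluating layers update an internal state per
# input increment, and that state is mapped to a multimodal feedback act.

from dataclasses import dataclass

@dataclass
class ListenerState:
    perceived: bool = False
    understood: bool = False
    accepted: bool = False

def process_increment(word: str, state: ListenerState, lexicon: set) -> ListenerState:
    """Update the layered listener state for one input increment."""
    state.perceived = True                 # perception layer: signal detected
    state.understood = word in lexicon     # shallow stand-in for understanding
    state.accepted = state.understood      # toy evaluation layer
    return state

def select_feedback(state: ListenerState):
    """Map the evolving internal state to a (gesture, vocalization) pair."""
    if not state.perceived:
        return None
    if not state.understood:
        return ("frown", "huh?")           # signal non-understanding
    return ("nod", "mhm")                  # backchannel continuer

lexicon = {"the", "museum", "opens", "at", "ten"}
for word in "the museum oepns at ten".split():
    state = process_increment(word, ListenerState(), lexicon)
    print(word, "->", select_feedback(state))
```

In this sketch the misrecognized token triggers a non-understanding display mid-utterance, while understood increments yield continuers; a full model would, as the abstract notes, also handle conventionalized responses to explicit feedback-eliciting speaker events.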