Participation in natural, real-time dialogue calls for behaviors supported by perception-action cycles operating from roughly 100 msec and up. Generating certain kinds of such behaviors, namely envelope feedback, has been possible since the early 1990s; real-time backchannel feedback tied to the content of a dialogue has proven harder to achieve. In this paper we describe our progress in enabling virtual humans to give rapid, within-utterance, content-specific feedback in real-time dialogue. We present results from human-subject studies of content feedback, showing that feedback to a particular phrase or word in human-human dialogue comes 560-2500 msec after the phrase's onset, about 1 second on average. We also describe a system that produces such feedback with an autonomous agent in limited topic domains, present performance data for this agent in human-agent interaction experiments, and discuss technical challenges in light of the observed human-subject data.
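As a rough illustration (not the authors' implementation), the reported human timing window can be encoded as a simple scheduling rule: once recognition of a trigger phrase completes, the agent aims for the human average delay, clamped to the observed window, and skips feedback entirely if recognition finished too late. The function name and interface here are hypothetical.

```python
# Observed human content-feedback timing from the paper's human-subject data:
# feedback arrives 560-2500 msec after phrase onset, ~1 second on average.
FEEDBACK_MIN_MS = 560
FEEDBACK_MAX_MS = 2500
FEEDBACK_MEAN_MS = 1000

def schedule_feedback(phrase_onset_ms, recognition_done_ms):
    """Pick a delivery time (ms) for a content backchannel.

    Hypothetical helper: returns the absolute time at which the agent
    should emit feedback, or None if recognition finished too late to
    fall inside the human-like window.
    """
    earliest = phrase_onset_ms + FEEDBACK_MIN_MS
    latest = phrase_onset_ms + FEEDBACK_MAX_MS
    target = phrase_onset_ms + FEEDBACK_MEAN_MS
    if recognition_done_ms > latest:
        # Too slow: better to skip than to give ill-timed feedback.
        return None
    # Aim for the human average, but never before recognition completes
    # or before the earliest human-observed feedback onset.
    return max(recognition_done_ms, earliest, min(target, latest))
```

For example, fast recognition (300 msec after onset) would be held until the ~1-second average, while recognition arriving at 1800 msec would trigger feedback immediately, still inside the window.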