Modeling speaker behavior: a comparison of two approaches
IVA'12 Proceedings of the 12th international conference on Intelligent Virtual Agents
During face-to-face conversation, a speaker's head is continually in motion, and these movements serve a variety of important communicative functions. Our goal is to develop a model of speaker head movements that can generate head movements for virtual agents from annotated gesture corpora. In this paper, we focus on the first step of the head-movement generation process: predicting when the speaker should nod. We describe a machine-learning approach that learns a head-nod model from annotated corpora of face-to-face human interaction, relying on linguistic features of the surface text. We also describe in detail the feature selection process, the training process, and the evaluation of the learned model on held-out test data. The results show that the model predicts head nods with high precision and recall.
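The pipeline described above (extract surface-text features per token, train a model on nod-annotated corpora, evaluate with precision and recall) can be sketched minimally as follows. This is not the authors' implementation: the toy corpus, the feature set, and the count-based classifier are all illustrative assumptions standing in for the paper's richer linguistic features and learning method.

```python
# Hedged sketch: per-token head-nod prediction from surface-text features,
# scored with precision and recall. All data and features are illustrative.
from collections import defaultdict

def extract_features(tokens):
    """Toy surface features: lowercased token plus an utterance-initial flag
    (a real system would use richer linguistic cues, e.g. part of speech)."""
    return [(tok.lower(), i == 0) for i, tok in enumerate(tokens)]

def train(corpus):
    """corpus: list of (tokens, nod_labels). Estimates P(nod | feature)."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [nod count, total count]
    for tokens, labels in corpus:
        for feat, label in zip(extract_features(tokens), labels):
            counts[feat][0] += label
            counts[feat][1] += 1
    return {f: nod / total for f, (nod, total) in counts.items()}

def predict(model, tokens, threshold=0.5):
    """Predict a nod (1) wherever the learned nod probability exceeds threshold."""
    return [int(model.get(f, 0.0) > threshold) for f in extract_features(tokens)]

def precision_recall(gold, pred):
    tp = sum(1 for g, p in zip(gold, pred) if g and p)
    fp = sum(1 for g, p in zip(gold, pred) if not g and p)
    fn = sum(1 for g, p in zip(gold, pred) if g and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy annotated corpus: label 1 = the speaker nods on this word.
train_corpus = [
    ("Yes I agree".split(), [1, 0, 1]),
    ("Yes that is right".split(), [1, 0, 0, 1]),
]
model = train(train_corpus)
gold = [1, 0, 1]
pred = predict(model, "Yes I agree".split())
print(precision_recall(gold, pred))
```

In practice the evaluation would of course use held-out test utterances rather than training data, and the feature set would be selected as the abstract describes.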