Virtual human research has often modeled nonverbal behaviors based on the findings of psychological research. In recent years, however, there have been growing efforts to use automated, data-driven approaches to find patterns of nonverbal behavior in video corpora, and even to discover factors that have not been previously documented. Yet few studies have compared how people interpret the behaviors generated by these different approaches. In this paper, we present an evaluation study comparing the perception of nonverbal behaviors, specifically head nods, generated by different approaches. Prior studies have shown that head nods serve a variety of communicative functions and that the head is in constant motion during speaking turns. To evaluate the different approaches to head nod generation, we asked human subjects to rate videos of a virtual agent displaying nods generated by a human, by a machine learning data-driven approach, or by a hand-crafted rule-based approach. Results show that the generation approach has a significant effect on the perceived appropriateness of nod occurrence, particularly between the data-driven and the rule-based approach.