Virtual agents are autonomous software characters that support social interactions with human users. With better graphical representation of, and control over, a virtual agent's embodiment, communication through nonverbal behavior has become an active research area. Researchers have taken a variety of approaches to authoring virtual agents' behaviors. In this work, we present our machine learning-based approach to modeling nonverbal behavior, in which we explore several learning techniques (HMM, CRF, and LDCRF) to predict speakers' head nods and eyebrow movements. Quantitative measurements show that LDCRF yields the best learning rate for both head nods and eyebrow movements. We also conducted an evaluation study comparing the behaviors generated by the machine learning-based models described in this paper against those of a literature-based model.
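To make the sequence-labeling framing concrete, the sketch below shows the simplest of the three model families mentioned above, an HMM, decoding per-frame head-nod labels from a discretized speech feature via the Viterbi algorithm. This is not the authors' implementation: the two states, the "pitch accent" observation, and every probability are hypothetical values chosen only to illustrate the technique (in the paper, such parameters would be learned from a gesture corpus).

```python
# Minimal sketch of HMM-based head-nod prediction (hypothetical parameters,
# NOT the paper's learned model): two hidden states and one discretized
# observed feature per frame, decoded with Viterbi.

STATES = ("no_nod", "nod")

# Hypothetical, hand-set parameters; in practice these are learned from data.
start = {"no_nod": 0.8, "nod": 0.2}
trans = {
    "no_nod": {"no_nod": 0.9, "nod": 0.1},
    "nod":    {"no_nod": 0.2, "nod": 0.8},
}
# Emission over a discretized prosodic feature (pitch accent present/absent).
emit = {
    "no_nod": {"accent": 0.1, "no_accent": 0.9},
    "nod":    {"accent": 0.8, "no_accent": 0.2},
}

def viterbi(obs):
    """Return the most likely hidden-state sequence for an observation list."""
    # V[t][s]: probability of the best path ending in state s at frame t.
    V = [{s: start[s] * emit[s][obs[0]] for s in STATES}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in STATES:
            prev, p = max(
                ((r, V[t - 1][r] * trans[r][s]) for r in STATES),
                key=lambda x: x[1],
            )
            V[t][s] = p * emit[s][obs[t]]
            back[t][s] = prev
    # Backtrace from the best final state.
    best = max(STATES, key=lambda s: V[-1][s])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

frames = ["no_accent", "accent", "accent", "no_accent"]
print(viterbi(frames))  # → ['no_nod', 'nod', 'nod', 'no_nod']
```

A CRF or LDCRF replaces the HMM's generative transition/emission factorization with discriminatively trained feature weights over the whole observation sequence (LDCRF adding latent sub-states per label), but the decoding step remains a dynamic program of this same shape.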