The human ability to express and recognize emotions plays an important role in face-to-face communication, and as technology advances it will be increasingly important for computer-generated avatars to be similarly expressive. In this paper, we present the detailed development process for the Lifelike Responsive Avatar Framework (LRAF) and a prototype application that models a specific individual in order to analyze the effectiveness of expressive avatars. In particular, the goals of our pilot study (n = 1,744) are to determine whether the avatar being developed can convey emotional states (Ekman's six classic emotions) through facial features, and whether a realistic avatar is an appropriate vehicle for conveying the emotional states that accompany spoken information. The results show that happiness and sadness are correctly identified with a high degree of accuracy, while the other four emotional states show mixed results.
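The per-emotion recognition accuracy described above can be computed from viewer responses as the fraction of trials in which the identified emotion matches the one the avatar displayed. The sketch below is illustrative only; the function name and the sample data are hypothetical and not drawn from the study.

```python
from collections import Counter

# Ekman's six classic emotions, as evaluated in the pilot study.
EMOTIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def recognition_rates(responses):
    """Per-emotion recognition accuracy from (shown, identified) response pairs."""
    shown = Counter(s for s, _ in responses)
    correct = Counter(s for s, picked in responses if s == picked)
    return {e: correct[e] / shown[e] for e in EMOTIONS if shown[e]}

# Hypothetical responses for illustration (not the study's data).
sample = [
    ("happiness", "happiness"), ("happiness", "happiness"),
    ("sadness", "sadness"), ("sadness", "sadness"),
    ("fear", "surprise"), ("fear", "fear"),
]
print(recognition_rates(sample))  # {'happiness': 1.0, 'sadness': 1.0, 'fear': 0.5}
```

Confusions between related expressions (e.g. fear identified as surprise) are exactly what such a tally exposes, which is why per-emotion rates are more informative here than a single overall accuracy.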