To design robots or embodied conversational agents that can accurately display facial expressions of emotion, we need both the technology to produce those expressions and research into how humans socially perceive such artificial faces. Our starting point is human perception of core facial information: moving dots that represent the facial landmarks, i.e., the locations and movements of the crucial parts of a face. Earlier research suggested that participants can identify facial expressions relatively accurately when all they can see of a real human face is moving white painted dots marking the facial landmarks, although less accurately than when viewing full faces. In the current study we investigated how accurately participants recognize emotions expressed by comparable facial landmarks, relative to emotions expressed by full faces, but now generated the landmarks with face-tracking software. In line with earlier findings, participants could accurately identify emotions expressed by the facial landmarks, though less accurately than those expressed by full faces. These results provide a starting point for further research on the fundamental characteristics of the technologies (AI methods) that produce facial emotional expressions and on their evaluation by human users.