Face-to-face communication between humans involves emotions, which are often conveyed unconsciously by facial expressions and body gestures. Intelligent human-machine interfaces, for example in cognitive robotics, need to recognize emotions. This paper addresses facial expressions and their neural correlates on the basis of a model of the visual cortex: the multi-scale line and edge coding. The recognition model links the cortical representation with Paul Ekman's Action Units, which are related to the different facial muscles. The model applies a top-down categorization using the trends and magnitudes of displacements of the mouth and eyebrows, based on expected displacements relative to a neutral expression. The happy vs. not-happy categorization yielded a correct recognition rate of 91%, whereas final recognition of the six expressions happy, anger, disgust, fear, sadness and surprise resulted in a rate of 78%.
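The top-down categorization described above can be illustrated with a small sketch. This is not the paper's implementation: the landmark names, thresholds, and decision rules below are illustrative assumptions; the only structure taken from the abstract is the two-stage scheme (happy vs. not-happy first, then the remaining expressions) driven by signed displacements of mouth and eyebrow landmarks relative to a neutral-expression template.

```python
# Hedged sketch of displacement-based, top-down expression categorization.
# Landmark names and thresholds are hypothetical, chosen for illustration only.

def displacements(neutral, observed):
    """Vertical displacement of each landmark (positive = upward,
    since image y-coordinates grow downward)."""
    return {k: neutral[k] - observed[k] for k in neutral}

def categorize(d, t=2.0):
    """Two-stage rule-based categorization from a displacement dict d."""
    # Stage 1: happy vs. not-happy, via raised mouth corners.
    if d["mouth_corner_left"] > t and d["mouth_corner_right"] > t:
        return "happy"
    # Stage 2: distinguish the remaining expressions by eyebrow/mouth trends.
    if d["inner_brow"] > t and d["mouth_open"] > t:
        return "surprise"
    if d["inner_brow"] > t:
        return "fear" if d["outer_brow"] > t else "sadness"
    if d["inner_brow"] < -t:          # lowered brows
        return "anger"
    return "disgust"

# Usage: a smiling face with both mouth corners raised 3 px above neutral.
neutral = {"mouth_corner_left": 0.0, "mouth_corner_right": 0.0,
           "inner_brow": 0.0, "outer_brow": 0.0, "mouth_open": 0.0}
smile = {"mouth_corner_left": -3.0, "mouth_corner_right": -3.0,
         "inner_brow": 0.0, "outer_brow": 0.0, "mouth_open": -1.0}
print(categorize(displacements(neutral, smile)))  # → happy
```

The design choice mirrored here is that a coarse binary split (happy vs. not-happy) is made before the finer six-way decision, which matches the higher reported accuracy for the first stage (91%) than for the full six-expression recognition (78%).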