Facial expression recognition using geometric and appearance features
Proceedings of the 4th International Conference on Internet Multimedia Computing and Service
A novel method that fuses texture and shape information to achieve Facial Action Unit (FAU) recognition from video sequences is proposed. To extract the texture information, a subspace method based on Discriminant Non-negative Matrix Factorization (DNMF) is applied to the difference images of the video sequence, computed between the neutral and the most expressive frame, to extract the desired classification label. The shape information consists of the deformed Candide facial grid (more specifically, the grid node displacements between the neutral and the most expressive facial expression frame) that corresponds to the facial expression depicted in the video sequence. The shape information is then classified using a two-class Support Vector Machine (SVM) system. The fusion of texture and shape information is performed using Median Radial Basis Function (MRBF) Neural Networks (NNs) in order to detect the set of FAUs present. The accuracy achieved on the Cohn-Kanade database is 92.1% when recognizing the 17 FAUs responsible for facial expression development.
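The two-stream pipeline described in the abstract (a non-negative subspace projection of difference images for texture, an SVM on grid node displacements for shape, and a fusion of the two scores) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: it substitutes scikit-learn's plain NMF for the paper's Discriminant NMF, a simple probability average for the MRBF neural network fusion, and random arrays for real difference images and Candide grid displacements.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: 60 video sequences, each reduced to one
# non-negative "difference image" (neutral vs. most expressive frame)
# flattened to 256 dimensions.
diff_images = rng.random((60, 256))
# Synthetic Candide-style shape features: per-sequence node
# displacements (e.g. 104 grid nodes x 2 coordinates = 208 values).
displacements = rng.standard_normal((60, 208))
# Two-class target: a given FAU present (1) or absent (0).
labels = rng.integers(0, 2, 60)

# Texture stream: project difference images onto a non-negative
# subspace (plain NMF here; the paper uses Discriminant NMF).
nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
texture_feats = nmf.fit_transform(diff_images)
texture_clf = SVC(probability=True, random_state=0).fit(texture_feats, labels)

# Shape stream: two-class SVM on the grid node displacements.
shape_clf = SVC(probability=True, random_state=0).fit(displacements, labels)

# Fusion: average the two streams' posterior scores (a stand-in for
# the paper's MRBF neural network), then threshold at 0.5.
p_texture = texture_clf.predict_proba(texture_feats)[:, 1]
p_shape = shape_clf.predict_proba(displacements)[:, 1]
fused = (p_texture + p_shape) / 2.0
predicted = (fused > 0.5).astype(int)
```

For the full task, one such two-class decision would be made per FAU (17 in the paper), yielding the set of active FAUs for each sequence.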