Human-Computer Interaction is an emerging scientific field concerned with communication between humans and computers, and human emotion recognition is a major element of it. The most expressive way humans display emotions is through facial expressions. Traditionally, emotion recognition has been performed on laboratory-controlled data. While undoubtedly worthwhile at the time, such lab-controlled data poorly represents the environment and conditions faced in real-world situations. With the growing number of video clips available online, it is worthwhile to explore the performance of emotion recognition methods that work 'in the wild'. This work focuses on automatic emotion recognition in wild video samples. We address the problem of human emotion recognition using a combination of video and audio features. Our technique for emotion detection blends optical flow, Gabor filtering, a few other facial features, and audio features. Training and classification are performed using a hybrid Support Vector Machine-Hidden Markov Model (SVM-HMM). A distinctive result of our methodology is that it outperforms the baseline score for certain emotion classes on the wild emotion dataset, with an overall accuracy of 20.51% on the test set.
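The abstract names the components of the feature pipeline (optical flow, Gabor filtering, audio features) without specifying them. As a minimal illustration of the Gabor-filtering step, the sketch below builds a small bank of Gabor kernels at several orientations and pools the response energy over a face image into a fixed-length feature vector. All function names and parameter values here are assumptions for illustration, not the authors' actual implementation.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    # 2-D Gabor kernel: Gaussian envelope modulated by a cosine carrier.
    # size: kernel side length (odd); sigma: envelope width;
    # theta: orientation; lam: carrier wavelength.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lam + psi)
    return envelope * carrier

def convolve_same(img, ker):
    # FFT-based 'same'-size 2-D convolution using only NumPy.
    H, W = img.shape
    kh, kw = ker.shape
    shape = (H + kh - 1, W + kw - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s=shape)
                         * np.fft.rfft2(ker, s=shape), s=shape)
    top, left = kh // 2, kw // 2
    return full[top:top + H, left:left + W]

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    # Filter the (grayscale, float) face image at each orientation and
    # pool mean and standard deviation of the response into one vector.
    feats = []
    for theta in thetas:
        k = gabor_kernel(size=9, sigma=2.0, theta=theta, lam=4.0)
        resp = convolve_same(image, k)
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

In a full pipeline of the kind described, such per-frame vectors would be concatenated with optical-flow and audio descriptors before being passed to the SVM-HMM classifier.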