Machine recognition of human emotional state is an important component of efficient human-computer interaction. Most existing work addresses this problem using audio signals alone or visual information alone. In this paper, we explore a systematic approach to recognizing human emotional state from audiovisual signals. The audio characteristics of emotional speech are represented by extracted prosodic, Mel-frequency cepstral coefficient (MFCC), and formant-frequency features. A face detection scheme based on the HSV color model separates the face from the background, and the visual information is represented by Gabor wavelet features. Feature selection is performed with a stepwise method based on the Mahalanobis distance, and the selected audiovisual features are used to classify the data into their corresponding emotions. Based on a comparative study of different classification algorithms and the specific characteristics of individual emotions, a novel multiclassifier scheme is proposed to boost recognition performance. The feasibility of the proposed system is tested on a database that incorporates human subjects from different language and cultural backgrounds. Experimental results demonstrate the effectiveness of the proposed system: the multiclassifier scheme achieves the best overall recognition rate of 82.14%.
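The stepwise, Mahalanobis-distance-based feature selection mentioned in the abstract could be sketched as greedy forward selection: at each step, add the feature whose inclusion most increases the Mahalanobis separation between two emotion classes. This is a minimal two-class illustration under assumed details (function names, pooled-covariance estimate, and greedy stopping rule are our own), not the authors' exact procedure:

```python
import numpy as np

def mahalanobis_separation(X_a, X_b):
    """Mahalanobis distance (squared) between the class means of two
    samples, using the pooled covariance of the currently chosen features."""
    mu_a, mu_b = X_a.mean(axis=0), X_b.mean(axis=0)
    n_a, n_b = len(X_a), len(X_b)
    pooled = ((n_a - 1) * np.cov(X_a, rowvar=False)
              + (n_b - 1) * np.cov(X_b, rowvar=False)) / (n_a + n_b - 2)
    diff = mu_a - mu_b
    # atleast_2d handles the single-feature case, where np.cov is scalar
    return float(diff @ np.linalg.solve(np.atleast_2d(pooled), diff))

def stepwise_select(X_a, X_b, k):
    """Greedy forward selection: repeatedly add the feature index that
    most increases the Mahalanobis separation, up to k features."""
    selected, remaining = [], list(range(X_a.shape[1]))
    while remaining and len(selected) < k:
        best = max(remaining,
                   key=lambda f: mahalanobis_separation(X_a[:, selected + [f]],
                                                        X_b[:, selected + [f]]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic check: feature 0 separates the classes, features 1-2 are noise.
rng = np.random.default_rng(0)
X_a = rng.normal(0.0, 1.0, (50, 3)); X_a[:, 0] += 5.0
X_b = rng.normal(0.0, 1.0, (50, 3))
print(stepwise_select(X_a, X_b, 2))  # the discriminative feature 0 is picked first
```

In the paper's setting the candidate pool would be the concatenated prosodic, MFCC, formant, and Gabor wavelet features, and the criterion would be evaluated over all emotion classes rather than a single pair.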