Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion
International Journal of Computer Vision
Efficient Region Tracking With Parametric Models of Geometry and Illumination
IEEE Transactions on Pattern Analysis and Machine Intelligence
Automatic Analysis of Facial Expressions: The State of the Art
IEEE Transactions on Pattern Analysis and Machine Intelligence
Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG '00)
The production and recognition of emotions in speech: features and algorithms
International Journal of Human-Computer Studies - Application of affective computing in human-computer interaction
Information fusion in biometrics
Pattern Recognition Letters - Special issue: Audio- and video-based biometric person authentication (AVBPA 2001)
Optimal multimodal fusion for multimedia data analysis
Proceedings of the 12th annual ACM international conference on Multimedia
Toward Integrating Feature Selection Algorithms for Classification and Clustering
IEEE Transactions on Knowledge and Data Engineering
The eNTERFACE'05 Audio-Visual Emotion Database
Proceedings of the 22nd International Conference on Data Engineering Workshops (ICDEW '06)
2005 Special Issue: Emotion recognition in human-computer interaction
Neural Networks - Special issue: Emotion and brain
Toward multimodal fusion of affective cues
Proceedings of the 1st ACM international workshop on Human-centered multimedia
How emotion is made and measured
International Journal of Human-Computer Studies
A robust multimodal approach for emotion recognition
Neurocomputing
Proceedings of the 1st international conference on PErvasive Technologies Related to Assistive Environments
A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions
IEEE Transactions on Pattern Analysis and Machine Intelligence
Interrelation Between Speech and Facial Gestures in Emotional Utterances: A Single Subject Study
IEEE Transactions on Audio, Speech, and Language Processing
Audio-visual speech modeling for continuous speech recognition
IEEE Transactions on Multimedia
Event detection in field sports video using audio-visual features and a support vector machine
IEEE Transactions on Circuits and Systems for Video Technology
Segment-based emotion recognition from continuous Mandarin Chinese speech
Computers in Human Behavior
A boosting approach to multiview classification with cooperation
Proceedings of the 2011 European Conference on Machine Learning and Knowledge Discovery in Databases (ECML PKDD '11) - Volume Part II
Clustering Persian viseme using phoneme subspace for developing visual speech application
Multimedia Tools and Applications
Content-Based Multimedia Retrieval Using Feature Correlation Clustering and Fusion
International Journal of Multimedia Data Engineering and Management
Multimedia content is composed of several streams that carry information in audio, video, or textual channels. Classifying and clustering multimedia content requires extracting and combining information from these streams. The streams constituting a multimedia item naturally differ in scale, dynamics, and temporal patterns, and these differences make combining the information sources with classic combination techniques difficult. We propose an asynchronous feature-level fusion approach that creates a unified hybrid feature space out of the individual signal measurements. The target space can be used for clustering or classification of the multimedia content. As a representative application, we use the proposed approach to recognize basic affective states from speech prosody and facial expressions. Experimental results on two audiovisual emotion databases with 42 and 12 subjects show that the proposed system significantly outperforms the unimodal face-based and speech-based systems, as well as synchronous feature-level and decision-level fusion approaches.
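To make the fusion problem concrete, the sketch below shows the simplest way to build a joint feature space from two streams sampled at different rates: interpolate the slower stream onto the faster stream's timeline, normalize each stream so their differing scales do not dominate, and concatenate. This is a hedged illustration of the general idea only, not the paper's asynchronous method; the function and variable names are assumptions for the example.

```python
import numpy as np

def fuse_streams(audio_feats, audio_times, video_feats, video_times):
    """Build a hybrid feature space from two asynchronously sampled streams.

    audio_feats: (Na, Da) array of per-frame audio features at audio_times (Na,)
    video_feats: (Nv, Dv) array of per-frame video features at video_times (Nv,)
    Returns an (Na, Da + Dv) array on the audio timeline.
    """
    # Interpolate each video feature dimension onto the audio timestamps,
    # since the two modalities are rarely sampled at the same rate.
    video_on_audio = np.stack(
        [np.interp(audio_times, video_times, video_feats[:, d])
         for d in range(video_feats.shape[1])],
        axis=1,
    )

    def zscore(x):
        # Per-dimension standardization so scale differences between
        # modalities do not dominate the fused space.
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

    return np.hstack([zscore(audio_feats), zscore(video_on_audio)])

# Example: 100 audio frames (3-dim) and 25 video frames (2-dim) over 1 second.
audio_times = np.linspace(0.0, 1.0, 100)
video_times = np.linspace(0.0, 1.0, 25)
rng = np.random.default_rng(0)
fused = fuse_streams(rng.normal(size=(100, 3)), audio_times,
                     rng.normal(size=(25, 2)), video_times)
# fused.shape == (100, 5): one hybrid vector per audio frame.
```

The resulting space can then be fed to any clusterer or classifier; the paper's contribution is precisely that it avoids the hard frame-level synchronization this simple baseline relies on.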