It is well known that early integration (also called data fusion) is effective when the modalities are correlated, whereas late integration (also called decision or opinion fusion) is optimal when the modalities are uncorrelated. In this paper, we propose a new multimodal fusion strategy for open-set speaker identification that combines early and late integration following canonical correlation analysis (CCA) of speech and lip texture features. We also propose a method for high-precision synchronization of the speech and lip features using CCA prior to the proposed fusion. Experimental results show that i) the proposed fusion strategy yields the best equal error rates (EER), which are used to quantify the performance of the fusion strategy for open-set speaker identification, and ii) precise synchronization prior to fusion improves the EER; hence, the best EER is obtained when the proposed synchronization scheme is employed together with the proposed fusion strategy. We note that the proposed fusion strategy outperforms the others because the features used in the late integration are truly uncorrelated, being outputs of the CCA.
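The abstract does not include an implementation, but the CCA step it describes can be sketched in a few lines. The following is a minimal, self-contained NumPy illustration (not the authors' code): it assumes the speech and lip texture features are given as frame-by-feature matrices `X` and `Y`, computes the canonical projections via a standard whitening-plus-SVD formulation, and returns the projected features along with the canonical correlations. The projected pairs could then feed an early-fusion score, while the residual (uncorrelated) components would be suited to late integration.

```python
import numpy as np

def cca(X, Y, k, reg=1e-6):
    """Project X (n x dx) and Y (n x dy) onto their top-k canonical pairs.

    Returns (Xp, Yp, r): projected features (n x k each) and the
    canonical correlations r (length k). `reg` regularizes the
    covariance inverses for numerical stability.
    """
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)

    # Within- and cross-modality covariance estimates.
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n

    def inv_sqrt(C):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(w ** -0.5) @ V.T

    # Canonical directions come from the SVD of the whitened
    # cross-covariance; singular values are the canonical correlations.
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    A = inv_sqrt(Cxx) @ U[:, :k]      # projection matrix for X
    B = inv_sqrt(Cyy) @ Vt.T[:, :k]   # projection matrix for Y
    return Xc @ A, Yc @ B, s[:k]

# Toy example: two modalities sharing one latent source.
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))                       # shared latent signal
X = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 2))])          # "speech" features
Y = np.hstack([z + 0.1 * rng.normal(size=(500, 1)),
               rng.normal(size=(500, 3))])          # "lip" features
Xp, Yp, r = cca(X, Y, k=2)
```

In this toy setup the first canonical correlation is close to 1 (both modalities observe the same latent `z`), while the second is near 0, mirroring the paper's point that CCA separates the correlated subspace (for early fusion) from uncorrelated components (for late fusion). The synchronization step would amount to choosing the relative audio/video lag that maximizes the leading canonical correlations.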