Information about person identity is multimodal, yet most person-recognition systems limit themselves to a single modality, such as facial appearance. To exploit the complementary nature of different modes of information and to increase robustness to test-signal degradation, we developed a multiple-expert biometric person identification system that combines information from three experts: audio, visual speech, and face. The system fuses the modalities automatically and without supervision, adapting to the local (transaction-level) performance and output reliability of each of the three experts. The expert weightings are chosen automatically so that a reliability measure of the combined scores is maximized. To test robustness to train/test mismatch, we degraded the audio and visual signals with a broad range of acoustic babble noise levels and JPEG compression levels, respectively. Identification experiments were carried out on a 248-subject subset of the XM2VTS database. The multimodal expert system outperformed each of the single experts in all comparisons. At the severest audio and visual mismatch levels tested, the audio, mouth, face, and tri-expert fusion accuracies were 16.1%, 48%, 75%, and 89.9%, respectively, a relative improvement of 19.9% over the best-performing single expert.
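The fusion scheme described above can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: it assumes a weighted-sum fusion of per-expert identification score vectors, searches the weight simplex per transaction, and uses the margin between the top two combined scores as a stand-in reliability measure (the paper's exact measure is not specified here). The function names `reliability` and `fuse` are invented for illustration.

```python
import numpy as np
from itertools import product

def reliability(scores):
    # Stand-in reliability measure: margin between the best and
    # second-best combined identification scores. A larger margin
    # suggests a more confident (more reliable) decision.
    top2 = np.sort(scores)[-2:]
    return top2[1] - top2[0]

def fuse(expert_scores, step=0.1):
    # Weighted-sum fusion over the gallery of enrolled identities.
    # The three weights (audio, visual speech, face) are chosen per
    # transaction, by grid search on the simplex, to maximize the
    # reliability of the combined score vector.
    best_w, best_r, best_s = None, -np.inf, None
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    for w1, w2 in product(grid, grid):
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:          # stay on the weight simplex
            continue
        combined = (w1 * expert_scores[0]
                    + w2 * expert_scores[1]
                    + max(w3, 0.0) * expert_scores[2])
        r = reliability(combined)
        if r > best_r:
            best_w, best_r, best_s = (w1, w2, max(w3, 0.0)), r, combined
    return best_w, best_s

# Toy example: 5 enrolled identities, 3 experts with random scores.
rng = np.random.default_rng(0)
scores = [rng.random(5) for _ in range(3)]
w, fused = fuse(scores)
identity = int(np.argmax(fused))  # claimed identity after fusion
```

Because the weights are re-optimized for every transaction, a degraded expert (e.g., audio under heavy babble noise) that yields a flat, unreliable score vector is automatically down-weighted, which is consistent with the robustness behavior reported above.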