Biometrics: Personal Identification in Networked Society
Pattern Classification (2nd Edition)
Communications of the ACM - Multimodal interfaces that flex, adapt, and persist
Combining Pattern Classifiers: Methods and Algorithms
Using AUC and Accuracy in Evaluating Learning Algorithms
IEEE Transactions on Knowledge and Data Engineering
Handbook of Multibiometrics (International Series on Biometrics)
Score selection techniques for fingerprint multi-modal biometric authentication
ICIAP'05 Proceedings of the 13th international conference on Image Analysis and Processing
Combining multiple matchers for fingerprint verification: a case study in FVC2004
ICIAP'05 Proceedings of the 13th international conference on Image Analysis and Processing
Dynamic Score Combination: A Supervised and Unsupervised Score Combination Method
MLDM '09 Proceedings of the 6th International Conference on Machine Learning and Data Mining in Pattern Recognition
In the biometric field, different experts are combined to improve system reliability, as in many applications the performance attained by individual experts (i.e., different sensors or processing algorithms) does not meet the required reliability. However, there is no guarantee that combining an arbitrary ensemble of experts yields performance superior to that of the individual experts. Thus, an open problem in multiple biometric systems is the selection of the experts to combine, given a pool of experts for the problem at hand. In this paper we present an extensive experimental evaluation of four combination methods: the Mean rule, the Product rule, the Dynamic Score Selection technique, and a linear combination based on Linear Discriminant Analysis. The performance of each combination has been evaluated in terms of the Area Under the Curve (AUC) and the Equal Error Rate (EER). Four measures have then been used to characterise the performance of the individual experts included in each ensemble: the AUC, the EER, and two measures of class separability, namely d' and an integral separability measure. The experimental results clearly point out that the larger the d' of the individual experts, the higher the performance that can be attained by their combination.
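The quantities named in the abstract can be illustrated with a minimal sketch. The snippet below shows the Mean and Product fusion rules on two matchers' similarity scores, plus the two per-expert measures d' (normalised distance between the genuine and impostor score means) and EER (the operating point where false accept and false reject rates coincide). The score lists are synthetic, illustrative assumptions, not data from the paper, and the EER estimate uses a simple threshold sweep rather than the paper's evaluation protocol.

```python
# Sketch of score-level fusion and per-expert measures (d', EER).
# The synthetic scores below are illustrative assumptions only.
import statistics


def d_prime(genuine, impostor):
    """d' = |mu_g - mu_i| / sqrt((var_g + var_i) / 2)."""
    mg, mi = statistics.mean(genuine), statistics.mean(impostor)
    vg, vi = statistics.variance(genuine), statistics.variance(impostor)
    return abs(mg - mi) / ((vg + vi) / 2) ** 0.5


def eer(genuine, impostor):
    """Estimate the Equal Error Rate with a threshold sweep:
    return (FAR + FRR) / 2 at the threshold where |FAR - FRR| is smallest."""
    best_diff, best = float("inf"), 1.0
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_diff:
            best_diff, best = abs(far - frr), (far + frr) / 2
    return best


def mean_rule(s1, s2):
    """Mean rule: average the two matchers' scores sample by sample."""
    return [(a + b) / 2 for a, b in zip(s1, s2)]


def product_rule(s1, s2):
    """Product rule: multiply the two matchers' scores sample by sample."""
    return [a * b for a, b in zip(s1, s2)]


# Synthetic genuine/impostor scores for one matcher (assumed values).
genuine = [0.7, 0.8, 0.9]
impostor = [0.1, 0.2, 0.3]
print("d' =", d_prime(genuine, impostor))
print("EER =", eer(genuine, impostor))
print("mean rule:", mean_rule([0.8, 0.2], [0.6, 0.4]))
print("product rule:", product_rule([0.8, 0.2], [0.6, 0.4]))
```

A larger d' means the genuine and impostor score distributions overlap less, which is exactly the property the paper's results link to better fused performance.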