Recent research in biometrics has suggested the existence of a "Biometric Menagerie," in which weak users contribute disproportionately to the error rates (false accept rate, FAR, and false reject rate, FRR) of a biometric system. The aim of this work is to exploit this observation to design a multibiometric system in which information is consolidated on a user-specific basis. To facilitate this, the users in a database are partitioned into multiple categories, and only users belonging to weak categories are required to provide additional biometric information. The contribution of this work lies in (a) the design of a selective fusion scheme in which fusion is invoked only for a subset of users, and (b) the evaluation of such a scheme on two public datasets. Experiments on the multi-unit CASIA V3 iris database and the multi-unit WVU fingerprint database indicate that selective fusion, as defined in this work, improves overall matching accuracy while potentially reducing overall computation time. This has positive implications for large-scale systems, where throughput can be substantially increased without compromising verification accuracy.
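The selective fusion idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the matcher functions, the weak-user set, the sum-rule fusion, and the threshold value are all assumptions chosen for clarity.

```python
# Hypothetical sketch of selective fusion: additional biometric evidence is
# requested and fused ONLY for users in a "weak" menagerie category, so
# strong users incur a single match operation.

WEAK_USERS = {"u007", "u042"}  # assumed: user IDs flagged as weak offline


def match_primary(user_id, sample):
    # Placeholder primary matcher returning a similarity score in [0, 1];
    # a real system would compare `sample` against the enrolled template.
    return 0.6


def match_secondary(user_id, sample):
    # Placeholder secondary matcher (e.g., second iris or finger),
    # invoked only when the user belongs to a weak category.
    return 0.9


def verify(user_id, sample, threshold=0.7):
    score = match_primary(user_id, sample)
    if user_id in WEAK_USERS:
        # Simple sum-rule fusion of the two normalized scores; one common
        # choice among many possible score-level fusion rules.
        score = 0.5 * (score + match_secondary(user_id, sample))
    return score >= threshold
```

Because the secondary matcher runs only for the weak subset, the average number of match operations per verification stays close to one, which is the source of the throughput gain claimed above.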