Generating significant results when evaluating biometric algorithms is difficult. Additionally, when third parties, such as system integrators, want to compare results from different algorithm providers, they cannot do so easily for several reasons: the difficulty of accessing large databases, the inability to exchange biometric data among researchers due to data protection laws in some countries, and the lack of comprehensive, standardised evaluation reports. This paper presents a new performance evaluation system for biometric systems that is secure, automatic and remote. The system has been developed using current standards from ISO/IEC JTC1/SC37 for data formats, Application Programming Interfaces (APIs) and evaluation methodology. Standardised technology gives biometric developers and third parties a way to perform comprehensive evaluations remotely, with 24/7 availability, without compromising the privacy of the individuals included in the test crew. The solution described here lets developers run evaluations against large databases stored on a secure centralised server. Because the system is modality-independent, researchers can use the same protocol to perform different evaluations, lowering the overhead costs of testing. Additionally, such protocols can be plugged directly into end-user applications, minimising technology transfer costs. The system is described by block diagrams as well as flowcharts.
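To make the evaluation-methodology side concrete, the sketch below shows two of the basic error rates that a standardised performance report of this kind would typically include: the False Match Rate (FMR) over impostor comparisons and the False Non-Match Rate (FNMR) over genuine comparisons at a decision threshold. This is an illustrative sketch only, not code from the paper; the score lists and the threshold are hypothetical example data.

```python
# Illustrative sketch: basic error rates for a biometric evaluation report.
# The score lists and threshold below are hypothetical example values.

def fmr(impostor_scores, threshold):
    """False Match Rate: fraction of impostor comparisons at or above threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def fnmr(genuine_scores, threshold):
    """False Non-Match Rate: fraction of genuine comparisons below threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

# Hypothetical similarity scores (higher means more similar).
genuine = [0.91, 0.85, 0.78, 0.95, 0.60]
impostor = [0.12, 0.35, 0.48, 0.22, 0.70]

t = 0.5
print(f"FMR@{t}: {fmr(impostor, t):.2f}")   # one impostor score >= 0.5 -> 0.20
print(f"FNMR@{t}: {fnmr(genuine, t):.2f}")  # no genuine score < 0.5 -> 0.00
```

Sweeping the threshold over the full score range yields the trade-off curve (e.g. a DET curve) that such evaluation reports commonly present; a centralised server holding the test databases can compute these rates without ever releasing the raw biometric samples.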