Feature selection on handwriting biometrics: security aspects of artificial forgeries
CMS'12 Proceedings of the 13th IFIP TC 6/TC 11 international conference on Communications and Multimedia Security
Biometric cryptosystems extend the user authentication functionality of conventional biometric systems with the ability to generate stable, robust values (also called biometric hashes) from variable biometric data. This work addresses a biometric hash algorithm applied to handwriting data and investigates its performance in both the user authentication and the hash generation scenario. To improve hash generation performance, several feature selection approaches are proposed. The informed reduction of the feature set not only yields a better ratio of collision to reproduction rates, but also improves the equal error rate in the user authentication scenario. Additionally, the parameterization of the biometric hash algorithm is discussed: it is shown that different quantization parameters, as well as different feature subsets, should be selected to achieve the best performance rates in the two scenarios. For the best-performing semantic, "symbol", the EER is improved from 8.30% to 5.27% and the CRR from 11.20% to 6.32%. Finally, the most useful and the unnecessary features are identified: only 2 features are selected for every semantic in both scenarios, while 10 features are never selected.
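The core idea of the biometric hash algorithm described above is interval quantization: each raw handwriting feature is mapped to a stable integer using per-feature parameters derived at enrollment, and feature selection restricts the hash to a subset of features. The following is a minimal, simplified sketch of that idea; the parameter values and the helper names (`biometric_hash`, `select_features`, `offsets`, `intervals`, `mask`) are hypothetical illustrations, not the paper's actual parameterization.

```python
import numpy as np

def biometric_hash(features, offsets, intervals):
    """Map each raw feature to a stable integer by interval quantization:
    hash_i = floor((feature_i - offset_i) / interval_i)."""
    return np.floor((np.asarray(features, dtype=float) - offsets) / intervals).astype(int)

def select_features(values, mask):
    """Apply a feature-selection mask, keeping only the selected components."""
    return np.asarray(values)[mask]

# Enrollment-derived quantization parameters (hypothetical values).
offsets   = np.array([0.0, 10.0, -5.0, 2.0])
intervals = np.array([1.5,  4.0,  2.0, 0.5])   # tolerance interval per feature
mask      = np.array([True, True, False, True])  # third feature deselected

# Two writing samples of the same user; small intra-user variation.
sample_a = np.array([3.1, 22.0, 7.7, 2.9])
sample_b = np.array([3.4, 22.5, 7.0, 2.8])

hash_a = biometric_hash(select_features(sample_a, mask),
                        select_features(offsets, mask),
                        select_features(intervals, mask))
hash_b = biometric_hash(select_features(sample_b, mask),
                        select_features(offsets, mask),
                        select_features(intervals, mask))
# Both samples quantize to the same hash vector [2, 3, 1],
# illustrating reproduction despite feature variability.
```

Wider intervals make the hash easier to reproduce but raise the collision rate, which is exactly the trade-off the paper's quantization parameters and feature selection aim to balance.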