Method to evaluate pose variability in automatic face recognition performance
International Journal of Biometrics
Face recognition technologies have improved dramatically in performance over the past decade, and such systems are now widely used in security and commercial applications. Since humans are understood to be very good at recognizing faces, it is natural to compare automatic face recognition (AFR) and human face recognition (HFR) in terms of biometric performance. This paper addresses this question by: 1) conducting verification tests on volunteers (HFR) and on commercial AFR systems and 2) developing statistical methods to support comparison of the performance of different biometric systems. HFR was tested by presenting face-image pairs and asking subjects to classify them on a scale of "Same," "Probably Same," "Not sure," "Probably Different," and "Different"; the same image pairs were presented to the AFR systems, and the biometric match score was recorded. To evaluate these results, two new statistical evaluation techniques are developed. The first is a new way to normalize match-score distributions, in which a normalized match score is calculated as a function of the angle of the [false match rate, false nonmatch rate] point, expressed in polar coordinates about a chosen center. Using this normalization, we develop a second methodology to calculate an average detection error tradeoff (DET) curve and show that this method is equivalent to directly averaging the DET data along each angle from the center. This procedure is then applied to compare the performance of the best AFR algorithms available to us in 1999, 2001, 2003, 2005, and 2006 against human scores. Results show that the algorithms improved dramatically over that period. Compared with the best AFR system of 2006, 29.2% of human subjects performed better, while 37.5% performed worse.
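The radial DET-averaging idea described above can be sketched in a few lines of NumPy. This is only an illustration of the general technique, not the authors' implementation: each DET curve's [FMR, FNMR] points are converted to polar coordinates about a center, the radius of every curve is interpolated on a common angular grid, the radii are averaged at each angle, and the mean radii are converted back to an averaged DET curve. The choice of center (the origin here) and the angular grid are assumptions for the sake of the example.

```python
import numpy as np

def det_to_polar(fmr, fnmr, center=(0.0, 0.0)):
    """Convert DET points (FMR, FNMR) to polar coordinates about `center`.

    Returns (angles, radii), one entry per DET point."""
    dx = np.asarray(fmr, dtype=float) - center[0]
    dy = np.asarray(fnmr, dtype=float) - center[1]
    return np.arctan2(dy, dx), np.hypot(dx, dy)

def average_det(curves, n_angles=91, center=(0.0, 0.0)):
    """Radially average several DET curves.

    `curves` is a list of (fmr, fnmr) array pairs.  At each angle on a
    common grid, the radius of every curve (distance from `center`) is
    interpolated and averaged; the mean radii are then mapped back to
    (FMR, FNMR) coordinates, giving the averaged DET curve.
    """
    # Angular grid between ~1 and ~89 degrees, where DET curves live
    # relative to an origin-centered representation (an assumption here).
    angles = np.linspace(np.deg2rad(1.0), np.deg2rad(89.0), n_angles)
    radii = []
    for fmr, fnmr in curves:
        theta, r = det_to_polar(fmr, fnmr, center)
        order = np.argsort(theta)          # np.interp needs increasing x
        radii.append(np.interp(angles, theta[order], r[order]))
    mean_r = np.mean(radii, axis=0)
    return (center[0] + mean_r * np.cos(angles),
            center[1] + mean_r * np.sin(angles))
```

As a sanity check, averaging two concentric quarter-circle "curves" of radius 1 and radius 3 should yield a curve of radius 2 at every angle, which is exactly what the radial average produces.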