In this paper, we investigate the problem of gender classification from frontal facial images. Four classifiers are compared: K-means, k-nearest neighbors, linear discriminant analysis, and a Mahalanobis-distance-based classifier. Receiver operating characteristic (ROC) curves, together with the area under the convex hull (AUCH), are used as performance measures for the classifiers at different feature-subset sizes. To summarize the overall performance of a classifier in a single scalar value, we propose a new scheme that computes the area under the convex hull of the AUCH values of the ROC curves (AUCH of AUCHs). We observe that when the number of macro features is increased beyond 5, the AUCH saturates and even decreases for some classifiers, illustrating the curse of dimensionality. We then use genetic programming to combine classifiers, evolving an optimum combined classifier (OCC) that performs better than the individual classifiers. Using only two features, the OCC achieves performance comparable to that of the original classifiers using 20 macro features, producing a true positive rate as high as 0.94 at a false positive rate as low as 0.15 for a 1:3 train-to-test ratio. We also observe that heterogeneous combinations of classifiers are more promising than homogeneous ones.
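The AUCH measure used above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it sweeps a decision threshold over classifier scores to trace the ROC curve, takes the upper convex hull of the ROC points, and integrates under the hull by the trapezoidal rule. Binary labels and real-valued scores (both classes present) are assumed, and all names are illustrative.

```python
def roc_points(labels, scores):
    """Trace (FPR, TPR) points by sweeping the decision threshold
    from the highest score downward. Assumes labels in {0, 1} with
    at least one positive and one negative example."""
    P = sum(labels)
    N = len(labels) - P
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, y in sorted(zip(scores, labels), reverse=True):
        if y == 1:
            tp += 1
        else:
            fp += 1
        pts.append((fp / N, tp / P))
    return pts


def convex_hull_auc(pts):
    """Area under the upper convex hull (AUCH) of a set of ROC points.
    The corners (0,0) and (1,1) are always included in the hull."""
    pts = sorted(set(pts) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # Pop the last hull point while it lies on or below the
        # line from hull[-2] to p (monotone-chain upper hull).
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            if (ax - ox) * (p[1] - oy) >= (p[0] - ox) * (ay - oy):
                hull.pop()
            else:
                break
        hull.append(p)
    # Trapezoidal integration under the hull.
    return sum((x2 - x1) * (y1 + y2) / 2.0
               for (x1, y1), (x2, y2) in zip(hull, hull[1:]))
```

Because the hull dominates the raw ROC curve, AUCH is always at least as large as the plain trapezoidal AUC; the gap indicates how much could be recovered by interpolating between operating points.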