Three robust feature extraction approaches for facial gender classification

  • Authors:
  • Mohamed Abdou Berbar

  • Affiliations:
  • Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11543, KSA

  • Venue:
  • The Visual Computer: International Journal of Computer Graphics
  • Year:
  • 2014

Abstract

This research paper introduces three robust feature extraction approaches for gender classification. The first approach is based on the Discrete Cosine Transform (DCT) and comprises two different methods for computing feature values. The second approach extracts texture features using the gray-level co-occurrence matrix (GLCM). The third approach is based on the two-dimensional wavelet transform (2D-WT). The extracted feature vectors are classified with a support vector machine (SVM). For precise evaluation, gender classification is tested on images from the AT&T, Faces94, UMIST, and color FERET databases, and k-fold cross-validation is used when training the SVM. When using one of the two proposed DCT methods for feature extraction, the gender classification accuracies are 98.6 %, 99.97 %, 99.90 %, and 93.3 % with 2-fold cross-validation, and 98.93 %, 100 %, 99.9 %, and 92.18 % with 5-fold cross-validation. The accuracies of the GLCM texture feature approach for facial gender classification are 98.8 %, 99.6 %, 100 %, and 93.11 % for the AT&T, Faces94, UMIST, and FERET databases, respectively. With 2D-WT, the accuracies range between 96.18 % and 99.6 % for all databases except FERET, where the accuracy is 92 %.
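
The abstract names the pipeline only at a high level. The sketch below is a minimal illustration of that pipeline, not the author's implementation: it assumes grayscale uint8 face crops and uses scipy, scikit-image, PyWavelets, and scikit-learn. The DCT block size, GLCM distances and angles, Haar wavelet, and RBF kernel are illustrative assumptions, not the paper's settings.

```python
# Illustrative sketch (not the paper's code): three feature extractors
# (DCT, GLCM texture, 2D wavelet) feeding an SVM evaluated with
# k-fold cross-validation. Images are assumed to be grayscale uint8 arrays.
import numpy as np
from scipy.fft import dctn
from skimage.feature import graycomatrix, graycoprops
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def dct_features(img, k=8):
    """Keep the top-left k x k block of 2D DCT coefficients (low frequencies).
    k=8 is an assumed block size, not the paper's choice."""
    coeffs = dctn(img.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()

def glcm_features(img, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Haralick-style statistics from a gray-level co-occurrence matrix.
    Distances/angles are illustrative defaults."""
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def wavelet_features(img, wavelet="haar", level=2):
    """Approximation subband of a 2-level 2D wavelet decomposition.
    The Haar wavelet and level are assumptions."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    return coeffs[0].ravel()  # low-frequency approximation coefficients only

def evaluate(images, labels, extractor, folds=5):
    """Mean accuracy of an RBF-kernel SVM on the extracted features
    under k-fold cross-validation (folds=2 or 5 as in the abstract)."""
    X = np.array([extractor(im) for im in images])
    return cross_val_score(SVC(kernel="rbf"), X, labels, cv=folds).mean()
```

For example, `evaluate(images, labels, glcm_features, folds=5)` would mirror the 5-fold protocol reported in the abstract for the GLCM texture approach.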