We introduce an improved classification algorithm based on the concept of symmetric maximized minimal distance in subspace (SMMS). Given training data of authentic and impostor samples in the feature space, our previous approach, SMMS, identifies a subspace in which all authentic samples project onto the origin and all impostor samples lie far from the origin; the optimal subspace is the one that maximizes the minimal distance between the origin and the impostor samples. The generalized SMMS relaxes the constraint that all authentic samples map exactly to the origin and includes the optimal direction of the linear support vector machine (SVM) as a feasible solution in the optimization procedure, guaranteeing that our result is no worse than the linear SVM. We present a procedure that achieves this optimality and identifies both the subspace and the decision boundary. Once the subspace is trained, verification is simple: we project the test sample onto the subspace and compare it against the decision boundary. Using face authentication as an example, we show that the proposed algorithm outperforms the linear classifiers based on SMMS and SVM. The proposed algorithm also applies to multimodal feature spaces; the features can come from any modality, such as face images, voice, or fingerprints.
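The verification step described above reduces to a projection and a distance comparison. The sketch below illustrates that step only, under assumptions: `W` is a projection matrix presumed already learned by the (generalized) SMMS training procedure, and `tau` is a distance threshold standing in for the decision boundary; the function name `verify` and both parameters are hypothetical, and the optimization that actually produces `W` is the paper's contribution and is not shown here.

```python
import numpy as np

def verify(x, W, tau):
    """Hypothetical SMMS-style verification: project the test sample x
    onto the subspace spanned by the columns of W (assumed pre-trained)
    and accept it as authentic if its distance from the origin in the
    subspace falls below the threshold tau."""
    z = W.T @ x                      # coordinates of x in the subspace
    return bool(np.linalg.norm(z) < tau)

# Toy illustration: a 2-D subspace of a 4-D feature space.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 2))      # placeholder for a trained projection
x_authentic = np.zeros(4)            # an ideal authentic sample maps to the origin
print(verify(x_authentic, W, tau=1.0))  # True
```

In the generalized SMMS, authentic samples need not project exactly onto the origin, so `tau` would be set from the training data rather than assumed; the comparison itself stays this simple.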