Recognizing face or object from a single image: linear vs. kernel methods on 2D patterns
Proceedings of the 2006 Joint IAPR International Conference on Structural, Syntactic, and Statistical Pattern Recognition (SSPR/SPR 2006)
Conventionally, an image matrix had to be reshaped into a vector before kernel-based subspace learning could be applied. In this paper, we take the Kernel Discriminant Analysis (KDA) algorithm as an example and perform kernel analysis directly on 2D image matrices. First, each image matrix is decomposed by Singular Value Decomposition into the product of two orthogonal matrices and a diagonal one; then the image matrix is mapped into a higher- or even infinite-dimensional space by applying the kernel trick to the column vectors of the two orthogonal matrices; finally, two coupled discriminative kernel subspaces are learned iteratively for dimensionality reduction by optimizing the Fisher criterion measured by the Frobenius norm. The resulting algorithm, called Coupled Kernel Discriminant Analysis (CKDA), effectively exploits the underlying spatial structure of objects and encodes the discriminative information in the two coupled kernel subspaces. Experiments on real face databases, with KDA and Fisherface as baselines, validate the effectiveness of CKDA.
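To make the pipeline concrete, below is a minimal Python sketch of the three steps the abstract outlines. It is not the authors' implementation: it assumes an RBF kernel, represents each image by its leading left and right singular vectors only, and replaces the paper's iterative coupled optimization with two independent one-pass kernel Fisher steps; the function names (svd_factors, rbf_gram, kernel_fisher_directions) and the toy data are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh

def svd_factors(A):
    # Step 1: decompose an image matrix as A = U diag(s) V^T.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U, s, Vt.T

def rbf_gram(P, Q, gamma=0.5):
    # Step 2 (kernel trick): RBF Gram matrix between two sets of column vectors.
    sq_dists = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_fisher_directions(K, labels, reg=1e-3, n_dirs=1):
    # Step 3 (simplified): one-pass kernel Fisher discriminant on a Gram matrix,
    # solving the generalized eigenproblem of between- vs. within-class scatter.
    n = K.shape[0]
    mu = K.mean(axis=1, keepdims=True)
    Sb = np.zeros((n, n))
    Sw = np.zeros((n, n))
    for c in np.unique(labels):
        Kc = K[:, labels == c]
        mc = Kc.mean(axis=1, keepdims=True)
        Sb += Kc.shape[1] * (mc - mu) @ (mc - mu).T
        Sw += (Kc - mc) @ (Kc - mc).T
    vals, vecs = eigh(Sb, Sw + reg * np.eye(n))  # eigenvalues in ascending order
    return vecs[:, ::-1][:, :n_dirs]             # leading discriminant directions

def lead_vec(M):
    # Leading column of an orthogonal factor, sign-fixed so that vectors are
    # comparable across images (SVD signs are arbitrary).
    v = M[:, 0]
    return v if v.sum() >= 0 else -v

# Toy demo: two classes of 8x8 "images" that differ by a rank-1 offset.
rng = np.random.default_rng(0)
X = np.stack([rng.normal(size=(8, 8)) + c * np.ones((8, 8))
              for c in (0.0, 2.0) for _ in range(10)])
y = np.repeat([0, 1], 10)

U1 = np.array([lead_vec(svd_factors(A)[0]) for A in X])  # left factors
V1 = np.array([lead_vec(svd_factors(A)[2]) for A in X])  # right factors
K_left, K_right = rbf_gram(U1, U1), rbf_gram(V1, V1)
W_left = kernel_fisher_directions(K_left, y)
W_right = kernel_fisher_directions(K_right, y)
Z = np.hstack([K_left @ W_left, K_right @ W_right])      # coupled 2D embedding

Keeping the left and right factors separate is what lets the method operate on matrices directly rather than on flattened vectors; in the paper the two kernel subspaces are optimized jointly and iteratively, whereas the sketch above learns them independently for brevity.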