Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation.
The Geometry of Algorithms with Orthogonality Constraints. SIAM Journal on Matrix Analysis and Applications.
Detecting Faces in Images: A Survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Kernel Eigenfaces vs. Kernel Fisherfaces: Face Recognition Using Kernel Methods. Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FGR '02).
A modified algorithm for generalized discriminant analysis. Neural Computation.
The Journal of Machine Learning Research.
Generalized Discriminant Analysis Using a Kernel Approach. Neural Computation.
When Fisher meets Fukunaga-Koontz: A New Look at Linear Discriminants. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), Volume 1.
Application of the Karhunen-Loève Expansion to Feature Selection and Ordering. IEEE Transactions on Computers.
Discriminant Subspace Analysis: A Fukunaga-Koontz Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Generalizing discriminant analysis using the generalized singular value decomposition. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Optimizing the kernel in the empirical feature space. IEEE Transactions on Neural Networks.
The Fukunaga-Koontz Transform (FKT) is a well-known feature extraction method in statistical pattern recognition that seeks a set of vectors with the best representative power for one class and the poorest representative power for the other class. Li and Savvides [1] proposed a one-against-all strategy for multi-class problems, in which the two-class FKT method is applied directly to find the representative vectors of each class. Motivated by FKT, in this paper we propose a new discriminant subspace analysis (DSA) method for multi-class feature extraction. To solve DSA, we develop an iterative algorithm for the resulting joint diagonalization (JD) problem. Finally, we extend the linear DSA method to nonlinear feature extraction via the kernel trick. To demonstrate its effectiveness on pattern recognition problems, we conduct extensive experiments on real data sets and show that the proposed method outperforms the most commonly used feature extraction methods.
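The two-class FKT idea mentioned above can be sketched numerically: after whitening the summed class scatters, both classes share the same eigenvectors, and an eigenvalue near 1 marks a direction that represents class 1 well and class 2 poorly (and vice versa for an eigenvalue near 0). The toy data, variable names, and scatter estimates below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical two-class toy data: class 1 varies mainly along x, class 2 along y
X1 = rng.normal(size=(200, 2)) * np.array([3.0, 0.5])
X2 = rng.normal(size=(200, 2)) * np.array([0.5, 3.0])

S1 = X1.T @ X1 / len(X1)  # class-1 scatter matrix
S2 = X2.T @ X2 / len(X2)  # class-2 scatter matrix

# Whitening transform P for the summed scatter: P (S1 + S2) P = I
evals, evecs = np.linalg.eigh(S1 + S2)
P = evecs @ np.diag(evals ** -0.5) @ evecs.T

# Eigen-decompose the whitened class-1 scatter; eigenvalues lie in [0, 1].
# Since P S2 P = I - P S1 P, the class-2 eigenvalues are exactly 1 - lam,
# with the same eigenvectors -- the defining property of FKT.
lam, V = np.linalg.eigh(P @ S1 @ P)
W = P @ V  # FKT projection directions in the original space
```

For this toy data, the largest eigenvalue corresponds to the x-direction (good for class 1, poor for class 2) and the smallest to the y-direction, illustrating how the same basis ranks directions oppositely for the two classes.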