Subspace classifiers are well known in pattern recognition: they represent each pattern class by a linear subspace spanned by class-specific basis vectors obtained through simple matrix operations such as the SVD. Recently, kernel-based subspace methods have been proposed to extend these classifiers by directly applying Kernel Principal Component Analysis (KPCA). The projection variance in kernel space used as the decision criterion in these earlier kernel subspace methods, however, is not a trustworthy measure of class membership, and the methods simply fail in many recognition problems, as we encountered in biometrics research. We address this issue by proposing a learning kernel subspace classifier that reconstructs the data in input space through the kernel subspace projection. Whereas pre-image methods aim at finding an approximate pre-image for each input by minimizing the reconstruction error in kernel space, we emphasize the problem of estimating a kernel subspace as a model for a specific class. Using occluded face recognition as an example, our experimental results demonstrate the effectiveness of the proposed method.
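The classification scheme the abstract describes can be sketched as follows: fit one kernel subspace (KPCA model) per class, map a test sample into each class's kernel subspace, reconstruct it back in input space via an approximate pre-image, and assign the class whose model yields the smallest input-space reconstruction error. This is a minimal illustrative sketch, not the authors' implementation; it uses scikit-learn's `KernelPCA` with `fit_inverse_transform=True` (a learned kernel-ridge pre-image map) as a stand-in for the paper's pre-image estimation, and the class name, kernel choice, and parameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA


class KernelSubspaceClassifier:
    """One KPCA model per class; classify by input-space reconstruction error.

    Hypothetical sketch of a reconstruction-based kernel subspace classifier:
    each class is modeled by its own kernel principal subspace, and a sample
    is assigned to the class whose subspace reconstructs it best.
    """

    def __init__(self, n_components=3, gamma=0.5):
        self.n_components = n_components
        self.gamma = gamma
        self.models = {}

    def fit(self, X, y):
        # Fit an independent KPCA model on the samples of each class.
        for label in np.unique(y):
            kpca = KernelPCA(
                n_components=self.n_components,
                kernel="rbf",
                gamma=self.gamma,
                fit_inverse_transform=True,  # enables approximate pre-images
            )
            kpca.fit(X[y == label])
            self.models[label] = kpca
        return self

    def predict(self, X):
        labels = list(self.models)
        # For each class model: project into the kernel subspace, map back to
        # input space (approximate pre-image), and measure the residual there.
        errors = np.stack(
            [
                np.linalg.norm(X - m.inverse_transform(m.transform(X)), axis=1)
                for m in self.models.values()
            ],
            axis=1,
        )
        return np.array(labels)[np.argmin(errors, axis=1)]
```

A usage sketch on two synthetic clusters: fit the classifier on labeled data, then call `predict` on test points; samples land in the class whose kernel subspace gives the lower input-space residual. The key design choice, following the abstract, is that the decision uses reconstruction error in input space rather than projection variance in kernel space.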