Nonlinear component analysis as a kernel eigenvalue problem
Neural Computation
Principles of Neurocomputing for Science and Engineering
An Expectation-Maximization Approach to Nonlinear Component Analysis
Neural Computation
Nonlinear Component Analysis for Large-Scale Data Set Using Fixed-Point Algorithm
Proceedings of the 6th International Symposium on Neural Networks (ISNN 2009): Advances in Neural Networks, Part III
Matrix-based kernel principal component analysis for large-scale data set
Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN'09)
Edge detection in the feature space
Image and Vision Computing
Kernel principal component analysis for large scale data set
Proceedings of the 2006 International Conference on Intelligent Computing (ICIC'06), Part I
A fast feature extraction method for kernel 2DPCA
Proceedings of the 2006 International Conference on Intelligent Computing (ICIC'06), Part I
An improved kernel principal component analysis for large-scale data set
Proceedings of the 7th International Conference on Advances in Neural Networks (ISNN'10), Part II
Extension of a Kernel-Based Classifier for Discriminative Spoken Keyword Spotting
Neural Processing Letters
Kernel principal component analysis (KPCA), introduced by Schölkopf et al., is a nonlinear generalization of the popular principal component analysis (PCA) via the kernel trick. KPCA has been shown to be a powerful approach to extracting nonlinear features for classification and regression applications. However, the standard KPCA algorithm (Schölkopf et al., 1998, Neural Computation 10, 1299--1319) may suffer from computational problems on large-scale data sets, since it requires storing and eigendecomposing the full n x n kernel matrix. To overcome this drawback, we propose an efficient training algorithm in this paper, and show that it is considerably more computationally efficient than previous KPCA algorithms.
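To make the scaling problem concrete, the following is a minimal NumPy sketch of the standard KPCA algorithm with an RBF kernel; the function name kpca and the bandwidth parameter gamma are illustrative choices, not taken from the paper. The eigendecomposition of the n x n kernel matrix is the step whose cost motivates the proposed training algorithm.

```python
import numpy as np

def kpca(X, n_components=2, gamma=1.0):
    """Standard KPCA with an RBF kernel (Scholkopf et al., 1998); illustrative sketch."""
    n = X.shape[0]
    # RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    K = np.exp(-gamma * sq_dists)
    # Center the kernel matrix in feature space: K' = K - 1n K - K 1n + 1n K 1n.
    one_n = np.ones((n, n)) / n
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecompose the n x n matrix: O(n^3) time and O(n^2) memory --
    # the bottleneck that makes standard KPCA impractical for large n.
    eigvals, eigvecs = np.linalg.eigh(K_centered)
    # eigh returns eigenvalues in ascending order; keep the top components.
    idx = np.argsort(eigvals)[::-1][:n_components]
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
    # Scale eigenvectors so the expansion coefficients satisfy
    # lambda_k * (alpha_k . alpha_k) = 1.
    alphas = eigvecs / np.sqrt(eigvals)
    # Nonlinear principal components of the training data.
    return K_centered @ alphas

# Usage: project 500 points in R^10 onto the top 2 nonlinear components.
X = np.random.randn(500, 10)
Z = kpca(X, n_components=2, gamma=0.1)
```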