Neural Networks - 2005 Special issue: IJCNN 2005
Kernel principal component analysis (KPCA) is an extremely powerful approach to extracting nonlinear features via the kernel trick, and it has been suggested for a number of applications. While Mercer kernels supply the nonlinearity, standard KPCA can only process a limited number of training samples: on large-scale data sets it suffers from the computational cost of diagonalizing a large kernel matrix and from the storage that matrix requires. In this paper, by choosing a subset of the training samples with Gram-Schmidt orthonormalization and incomplete Cholesky decomposition, we reformulate KPCA as an eigenvalue problem for a matrix whose size is much smaller than that of the kernel matrix. Theoretical analysis and experimental results on both artificial and real data show the advantages of the proposed method in computational efficiency and storage space, especially when the number of data points is large.
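
The reduction the abstract describes can be made concrete: if a pivoted incomplete Cholesky factorization gives K ~= G G^T with G of size n x m (m << n), the nonzero eigenvalues of the centered kernel matrix coincide with those of the m x m matrix G_c^T G_c, so only a small eigenproblem has to be solved. The sketch below is a minimal illustration of that idea, not the paper's implementation; the RBF kernel, the tolerance and rank parameters, and helper names such as incomplete_cholesky are assumptions introduced for this example.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        """Gaussian RBF kernel matrix between the rows of X and Y."""
        d2 = (np.sum(X**2, axis=1)[:, None]
              + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T)
        return np.exp(-gamma * d2)

    def incomplete_cholesky(X, kernel, tol=1e-6, max_rank=100):
        """Pivoted incomplete Cholesky factorization K ~= G @ G.T.

        Only the kernel columns of the selected pivot samples are ever
        evaluated, so the full n x n kernel matrix is never formed.
        """
        n = X.shape[0]
        G = np.zeros((n, max_rank))
        # Residual diagonal of K; for the RBF kernel this starts at all ones.
        d = np.array([kernel(X[i:i + 1], X[i:i + 1])[0, 0] for i in range(n)])
        pivots = []
        for j in range(max_rank):
            i = int(np.argmax(d))
            if d[i] <= tol:                # residual trace is negligible: stop
                return G[:, :j], pivots
            pivots.append(i)
            k_i = kernel(X, X[i:i + 1]).ravel()        # i-th column of K
            G[:, j] = (k_i - G[:, :j] @ G[i, :j]) / np.sqrt(d[i])
            d -= G[:, j] ** 2                          # update residual diagonal
        return G, pivots

    # Reduced KPCA: with K_c ~= G_c @ G_c.T, the nonzero eigenvalues of the
    # centered kernel matrix are those of the small m x m matrix G_c.T @ G_c.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5))                     # toy data, n = 2000
    G, piv = incomplete_cholesky(X, lambda A, B: rbf_kernel(A, B, gamma=0.5))
    G_c = G - G.mean(axis=0)                           # centering in feature space
    lam, V = np.linalg.eigh(G_c.T @ G_c)               # m x m eigenproblem only
    order = np.argsort(lam)[::-1]
    lam, V = lam[order], V[:, order]
    keep = lam > 1e-10                                 # drop numerically null modes
    lam, V = lam[keep], V[:, keep]
    U = (G_c @ V) / np.sqrt(lam)                       # eigenvectors of K_c
    scores = U * np.sqrt(lam)                          # KPCA projections of X

Under these assumptions, solving the m x m problem replaces the O(n^3) diagonalization and O(n^2) storage of the full kernel matrix with costs governed by m alone, which is the efficiency and storage advantage the abstract claims.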