Treating nonlinear principal component analysis as a kernel eigenvalue problem has provided an extremely powerful method of extracting nonlinear features for many classification and regression applications. Although the use of Mercer kernels makes it tractable to compute principal components in possibly infinite-dimensional feature spaces, the attendant numerical problem of diagonalizing large matrices remains. In this contribution, we propose an expectation-maximization approach to performing kernel principal component analysis and show it to be a computationally efficient method, especially when the number of data points is large.
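The idea can be illustrated by kernelizing the well-known EM algorithm for linear PCA: instead of updating a feature-space loading matrix directly, one iterates on an n-by-q matrix of expansion coefficients over the training points, so each iteration costs O(n²q) rather than the O(n³) of a full eigendecomposition of the kernel matrix. The sketch below is a minimal illustration of this scheme, not the paper's exact algorithm; it assumes a precomputed, centered kernel matrix `K` and a hypothetical function name `em_kpca`.

```python
import numpy as np

def em_kpca(K, q, n_iter=300, seed=0):
    """EM-style iteration for kernel PCA (illustrative sketch).

    K : (n, n) centered kernel matrix
    q : number of components to extract

    Returns A, an (n, q) coefficient matrix; the feature-space columns
    Phi @ A converge (in span) to the leading principal subspace.
    """
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, q))          # random initial coefficients
    for _ in range(n_iter):
        # E-step: latent coordinates X = (A^T K A)^{-1} A^T K, shape (q, n)
        X = np.linalg.solve(A.T @ K @ A, A.T @ K)
        # M-step: new coefficients A = X^T (X X^T)^{-1}, shape (n, q)
        A = np.linalg.solve(X @ X.T, X).T
    return A
```

After convergence, the span of the columns of `K @ A` coincides with the span of the leading eigenvectors of `K`, so a small q-by-q eigenproblem suffices to recover orthonormal components; no n-by-n diagonalization is ever performed.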