Introduction to statistical pattern recognition (2nd ed.)
A fast fixed-point algorithm for independent component analysis
Neural Computation
Nonlinear component analysis as a kernel eigenvalue problem
Neural Computation
Kernel PCA and de-noising in feature spaces
Proceedings of the 1998 conference on Advances in neural information processing systems II
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Sparse Greedy Matrix Approximation for Machine Learning
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Kernel Methods for Pattern Analysis
Iterative Kernel Principal Component Analysis for Image Modeling
IEEE Transactions on Pattern Analysis and Machine Intelligence
An Improved Algorithm for Kernel Principal Component Analysis
Neural Processing Letters
An Expectation-Maximization Approach to Nonlinear Component Analysis
Neural Computation
Fast principal component analysis using fixed-point algorithm
Pattern Recognition Letters
Fast Iterative Kernel Principal Component Analysis
The Journal of Machine Learning Research
A feature selection method using fixed-point algorithm for DNA microarray gene expression data
International Journal of Knowledge-based and Intelligent Engineering Systems
Nonlinear component analysis is a popular feature extraction method. It generally extracts the principal components by eigen-decomposition of the kernel matrix, which is infeasible for large-scale data sets because of the storage and computational cost. To overcome these drawbacks, an efficient iterative method for computing kernel principal components based on a fixed-point algorithm is proposed. The kernel principal components are computed iteratively, without eigen-decomposition, reducing the space and time complexity of the proposed method to O(m) and O(m^2), respectively, where m is the number of samples. More importantly, the method remains applicable to extremely large-scale data sets, where traditional eigen-decomposition cannot be used at all. Experimental results validate the effectiveness of the proposed method.
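The core idea can be sketched as a power-style fixed-point iteration on the coefficient vector of one kernel principal component: repeatedly multiply by the centered kernel matrix and renormalize, while computing kernel rows on the fly so that no m-by-m matrix is ever stored. This is a minimal illustration of that general scheme, not the authors' exact update rule; the Gaussian kernel, the parameter names (gamma, n_iter, tol), and the convergence test are assumptions for the sketch.

```python
import numpy as np

def rbf_kernel_row(X, i, gamma):
    # One row of the Gaussian kernel matrix, computed on demand (O(m) space).
    d = ((X - X[i]) ** 2).sum(axis=1)
    return np.exp(-gamma * d)

def iterative_kpca_component(X, gamma=0.5, n_iter=1000, tol=1e-12, seed=0):
    """Leading kernel principal component via fixed-point (power) iteration.

    Only O(m) vectors are kept in memory; each sweep costs O(m^2) kernel
    evaluations, matching the complexities quoted in the abstract.
    Returns the coefficient vector alpha and the associated eigenvalue.
    """
    m = X.shape[0]
    rng = np.random.default_rng(seed)
    alpha = rng.standard_normal(m)
    alpha /= np.linalg.norm(alpha)
    lam = 0.0
    for _ in range(n_iter):
        # Accumulate K @ alpha and the kernel row means without storing K.
        Ka = np.empty(m)
        r = np.empty(m)
        for i in range(m):
            k = rbf_kernel_row(X, i, gamma)
            Ka[i] = k @ alpha
            r[i] = k.mean()
        # Centered product: Kc @ alpha = Ka - mean(Ka) - s*r + s*mean(r),
        # where s = sum(alpha); this equals (H K H) @ alpha for H = I - 11^T/m.
        s = alpha.sum()
        y = Ka - Ka.mean() - s * r + s * r.mean()
        new_lam = np.linalg.norm(y)
        alpha = y / new_lam
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return alpha, lam
```

At a fixed point, Kc @ alpha = lam * alpha, so alpha is (up to sign) the leading eigenvector that batch eigen-decomposition of the centered kernel matrix would return; subsequent components could be obtained the same way after deflation.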