The aim of this paper is to learn a linear principal component by exploiting the machinery of support vector machines (SVMs). To this end, a complete SVM-like framework for linear PCA (SVPCA) is constructed to determine the projection direction, in which new notions of expected risk and margin are introduced. Within this framework, a semi-definite programming problem that maximizes the margin is formulated, and a new definition of support vectors is established. As a weighted variant of regular PCA, SVPCA coincides with regular PCA when all samples contribute equally to data compression. A theoretical analysis indicates that SVPCA rests on a margin-based generalization bound, so good prediction ability is ensured. Furthermore, a robust form of SVPCA with an interpretable parameter is obtained by borrowing the soft-margin idea from SVMs. A great advantage is that SVPCA is a learning algorithm free of local minima, owing to the convexity of semi-definite optimization problems. To validate the performance of SVPCA, several experiments are conducted, and the numerical results demonstrate that its generalization ability is better than that of regular PCA. Finally, some remaining problems are also discussed.
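The paper's exact SVPCA objective is not reproduced here; as a minimal sketch of the general idea, the following (assumed) formulation casts the search for a projection direction as a convex semidefinite program over a sample-weighted covariance matrix. Equal weights recover regular PCA, and convexity is what rules out local minima. The weights, toy data, and SDP relaxation below are illustrative assumptions, not the authors' construction.

```python
# Hypothetical sketch: linear PCA as a convex semidefinite program (SDP).
# This is NOT the paper's exact SVPCA formulation; it only illustrates
# replacing the eigenvector search with a convex problem that has no
# local minima, and that equal sample weights reduce to regular PCA.
import numpy as np
import cvxpy as cp  # generic convex-optimization modeling library

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) @ np.diag([3.0, 1.0, 0.3])  # toy samples
w = np.full(len(X), 1.0 / len(X))  # equal weights -> regular PCA

# Weighted covariance C = sum_i w_i x_i x_i^T (samples centered first).
Xc = X - X.mean(axis=0)
C = (Xc * w[:, None]).T @ Xc

# SDP relaxation of  max_{||v||=1} v^T C v :
#   maximize trace(C M)  subject to  trace(M) = 1,  M PSD.
M = cp.Variable(C.shape, PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(C @ M)), [cp.trace(M) == 1])
prob.solve()

# The optimum is attained at a rank-one M; its leading eigenvector
# is the principal direction.
_, eigvecs = np.linalg.eigh(M.value)
v_sdp = eigvecs[:, -1]

# Sanity check: with equal weights this matches the ordinary top
# eigenvector of C, up to sign.
v_pca = np.linalg.eigh(C)[1][:, -1]
print(np.abs(v_sdp @ v_pca))  # ~1.0 (directions agree)
```

Down-weighting individual samples in `w` would yield a weighted PCA of the kind the abstract describes, with the SDP's convexity guaranteeing a global optimum regardless of the weighting.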