In this paper, a general kernelization framework for learning algorithms is proposed via a two-stage procedure: first transforming the data by kernel principal component analysis (KPCA), and then running the learning algorithm directly on the transformed data. Although a few learning algorithms have previously been kernelized by this procedure, why and under what conditions the procedure is feasible had not been studied. We explicitly present this kernelization framework and give a rigorous justification showing that, under some mild conditions, kernelization under this framework is equivalent to the traditional kernel method. We further show that these mild conditions are satisfied by most learning algorithms, so most learning algorithms can be kernelized under this framework without being reformulated into inner-product form, a common yet often difficult step in traditional kernel methods. Motivated by this framework, we also propose a novel kernel method based on low-rank KPCA, which can remove noise in the feature space, speed up the kernel algorithm, and improve its numerical stability. Experiments are presented to verify the validity and effectiveness of the proposed methods.
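The two-stage procedure described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it computes a centered RBF kernel matrix, extracts (optionally low-rank) KPCA coordinates, and then runs an ordinary linear learner on the transformed data. The choice of RBF kernel, the `gamma` and `rank` values, and the nearest-centroid classifier used as the second-stage learner are all assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kpca_transform(X, gamma=1.0, rank=None):
    """Stage 1: map each sample to its (optionally low-rank) KPCA coordinates."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Center the kernel matrix in feature space
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1]               # sort descending
    vals, vecs = vals[idx], vecs[:, idx]
    keep = vals > 1e-10                        # drop numerically zero directions
    vals, vecs = vals[keep], vecs[:, keep]
    if rank is not None:                       # low-rank truncation (low-rank KPCA)
        vals, vecs = vals[:rank], vecs[:, :rank]
    # Row i is the KPCA representation of sample i
    return vecs * np.sqrt(vals)

# Stage 2: run any learning algorithm on the transformed data unchanged.
# Here, a nearest-centroid classifier stands in for the learner.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(4.0, 1.0, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
Z = kpca_transform(X, gamma=0.5, rank=10)
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print("training accuracy:", (pred == y).mean())
```

Note that the second stage never touches the kernel: the learner sees only the matrix `Z`, which is why no inner-product reformulation of the learning algorithm is needed; truncating to `rank` components gives the low-rank variant.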