Kernel selection is a central issue in both the study and the application of kernel methods, and is usually carried out by minimizing an estimate of the generalization error or some related performance measure. It is well known that a kernel matrix can be viewed as an empirical version of a continuous integral operator, and that its eigenvalues converge to those of that operator. In this paper, we introduce new kernel selection criteria based on the eigenvalue perturbation of the integral operator, which quantifies the difference between the eigenvalues of the kernel matrix and those of the integral operator. We establish a connection between this eigenvalue perturbation and the generalization error, and derive our kernel selection criteria by minimizing the resulting generalization error bounds; a kernel chosen by the proposed criteria therefore comes with a guarantee of good generalization performance. To compute the values of the criteria, we present a method for obtaining the eigenvalues of the integral operator via the Fourier transform. Experiments on benchmark datasets demonstrate that the proposed kernel selection criteria are sound and effective.
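To make the comparison of spectra concrete, the sketch below is a minimal, hypothetical illustration of the idea, not the authors' implementation. It assumes a Gaussian (RBF) kernel and one-dimensional inputs drawn from a Gaussian density; under those assumptions the integral-operator eigenvalues have a classical closed form (Zhu et al. 1998; Rasmussen and Williams, GPML Sec. 4.3.1), which we use here in place of the paper's Fourier-transform computation. The score function is a simplified stand-in for the paper's bound-based criteria: it just sums the gaps between the empirical and operator spectra.

```python
import numpy as np

def empirical_spectrum(X, lengthscale, top=10):
    """Top eigenvalues of the (1/n)-scaled Gaussian kernel matrix.

    The 1/n scaling makes the matrix an empirical version of the
    integral operator, so its spectrum is comparable to the operator's.
    """
    n = len(X)
    D2 = (X[:, None] - X[None, :]) ** 2            # pairwise squared distances (1-D inputs)
    K = np.exp(-D2 / (2.0 * lengthscale ** 2))
    lam = np.linalg.eigvalsh(K / n)[::-1]          # descending order
    return lam[:top]

def operator_spectrum(lengthscale, input_std, top=10):
    """Closed-form eigenvalues of the Gaussian-kernel integral operator
    when 1-D inputs follow N(0, input_std**2).

    Classical result (Zhu et al. 1998; GPML Sec. 4.3.1):
      a = 1/(4 s^2), b = 1/(2 l^2), c = sqrt(a^2 + 2ab),
      A = a + b + c, B = b/A, lambda_k = sqrt(2a/A) * B**k.
    """
    a = 1.0 / (4.0 * input_std ** 2)
    b = 1.0 / (2.0 * lengthscale ** 2)
    c = np.sqrt(a ** 2 + 2.0 * a * b)
    A = a + b + c
    B = b / A
    k = np.arange(top)
    return np.sqrt(2.0 * a / A) * B ** k

def perturbation_score(X, lengthscale, input_std, top=10):
    """Sum of absolute gaps between empirical and operator eigenvalues;
    a simplified stand-in for the paper's perturbation-based criteria."""
    emp = empirical_spectrum(X, lengthscale, top)
    op = operator_spectrum(lengthscale, input_std, top)
    return np.abs(emp - op).sum()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=200)                 # inputs ~ N(0, 1)
for ls in (0.25, 0.5, 1.0, 2.0):
    print(f"lengthscale={ls:4.2f}  score={perturbation_score(X, ls, 1.0):.4f}")
```

In this toy setting one would pick the lengthscale with the smallest score. The paper's actual criteria go further: they fold the eigenvalue perturbation into derived generalization error bounds and use the Fourier transform to obtain the operator eigenvalues for more general kernels and input densities.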