Gaussian kernels with flexible variances provide a rich family of Mercer kernels for learning algorithms. We show that the union of the unit balls of the reproducing kernel Hilbert spaces generated by Gaussian kernels with flexible variances is a uniform Glivenko-Cantelli (uGC) class. This result confirms a conjecture concerning the learnability of Gaussian kernels and verifies the uniform convergence of many learning algorithms involving Gaussians with changing variances. Rademacher averages and empirical covering numbers are used to estimate the sample error of multi-kernel regularization schemes associated with general loss functions. It is then shown that the regularization error associated with the least squares loss and Gaussian kernels can be greatly improved when flexible variances are allowed. Finally, for regularization schemes generated by Gaussian kernels with flexible variances, we present explicit learning rates for regression with the least squares loss and classification with the hinge loss.
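As a concrete illustration of such a scheme (a sketch, not code from the paper), the following minimal Python example jointly minimizes the regularized least squares objective over the kernel expansion coefficients and over a finite grid of candidate Gaussian variances. The grid of variances, the regularization parameter, and the helper names gaussian_gram and flexible_variance_rls are illustrative assumptions; the paper's analysis lets the variance range over a continuum rather than a grid.

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gram matrix of the Gaussian kernel K_sigma(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def flexible_variance_rls(X, y, sigmas, lam):
    """Regularized least squares over Gaussian kernels with flexible variances.

    For each candidate sigma, solve
        min_{f in H_sigma} (1/m) sum_i (f(x_i) - y_i)^2 + lam ||f||_sigma^2,
    whose representer-theorem solution is f = sum_j c_j K_sigma(x_j, .) with
        (K + lam * m * I) c = y,
    then keep the sigma attaining the smallest regularized empirical objective,
    mimicking the joint minimization over f and sigma.
    """
    m = X.shape[0]
    best = None
    for sigma in sigmas:
        K = gaussian_gram(X, sigma)
        c = np.linalg.solve(K + lam * m * np.eye(m), y)
        fX = K @ c                                      # in-sample predictions
        obj = np.mean((fX - y) ** 2) + lam * (c @ K @ c)  # ||f||_sigma^2 = c^T K c
        if best is None or obj < best[0]:
            best = (obj, sigma, c)
    return best[1], best[2]

if __name__ == "__main__":
    # Toy regression data (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(50, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(50)
    sigma, c = flexible_variance_rls(X, y, sigmas=np.logspace(-2, 1, 20), lam=1e-3)
    print("selected variance parameter:", sigma)
```

The finite grid is purely a computational stand-in: the theoretical results concern the infimum of the regularized objective as the variance varies continuously, which is exactly what makes the uGC property of the union of unit balls the relevant capacity question.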