We derive an upper bound on the local Rademacher complexity of lp-norm multiple kernel learning, which yields a tighter excess risk bound than global approaches. Previous local analyses covered only the case p = 1, whereas ours covers all cases 1 ≤ p ≤ ∞, assuming that the feature mappings associated with the different kernels are uncorrelated. We also prove a lower bound showing that our upper bound is tight, and derive consequences for the excess loss, namely fast convergence rates of order O(n^(−α/(1+α))), where α is the minimum eigenvalue decay rate of the individual kernels.
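As an illustration of the rate formula (the specific values below are worked out from the exponent, not stated in the abstract): the exponent −α/(1+α) interpolates between the usual slow rate and a fast rate depending on how quickly the kernel eigenvalues decay. For α = 1 the bound gives O(n^(−1/2)), matching the rate obtained from global Rademacher complexity bounds; for α = 3 it improves to O(n^(−3/4)); and as α → ∞ it approaches the fast rate O(n^(−1)).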