Recent research on multiple kernel learning has led to a number of approaches for combining kernels in regularized risk minimization. The proposed approaches differ in how the objective is formulated and in the regularization strategies they employ. In this paper we present a unifying optimization criterion for multiple kernel learning and show how existing formulations are subsumed as special cases. We also derive the criterion's dual representation, which is suitable for general smooth optimization algorithms. Finally, we evaluate multiple kernel learning in this framework both analytically, using a Rademacher complexity bound on the generalization error, and empirically, in a set of experiments.
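To make the setting concrete, here is the generic form such regularized risk criteria typically take; this is a hedged sketch of the standard multiple kernel learning objective, not necessarily the exact unifying criterion of the paper, and the loss \ell, regularization weight \lambda, and weight domain \Theta are placeholder symbols:

\[
\min_{\theta \in \Theta} \; \min_{f \in \mathcal{H}_{k_\theta}} \; \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr) + \lambda \, \|f\|_{\mathcal{H}_{k_\theta}}^{2},
\qquad
k_\theta = \sum_{m=1}^{M} \theta_m k_m, \quad \theta_m \ge 0,
\]

where k_1, \dots, k_M are the base kernels and \mathcal{H}_{k_\theta} is the reproducing kernel Hilbert space induced by the combined kernel k_\theta. Restricting \theta to the probability simplex recovers sparse (\ell_1-norm) multiple kernel learning, while \ell_p-norm constraints with p > 1 yield non-sparse variants; a unifying criterion parameterizes exactly such choices of domain and regularizer.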