In recent years there has been growing interest in designing principled classification algorithms over multiple cues, based on the intuitive notion that using more features should lead to better performance. In the domain of kernel methods, a principled way to use multiple features is the Multi Kernel Learning (MKL) approach. Here we present an MKL optimization algorithm based on stochastic gradient descent with a guaranteed convergence rate. We solve the MKL problem directly in its primal formulation. By adopting a p-norm formulation of MKL, we introduce a parameter that controls the sparsity of the solution while yielding an easier optimization problem. We show theoretically and experimentally that 1) our algorithm converges faster as the number of kernels grows; 2) the training complexity is linear in the number of training examples; 3) very few iterations are sufficient to reach good solutions. Experiments on standard benchmark databases support these claims.
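To make the approach concrete, the following is a minimal sketch of primal p-norm MKL trained by stochastic subgradient descent, assuming explicit feature maps (one per "kernel") rather than kernel matrices. All names (pnorm_mkl_sgd, lam, eta0) and the toy data are illustrative assumptions, not taken from the paper; this is not the authors' exact algorithm, only a standard instance of the technique the abstract describes.

    import numpy as np

    def pnorm_mkl_sgd(feature_groups, y, p=1.5, lam=0.01,
                      n_epochs=20, eta0=1.0, seed=0):
        """Train w = (w_1, ..., w_M) minimizing
            lam/2 * (sum_m ||w_m||_2^p)^(2/p) + mean_i hinge(y_i, sum_m <w_m, x_im>)
        by stochastic subgradient descent. p in (1, 2] controls the
        sparsity over the M feature groups (p -> 1 gives sparser solutions)."""
        rng = np.random.default_rng(seed)
        n = y.shape[0]
        ws = [np.zeros(X.shape[1]) for X in feature_groups]
        t = 0
        for _ in range(n_epochs):
            for i in rng.permutation(n):
                t += 1
                eta = eta0 / np.sqrt(t)  # standard 1/sqrt(t) step size
                margin = y[i] * sum(X[i] @ w for X, w in zip(feature_groups, ws))
                # Subgradient of the (2,p) group-norm regularizer:
                # d/dw_m [lam/2 * N^2] = lam * N^(2-p) * ||w_m||^(p-2) * w_m,
                # with N = (sum_m ||w_m||^p)^(1/p); zero groups get zero grad.
                norms = np.array([np.linalg.norm(w) for w in ws])
                N = (norms ** p).sum() ** (1.0 / p) if norms.any() else 0.0
                for m, (X, w) in enumerate(zip(feature_groups, ws)):
                    g = np.zeros_like(w)
                    if N > 0 and norms[m] > 0:
                        g += lam * N ** (2 - p) * norms[m] ** (p - 2) * w
                    if margin < 1:  # hinge loss is active for this example
                        g -= y[i] * X[i]
                    ws[m] = w - eta * g
        return ws

    # Toy usage: two feature groups ("kernels") on a separable problem.
    rng = np.random.default_rng(1)
    X1 = rng.normal(size=(200, 5))
    X2 = rng.normal(size=(200, 3))
    y = np.sign(X1[:, 0] + 0.5 * X2[:, 1] + 1e-9)
    ws = pnorm_mkl_sgd([X1, X2], y)
    print([np.round(np.linalg.norm(w), 3) for w in ws])  # per-kernel weight norms

Each stochastic update costs time linear in the dimension of the active features and is independent of the number of training examples, which is consistent with the linear training complexity and fast per-iteration progress claimed above.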