This paper presents novel algorithms and applications for a particular class of mixed-norm-regularized Multiple Kernel Learning (MKL) formulations. The formulations assume that the given kernels are grouped, and employ l_1-norm regularization on the RKHS norms within each group to promote sparsity, together with l_s-norm (s ≥ 2) regularization across groups to promote non-sparse combinations. Different sparsity levels in the kernel combination can be achieved by varying the grouping of the kernels; hence we call these Variable Sparsity Kernel Learning (VSKL) formulations. While previous attempts led to non-convex formulations, here we present a convex formulation that admits efficient Mirror-Descent (MD) based solution techniques. The proposed MD algorithm optimizes over a product of simplices and has a computational complexity of O(m² n_tot log(n_max) / ε²), where m is the number of training data points, n_max is the maximum number of kernels in any group, n_tot is the total number of kernels, and ε is the error in approximating the objective. A detailed proof of convergence of the algorithm is also presented. Experimental results show that the VSKL formulations are well suited for multi-modal learning tasks such as object categorization, and that the MD-based algorithm outperforms state-of-the-art MKL solvers in computational efficiency.
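As a rough illustration of the geometry mentioned above, the Python sketch below applies the standard entropic (exponentiated-gradient) mirror-descent update independently to each block of a product of simplices. It is a minimal sketch of the generic technique only: the function and variable names are hypothetical, the random linear "gradients" are placeholders, and it does not implement the VSKL objective or the authors' algorithm.

```python
import numpy as np

def entropic_md_step(gammas, grads, step):
    """One entropic mirror-descent update on a product of simplices.

    gammas: list of 1-D arrays, each lying on a probability simplex
            (e.g., hypothetical per-group kernel weights).
    grads:  list of (sub)gradient arrays of the objective w.r.t. each block.
    step:   step size for this iteration.
    """
    updated = []
    for g, grad in zip(gammas, grads):
        # Exponentiated-gradient update; subtracting the max keeps exp() stable.
        w = g * np.exp(-step * (grad - grad.max()))
        updated.append(w / w.sum())  # re-normalize back onto the simplex
    return updated

# Toy usage: two groups with 3 and 2 kernels, and made-up linear gradients.
np.random.seed(0)
gammas = [np.full(3, 1 / 3), np.full(2, 1 / 2)]
c = [np.random.rand(3), np.random.rand(2)]  # stand-in gradients, not the VSKL objective
for t in range(1, 101):
    gammas = entropic_md_step(gammas, c, step=0.5 / np.sqrt(t))
print([g.round(3) for g in gammas])
```

With a decaying step size, the weights within each group concentrate on the coordinates with the smallest linear cost, which is the qualitative behavior mirror descent over a simplex is designed to exhibit.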