The kernel function plays a central role in kernel methods. Most existing methods can only adapt the kernel parameters or the kernel matrix based on empirical data. Recently, Ong et al. introduced the method of hyperkernels, which can be used to learn the kernel function directly in an inductive setting. However, the associated optimization problem is a semidefinite program (SDP), which is computationally very expensive even with recent advances in interior point methods. In this paper, we show that this learning problem can be equivalently reformulated as a second-order cone program (SOCP), which can be solved more efficiently than an SDP. We also compare with the kernel matrix learning method proposed by Lanckriet et al. Experimental results on both classification and regression problems, with toy and real-world data sets, show that our SOCP formulation achieves a significant speedup over the original SDP formulation. Moreover, it yields better generalization than Lanckriet et al.'s method, at a speed comparable to, and sometimes even faster than, their quadratically constrained quadratic program (QCQP) formulation.
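The abstract's efficiency claim rests on the fact that second-order cone constraints form a special case of semidefinite constraints that solvers can exploit. The paper's actual SOCP reformulation is not reproduced here; as an illustrative sketch only (not the authors' formulation), the following numpy snippet checks the standard arrow-matrix embedding by which an SOC constraint ||u||_2 <= t can be written as a positive-semidefiniteness constraint, which is what an SDP solver would have to handle in place of the much cheaper norm test:

```python
import numpy as np

def in_second_order_cone(u, t):
    # Direct SOC membership test: ||u||_2 <= t (cheap: one norm evaluation).
    return np.linalg.norm(u) <= t

def soc_as_sdp(u, t):
    # Arrow-matrix embedding: (u, t) lies in the second-order cone iff
    #   [[t, u^T],
    #    [u, t*I]]  is positive semidefinite.
    # An SDP solver must certify this via an eigenvalue-type condition,
    # which is what makes the SDP route more expensive.
    n = len(u)
    M = np.zeros((n + 1, n + 1))
    M[0, 0] = t
    M[0, 1:] = u
    M[1:, 0] = u
    M[1:, 1:] = t * np.eye(n)
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

u = np.array([3.0, 4.0])          # ||u||_2 = 5
print(in_second_order_cone(u, 5.0), soc_as_sdp(u, 5.0))  # True True
print(in_second_order_cone(u, 4.0), soc_as_sdp(u, 4.0))  # False False
```

The two tests agree on every input (the arrow matrix has eigenvalues t ± ||u||_2 and t), but the norm test is O(n) while the eigenvalue certificate is O(n^3), a toy analogue of why an SOCP formulation beats an equivalent SDP.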