More efficiency in multiple kernel learning

  • Authors:
  • Alain Rakotomamonjy; Francis Bach; Stéphane Canu; Yves Grandvalet

  • Affiliations:
  • Université de Rouen, Saint Etienne du Rouvray, France; Ecole des Mines de Paris, Fontainebleau, France; INSA de Rouen, Saint Etienne du Rouvray, France; IDIAP, Martigny, Switzerland

  • Venue:
  • Proceedings of the 24th International Conference on Machine Learning
  • Year:
  • 2007

Abstract

An efficient and general multiple kernel learning (MKL) algorithm was recently proposed by Sonnenburg et al. (2006). This approach has opened new perspectives, since it makes MKL tractable for large-scale problems by iteratively reusing existing support vector machine code. However, this iterative algorithm needs several iterations before converging to a reasonable solution. In this paper, we address the MKL problem through an adaptive 2-norm regularization formulation. Weights on each kernel matrix are included in the standard SVM empirical risk minimization problem with an l1 constraint that encourages sparsity. We propose an algorithm for solving this problem and provide new insight into MKL algorithms based on block 1-norm regularization by showing that the two approaches are equivalent. Experimental results show that the resulting algorithm converges rapidly and that its efficiency compares favorably to other MKL algorithms.
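
For concreteness, the adaptive 2-norm formulation the abstract refers to can be reconstructed as follows (our reading of it, not a quotation from the paper): each kernel K_m induces a function f_m in its RKHS H_m, each norm is penalized in inverse proportion to a weight d_m, and the weights live on the simplex, which is the l1 constraint that encourages sparsity:

    \min_{\{f_m\},\, b,\, \xi,\, d} \; \frac{1}{2} \sum_m \frac{1}{d_m} \|f_m\|_{\mathcal{H}_m}^2 + C \sum_i \xi_i
    \text{s.t.} \; y_i \Big( \sum_m f_m(x_i) + b \Big) \ge 1 - \xi_i, \quad \xi_i \ge 0, \quad \sum_m d_m = 1, \quad d_m \ge 0.

As a rough illustration of how such a problem can be attacked (a minimal sketch of one plausible alternating scheme, not the authors' exact algorithm), one can fix the weights, solve a standard SVM with the combined kernel sum_m d_m K_m, and then take a projected gradient step on the weights using the SVM dual objective. The sketch below assumes scikit-learn's SVC with a precomputed kernel; mkl_alternate, project_simplex, and the step size lr are illustrative choices, not from the paper.

    import numpy as np
    from sklearn.svm import SVC

    def mkl_alternate(kernels, y, C=1.0, n_iter=20, lr=0.1):
        # kernels: (M, n, n) stack of precomputed kernel matrices; y: labels in {-1, +1}
        M, n, _ = kernels.shape
        d = np.full(M, 1.0 / M)                    # kernel weights, start uniform on the simplex
        for _ in range(n_iter):
            K = np.tensordot(d, kernels, axes=1)   # combined kernel: sum_m d_m K_m
            clf = SVC(C=C, kernel="precomputed").fit(K, y)
            beta = np.zeros(n)                     # beta_i = alpha_i * y_i on support vectors
            beta[clf.support_] = clf.dual_coef_.ravel()
            # gradient of the SVM dual value w.r.t. each weight d_m (envelope theorem)
            grad = np.array([-0.5 * beta @ Km @ beta for Km in kernels])
            d = project_simplex(d - lr * grad)     # gradient step, then back onto the simplex
        return d

    def project_simplex(v):
        # Euclidean projection onto {d : d >= 0, sum(d) = 1} (Duchi et al.-style sort method)
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
        theta = (css[rho] - 1.0) / (rho + 1)
        return np.maximum(v - theta, 0.0)

A fixed iteration count keeps the sketch short; a practical implementation would instead stop on a duality-gap or weight-change criterion, which is where the rapid convergence claimed in the abstract would be measured.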