Multiple kernel learning improved by MMD

  • Authors: Jiangtao Ren; Zhou Liang; Shaofeng Hu
  • Affiliations: School of Software, Sun Yat-sen University, China (Ren, Liang); Department of Computer Science, Sun Yat-sen University, China (Hu)
  • Venue: ADMA '10: Proceedings of the 6th International Conference on Advanced Data Mining and Applications, Volume Part II
  • Year: 2010

Abstract

When training and testing data are drawn from different distributions, the performance of a classification model degrades. Such distribution mismatch typically arises in sample selection bias and transfer learning scenarios. In this paper, we propose a novel multiple kernel learning framework improved by Maximum Mean Discrepancy (MMD) to address this problem. The new model not only exploits the capacity of kernel learning to construct a nonlinear hyperplane that maximizes the separation margin, but simultaneously reduces the distribution discrepancy between training and testing data, as measured by MMD. The approach is formulated as a bi-objective optimization problem, which is solved by an efficient algorithm based on gradient descent and quadratic programming [13]. Extensive experiments on UCI and text datasets show that the proposed model outperforms the traditional multiple kernel learning model in sample selection bias and transfer learning scenarios.
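
To make the MMD criterion mentioned in the abstract concrete, the following NumPy sketch computes the biased empirical estimate of squared MMD between a training and a testing sample under a fixed RBF kernel. This is not the paper's code: the names rbf_kernel and mmd_squared and the bandwidth gamma are illustrative assumptions, and the paper's actual model evaluates MMD on a learned combination of base kernels and minimizes it jointly with the margin objective, which this sketch does not implement.

    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian (RBF) kernel matrix between the rows of A and B."""
        sq_dists = (
            np.sum(A ** 2, axis=1)[:, None]
            + np.sum(B ** 2, axis=1)[None, :]
            - 2.0 * A @ B.T
        )
        return np.exp(-gamma * sq_dists)

    def mmd_squared(X_train, X_test, gamma=1.0):
        """Biased empirical estimate of the squared MMD:
        mean(k(x, x')) + mean(k(y, y')) - 2 * mean(k(x, y)),
        where x, x' are training points and y, y' are testing points.
        """
        k_tt = rbf_kernel(X_train, X_train, gamma)
        k_ss = rbf_kernel(X_test, X_test, gamma)
        k_ts = rbf_kernel(X_train, X_test, gamma)
        return k_tt.mean() + k_ss.mean() - 2.0 * k_ts.mean()

    # Illustrative usage: a mean-shifted test sample yields a larger MMD
    # than an identically distributed one.
    rng = np.random.default_rng(0)
    X_train = rng.normal(0.0, 1.0, size=(100, 5))
    X_same = rng.normal(0.0, 1.0, size=(80, 5))
    X_shift = rng.normal(0.8, 1.0, size=(80, 5))
    print(mmd_squared(X_train, X_same, gamma=0.5))   # close to 0
    print(mmd_squared(X_train, X_shift, gamma=0.5))  # noticeably larger

In the bi-objective formulation described above, a quantity of this form would appear as the distribution-discrepancy term to be minimized alongside the SVM-style margin objective over the kernel combination weights.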