Transfer Metric Learning with Semi-Supervised Extension

  • Authors:
  • Yu Zhang; Dit-Yan Yeung

  • Affiliations:
  • Hong Kong University of Science and Technology; Hong Kong University of Science and Technology

  • Venue:
  • ACM Transactions on Intelligent Systems and Technology (TIST)
  • Year:
  • 2012

Abstract

Distance metric learning plays a crucial role in many data mining algorithms because their performance relies heavily on the choice of metric. However, labeled data is scarce in many applications, and hence the learned metrics are often unsatisfactory. In this article, we consider a transfer-learning setting in which some related source tasks with labeled data are available to help the learning of the target task. We first propose a convex formulation for multitask metric learning that models the task relationships in the form of a task covariance matrix. We then regard transfer learning as a special case of multitask learning and adapt the multitask formulation to the transfer-learning setting, yielding our method, transfer metric learning (TML). In TML, we learn the metric and the task covariances between the source tasks and the target task under a unified convex formulation. To solve the convex optimization problem, we use an alternating method in which each subproblem has an efficient solution. Moreover, since in many applications some unlabeled data is also available in the target task, we propose a semi-supervised extension of TML, called STML, which further improves the generalization performance by exploiting the unlabeled data based on the manifold assumption. Experimental results on some commonly used transfer-learning applications demonstrate the effectiveness of our methods.
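
The abstract does not reproduce the TML objective, but the alternating scheme it describes can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy version, not the paper's exact formulation: each task learns a Mahalanobis metric M_t under a pairwise squared-hinge loss (an illustrative choice), the metrics are coupled through a task covariance matrix Omega via a trace regularizer, and the two blocks are updated alternately, with a closed-form unit-trace update for Omega.

```python
import numpy as np
from scipy.linalg import sqrtm

def pairwise_loss_grad(M, X, y, pairs):
    """Gradient of a squared-hinge pairwise loss w.r.t. the metric M.

    For a pair (i, j), d_ij = (x_i - x_j)^T M (x_i - x_j); same-label
    pairs are pushed below distance 1, different-label pairs above 1.
    (Illustrative loss, not the paper's exact objective.)
    """
    grad = np.zeros_like(M)
    for i, j in pairs:
        diff = X[i] - X[j]
        d = diff @ M @ diff
        sign = 1.0 if y[i] == y[j] else -1.0
        margin = sign * (d - 1.0)  # positive when the pair violates its margin
        if margin > 0:
            grad += 2.0 * margin * sign * np.outer(diff, diff)
    return grad

def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues."""
    w, V = np.linalg.eigh((M + M.T) / 2.0)
    return (V * np.clip(w, 0.0, None)) @ V.T

def multitask_metric_learning(tasks, d, lam=0.1, lr=0.01, iters=50):
    """Alternate between metric updates and a closed-form covariance update.

    tasks: list of (X, y, pairs) triples, one per task.
    Regularizer: lam * tr(Mtil^T Omega^{-1} Mtil), where row t of Mtil
    is vec(M_t) and Omega is the T x T task covariance (unit trace).
    """
    T = len(tasks)
    Ms = [np.eye(d) for _ in range(T)]
    Omega = np.eye(T) / T
    for _ in range(iters):
        # Step 1: fix Omega; projected gradient step on each task metric.
        Omega_inv = np.linalg.pinv(Omega)
        Mtil = np.stack([M.ravel() for M in Ms])  # T x d^2
        for t, (X, y, pairs) in enumerate(tasks):
            g = pairwise_loss_grad(Ms[t], X, y, pairs)
            g += 2.0 * lam * (Omega_inv[t] @ Mtil).reshape(d, d)
            Ms[t] = project_psd(Ms[t] - lr * g)
        # Step 2: fix the metrics; the trace-constrained minimizer is
        # Omega = (Mtil Mtil^T)^{1/2} / tr((Mtil Mtil^T)^{1/2}).
        Mtil = np.stack([M.ravel() for M in Ms])
        A = np.real(sqrtm(Mtil @ Mtil.T + 1e-8 * np.eye(T)))
        Omega = A / np.trace(A)
    return Ms, Omega
```

A semi-supervised variant along the lines of STML would add a graph-Laplacian smoothness term over the target task's unlabeled points to the loss, per the manifold assumption mentioned in the abstract.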