Multi-task low-rank metric learning based on common subspace

  • Authors:
  • Peipei Yang, Kaizhu Huang, Cheng-Lin Liu

  • Affiliations:
  • National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China (all authors)

  • Venue:
  • ICONIP'11: Proceedings of the 18th International Conference on Neural Information Processing - Volume Part II
  • Year:
  • 2011

Abstract

Multi-task learning, which trains multiple related problems jointly, usually improves performance by exploiting the information shared across the problems. Metric learning, by contrast, although an important research topic, has mostly been studied in the traditional single-task setting. To address this gap, we propose a novel multi-task metric learning framework. Based on the assumption that the discriminative information of all tasks is retained in a low-dimensional common subspace, the framework readily extends many existing metric learning approaches to the multi-task scenario. In particular, we apply it to a popular metric learning method, Large Margin Component Analysis (LMCA), yielding a new model called multi-task LMCA (mtLMCA). Besides learning an appropriate metric, this model optimizes the transformation matrix directly and performs surprisingly well compared to many competitive approaches. One appealing feature of mtLMCA is that it learns a metric of low rank, which proves effective in suppressing noise and hence makes the model more resistant to over-fitting. Experiments on both synthetic and real data demonstrate the superiority of the proposed framework over four comparison algorithms.
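
The abstract only outlines the model, so the following is a minimal Python sketch of the common-subspace idea, not the authors' exact mtLMCA formulation. It assumes each task t uses a linear map L_t = A_t L0, where L0 (an r-by-d matrix) spans a low-dimensional subspace shared by all tasks and A_t is task-specific; both are fit by gradient descent on an LMNN/LMCA-style triplet hinge loss. All function and parameter names (triplet_hinge_grad, fit_mt_metric, margin) are illustrative assumptions, not names from the paper.

```python
# Minimal sketch of multi-task metric learning with a shared low-rank
# subspace. NOT the paper's exact objective; a plausible instance of the
# common-subspace assumption described in the abstract.
import numpy as np

def triplet_hinge_grad(L, X, triplets, margin=1.0):
    """Hinge loss and gradient w.r.t. the full map L for (i, j, k) triplets,
    where j should end up closer to i than k by at least `margin`."""
    loss, grad = 0.0, np.zeros_like(L)
    for i, j, k in triplets:
        dij, dik = X[i] - X[j], X[i] - X[k]
        pij, pik = L @ dij, L @ dik          # projected differences
        viol = margin + pij @ pij - pik @ pik
        if viol > 0:                          # active large-margin constraint
            loss += viol
            # d/dL ||L d||^2 = 2 (L d) d^T
            grad += 2.0 * (np.outer(pij, dij) - np.outer(pik, dik))
    return loss, grad

def fit_mt_metric(tasks, r, n_iters=200, lr=1e-3, seed=0):
    """tasks: list of (X, triplets) pairs, one per task.
    Returns the shared basis L0 (r x d) and per-task maps A_t (r x r)."""
    rng = np.random.default_rng(seed)
    d = tasks[0][0].shape[1]
    L0 = rng.normal(scale=0.1, size=(r, d))   # shared subspace basis
    As = [np.eye(r) for _ in tasks]           # task-specific transforms
    for _ in range(n_iters):
        g_L0 = np.zeros_like(L0)
        for t, (X, trips) in enumerate(tasks):
            L = As[t] @ L0                    # task map L_t = A_t L0
            _, g_L = triplet_hinge_grad(L, X, trips)
            # chain rule: dLoss/dL0 += A_t^T g_L ; dLoss/dA_t = g_L L0^T
            g_L0 += As[t].T @ g_L
            As[t] -= lr * (g_L @ L0.T)
        L0 -= lr * g_L0                       # joint update of shared part
    return L0, As
```

Under this parameterization each task's learned metric is M_t = (A_t L0)^T (A_t L0), whose rank is at most r regardless of the input dimension, which mirrors the low-rank, noise-suppressing property the abstract attributes to mtLMCA.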