Low rank metric learning for social image retrieval

  • Authors:
  • Zechao Li;Jing Liu;Yu Jiang;Jinhui Tang;Hanqing Lu

  • Affiliations:
  • Chinese Academy of Sciences, Beijing, China;Chinese Academy of Sciences, Beijing, China;Chinese Academy of Sciences, Beijing, China;Nanjing University of Science and Technology, Nanjing, China;Chinese Academy of Sciences, Beijing, China

  • Venue:
  • Proceedings of the 20th ACM international conference on Multimedia
  • Year:
  • 2012

Abstract

With the popularity of social media applications, large amounts of social images associated with rich contextual information are available, which is helpful for many applications. In this paper, we propose a Low Rank distance Metric Learning (LRML) algorithm that discovers knowledge from these rich contextual data to boost the performance of content-based image retrieval (CBIR). Different from traditional approaches that often rely on must-links and cannot-links between images, the proposed method exploits information from both the visual and textual domains. We assume that the visual similarity estimated by the learned metric should be consistent with the semantic similarity in the textual domain. Since tags are often noisy, misspelled, or meaningless, we also enforce the preservation of visual structure to prevent overfitting to those noisy tags. In addition, the metric is directly constrained to be low rank. We formulate the problem as a convex optimization with nuclear norm minimization and propose an effective optimization algorithm based on the proximal gradient method. Experimental evaluations of image retrieval with the learned metric on a real-world dataset demonstrate that our approach outperforms related methods.
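
To illustrate the kind of optimization the abstract describes, below is a minimal sketch of proximal gradient descent with nuclear norm regularization for learning a PSD metric M. It is not the authors' LRML implementation: the squared-residual loss that pulls Mahalanobis distances toward hypothetical target similarities, and all function names and parameters (lam, step, n_iter), are assumptions introduced only to show the proximal step (eigenvalue soft-thresholding) that enforces low rank.

```python
# Hypothetical sketch: nuclear-norm-regularized metric learning via proximal gradient.
# NOT the paper's LRML objective; the smooth loss below is a generic stand-in.
import numpy as np

def pairwise_sq_dist(X_i, X_j, M):
    """Squared Mahalanobis distances d_k = (x_ik - x_jk)^T M (x_ik - x_jk)."""
    D = X_i - X_j                                    # (n_pairs, dim) difference vectors
    return np.einsum('nd,de,ne->n', D, M, D)

def grad_loss(X_i, X_j, targets, M):
    """Gradient of 0.5 * sum_k (d_k - t_k)^2 with respect to M."""
    D = X_i - X_j
    r = pairwise_sq_dist(X_i, X_j, M) - targets      # residuals against target similarities
    return np.einsum('n,nd,ne->de', r, D, D)

def prox_nuclear_psd(M, tau):
    """Proximal operator of tau * ||.||_* restricted to the PSD cone:
    soft-threshold the eigenvalues, which drives small ones to zero (low rank)."""
    w, V = np.linalg.eigh((M + M.T) / 2)
    w = np.maximum(w - tau, 0.0)
    return (V * w) @ V.T

def learn_metric(X_i, X_j, targets, lam=0.1, step=1e-3, n_iter=200):
    """Proximal gradient iterations: gradient step on the smooth loss,
    then the nuclear-norm prox with threshold step * lam."""
    M = np.eye(X_i.shape[1])
    for _ in range(n_iter):
        M = prox_nuclear_psd(M - step * grad_loss(X_i, X_j, targets, M), step * lam)
    return M
```

In this sketch the low-rank constraint is handled entirely by the prox step, so each iteration stays cheap (one eigendecomposition of a dim x dim matrix) while the smooth part of the objective can be swapped for whatever visual/textual consistency loss is actually used.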