Locally non-negative linear structure learning for interactive image retrieval

  • Authors:
  • Lei Bao;Juan Cao;Tian Xia;Yong-Dong Zhang;Jintao Li

  • Affiliations:
Laboratory for Advanced Computing Technology Research, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

  • Venue:
  • MM '09 Proceedings of the 17th ACM international conference on Multimedia
  • Year:
  • 2009


Abstract

A successful interactive image retrieval system is expected to quickly return as many relevant results as possible while demanding little effort from users. With these demands in mind, we first propose a novel semi-supervised learning algorithm called Locally Non-negative Linear Structure Learning (LNLS), based on the assumption that the label of each datum should be sufficiently smooth with respect to the locally non-negative linear structure of the dataset. It has two main merits: first, it is robust to the small-sample learning problem, since it learns structure from both labeled and unlabeled data; second, by enforcing non-negativity on the locally linear structure, it preserves the inherently non-negative character of image data and can truly reveal the intrinsic structure of the image corpus, especially the asymmetric relationships between images. Meanwhile, we develop an online updating algorithm for LNLS to tackle its large computational cost, so the model can be generalized to new queries or newly labeled samples without retraining. Furthermore, we propose an active learning method for LNLS that makes the most of users' effort to improve the learner. Encouraging experimental results demonstrate the effectiveness and efficiency of the proposed methods.
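The abstract's core idea can be illustrated with a minimal sketch: each sample is reconstructed from its nearest neighbors under a non-negativity constraint on the weights, and labels are then propagated smoothly over the resulting (generally asymmetric) weight graph. This is only a hypothetical reading of the abstract-level description, not the paper's actual formulation; the function names, the choice of `scipy.optimize.nnls` for the constrained fit, and the simple clamped label-propagation loop are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def nonneg_reconstruction_weights(X, k=3):
    """Sketch of the 'locally non-negative linear structure' step:
    reconstruct each sample from its k nearest neighbors with weights
    constrained to be non-negative (assumed formulation, not the paper's).
    """
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]      # nearest neighbors, self excluded
        # min_w ||X[i] - sum_j w_j X[nbrs[j]]||  subject to  w >= 0
        w, _ = nnls(X[nbrs].T, X[i])
        if w.sum() > 0:
            w /= w.sum()                       # normalize weights to sum to 1
        W[i, nbrs] = w
    return W                                   # row-stochastic-ish, asymmetric

def propagate_labels(W, y, alpha=0.9, iters=50):
    """Toy label smoothing over the learned structure: labeled points
    (y != 0) stay clamped; unlabeled points absorb neighbor scores."""
    f = y.astype(float).copy()
    clamp = y != 0
    for _ in range(iters):
        f = alpha * (W @ f)
        f[clamp] = y[clamp]
    return f

# Tiny two-cluster example: one labeled sample per cluster.
X = np.array([[1.0, 1.0], [1.1, 1.0], [1.0, 1.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([1, 0, 0, -1, 0, 0])
W = nonneg_reconstruction_weights(X, k=2)
f = propagate_labels(W, y)
```

Note that `W` is generally asymmetric (point i may weight point j without the reverse holding), which loosely mirrors the abstract's remark about asymmetric relationships between images.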