Multimodal concept-dependent active learning for image retrieval

  • Authors:
  • King-Shy Goh; Edward Y. Chang; Wei-Cheng Lai

  • Affiliations:
  • ECE at University of California and VIMA Technologies, Santa Barbara, CA (all authors)

  • Venue:
  • Proceedings of the 12th annual ACM international conference on Multimedia
  • Year:
  • 2004


Abstract

It has been established that active learning is effective for learning complex, subjective query concepts for image retrieval. However, active learning has so far been applied in a concept-independent way: the kernel parameters and the sampling strategy are chosen identically regardless of the complexity of the query concept being learned. In this work, we first characterize a concept's complexity using three measures: hit-rate, isolation, and diversity. We then propose a multimodal learning approach that uses images' semantic labels to guide a concept-dependent active-learning process. Based on the complexity of a concept, we make intelligent adjustments to the sampling strategy and to the sampling pool from which images are selected and labeled, thereby improving concept learnability. Our empirical study on a 300K-image dataset shows that concept-dependent learning is highly effective for improving image-retrieval accuracy.
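The abstract's core idea — scoring a concept's complexity and then adapting the active-learning sampling accordingly — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the thresholds, the strategy names, and the exact meaning of the three measures are assumptions made for the sketch (here, hit-rate is the fraction of pool images matching the concept, and isolation and diversity are normalized 0-to-1 scores).

```python
from dataclasses import dataclass

@dataclass
class ConceptComplexity:
    hit_rate: float   # fraction of the pool matching the concept (assumed definition)
    isolation: float  # 0..1, how well-separated the concept is from others (assumed)
    diversity: float  # 0..1, spread of the concept's instances in feature space (assumed)

def choose_sampling(c: ConceptComplexity) -> tuple[str, str]:
    """Pick a sampling pool and strategy from concept complexity.

    All thresholds and strategy names below are hypothetical, chosen
    only to illustrate concept-dependent adjustment.
    """
    # A rare concept (low hit-rate) benefits from restricting the pool
    # using semantic labels, so relevant samples are reachable at all.
    pool = "semantic-label-filtered" if c.hit_rate < 0.01 else "full"

    if c.diversity > 0.5:
        # A diverse concept spans many visual clusters; sample across them.
        strategy = "cluster-spanning"
    elif c.isolation < 0.3:
        # A poorly isolated concept needs samples near the class boundary.
        strategy = "boundary-refining"
    else:
        # Otherwise fall back to standard uncertainty-based active sampling.
        strategy = "uncertainty"
    return pool, strategy
```

For example, a rare but visually diverse concept would be learned from a label-filtered pool with cluster-spanning sampling, whereas a common, compact concept would use ordinary uncertainty sampling over the full pool.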