Learning attribute-aware dictionary for image classification and search

  • Authors:
  • Junjie Cai; Zheng-Jun Zha; Huanbo Luan; Shiliang Zhang; Qi Tian

  • Affiliations:
  • University of Texas at San Antonio, San Antonio, TX, USA; National University of Singapore, Singapore, Singapore; National University of Singapore, Singapore, Singapore; University of Texas at San Antonio, San Antonio, TX, USA; University of Texas at San Antonio, San Antonio, TX, USA

  • Venue:
  • Proceedings of the 3rd ACM International Conference on Multimedia Retrieval (ICMR '13)
  • Year:
  • 2013

Abstract

The bag-of-visual-words (BoW) model has recently been widely advocated for image classification and search. However, one critical limitation of the existing BoW model is its lack of semantic information. To alleviate this issue, it is imperative to construct a semantic-aware visual dictionary. In this paper, we propose a novel approach for learning a visual word dictionary that embeds intermediate-level semantics. Specifically, we first introduce an Attribute-aware Dictionary Learning (AttrDL) scheme to learn multiple sub-dictionaries with specific semantic meanings. We divide the training images into different sets, each representing a specific attribute, and learn an attribute-aware sub-vocabulary for each image set. The resulting sub-vocabularies are therefore more semantically discriminative than traditional vocabularies. Second, to obtain a semantic-aware and discriminative BoW representation with the learned sub-vocabularies, we adopt the idea of L2,1-norm regularized sparse coding and recode the resulting sparse representation of each image. Experimental results show that the proposed scheme outperforms state-of-the-art algorithms in both image classification and search tasks.
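
The two-stage idea described in the abstract, learning one sub-dictionary per attribute-specific image set, concatenating the sub-dictionaries, and then coding features with an L2,1-norm (group-sparse) regularizer, can be illustrated roughly as follows. This is a minimal sketch rather than the authors' implementation: the synthetic attribute sets, the group_sparse_code helper, and all parameter values are illustrative assumptions, and scikit-learn's DictionaryLearning merely stands in for whatever dictionary learning solver the paper actually uses.

```python
"""Rough sketch of attribute-aware dictionary learning (AttrDL):
learn a sub-dictionary per attribute set, stack them, then encode
with group-sparse (l2,1-style) regularization over attribute blocks.
All data here are synthetic placeholders."""
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_features, atoms_per_attr = 64, 32

# Stand-ins for attribute-specific descriptor sets; in the paper these would be
# local features extracted from images sharing a given attribute.
attribute_sets = {
    "furry":    rng.standard_normal((500, n_features)),
    "metallic": rng.standard_normal((500, n_features)),
    "wooden":   rng.standard_normal((500, n_features)),
}

# 1) Learn an attribute-aware sub-dictionary for each image set.
sub_dicts = []
for name, X in attribute_sets.items():
    dl = DictionaryLearning(n_components=atoms_per_attr, alpha=1.0,
                            max_iter=20, random_state=0)
    dl.fit(X)
    sub_dicts.append(dl.components_)      # shape: (atoms_per_attr, n_features)

D = np.vstack(sub_dicts)                  # concatenated dictionary

# 2) Group-sparse coding via proximal gradient on
#    min_a 0.5 * ||x - D^T a||^2 + lam * sum_g ||a_g||_2,
#    where each group g is one attribute's block of atoms.
def group_sparse_code(x, D, groups, lam=0.1, n_iter=200):
    a = np.zeros(D.shape[0])
    step = 1.0 / np.linalg.norm(D @ D.T, 2)   # inverse Lipschitz constant
    for _ in range(n_iter):
        grad = D @ (D.T @ a - x)              # gradient of the data term
        a = a - step * grad
        for g in groups:                      # group soft-thresholding
            norm = np.linalg.norm(a[g])
            a[g] = 0.0 if norm == 0 else max(0.0, 1 - step * lam / norm) * a[g]
    return a

group_slices = [slice(i * atoms_per_attr, (i + 1) * atoms_per_attr)
                for i in range(len(attribute_sets))]
code = group_sparse_code(rng.standard_normal(n_features), D, group_slices)
print("active attribute groups:",
      [i for i, g in enumerate(group_slices) if np.linalg.norm(code[g]) > 1e-6])
```

The group soft-thresholding step can zero out an entire attribute block at once, which is what makes the resulting code attribute-selective rather than merely element-wise sparse.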