Coherent phrase model for efficient image near-duplicate retrieval

  • Authors:
  • Yiqun Hu, Xiangang Cheng, Liang-Tien Chia, Xing Xie, Deepu Rajan, Ah-Hwee Tan

  • Affiliations:
  • School of Computer Engineering, Nanyang Technological University, Singapore (Hu, Cheng, Chia, Rajan, Tan); Microsoft Research Asia, Beijing, China (Xie)

  • Venue:
  • IEEE Transactions on Multimedia
  • Year:
  • 2009

Abstract

This paper presents an efficient and effective solution for retrieving image near-duplicates (INDs) from an image database. We introduce the coherent phrase model, which incorporates the coherency of local regions to reduce the quantization error of the bag-of-words (BoW) model. In this model, each local region is characterized by a visual phrase of multiple descriptors instead of a visual word of a single descriptor. We propose two types of visual phrases to encode coherency in the feature and spatial domains, respectively. By exploiting this coherency, the proposed model reduces the number of false matches and generates sparse image representations. Compared to other methods, the local coherency among the multiple descriptors of each region improves performance while preserving efficiency for IND retrieval. The proposed method is evaluated on several benchmark datasets for IND retrieval. Compared to state-of-the-art methods, it significantly improves the accuracy of IND retrieval while maintaining the efficiency of the standard BoW model. The proposed model can also be integrated with other extensions of BoW.
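The core idea — matching regions by a coherent tuple of quantized descriptors rather than a single visual word — can be illustrated with a toy sketch. This is a minimal assumption-laden illustration, not the paper's implementation: here each region is hypothetically represented by a pair of codebook word IDs (a primary descriptor plus one coherency descriptor), and a phrase match requires the whole tuple to agree.

```python
# Toy contrast between visual-word matching (BoW) and coherent visual-phrase
# matching. Each region is a tuple of quantized word IDs; the first element
# plays the role of the single BoW word, the rest encode coherency.
# All data and helper names below are hypothetical, for illustration only.

def word_matches(query, db):
    """BoW-style count: regions match if their primary word agrees."""
    return sum(1 for q in query for d in db if q[0] == d[0])

def phrase_matches(query, db):
    """Coherent-phrase count: regions match only if every word agrees."""
    return sum(1 for q in query for d in db if q == d)

# Regions as (primary word, coherency word) pairs — toy data.
query = [(3, 7), (5, 2)]
dup   = [(3, 7), (5, 2)]   # a true near-duplicate: phrases agree
other = [(3, 1), (5, 9)]   # shares primary words only

assert phrase_matches(query, dup) == 2    # true matches are preserved
assert word_matches(query, other) == 2    # BoW admits spurious matches
assert phrase_matches(query, other) == 0  # the phrase model rejects them
```

The sketch shows why quantization error shrinks: a single quantized descriptor can collide across unrelated regions, but requiring multiple descriptors to agree jointly makes accidental collisions far less likely, while true near-duplicates still match.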