Multi-order visual phrase for scalable image search

  • Authors: Shiliang Zhang, Qi Tian, Qingming Huang, Wen Gao, Yong Rui

  • Affiliations: Univ. of Texas at San Antonio, San Antonio, TX; Univ. of Texas at San Antonio, San Antonio, TX; University of Chinese Academy of Sciences, Beijing, China; Peking University, Beijing, China; Microsoft Research Asia, Beijing, China

  • Venue: Proceedings of the Fifth International Conference on Internet Multimedia Computing and Service
  • Year: 2013

Abstract

A visual phrase captures extra spatial cues among single visual words and thus shows better discriminative power than a single visual word in image retrieval. Notwithstanding their success, existing visual phrases have obvious shortcomings: 1) limited flexibility, i.e., two visual phrases are considered for matching only if they contain the same number of visual words; 2) large quantization error and low repeatability, i.e., the quantization errors of individual visual words accumulate in a visual phrase, making it harder to match than a single visual word. To avoid these issues, we propose the multi-order visual phrase, which combines two complementary cues: the center visual word quantized from the local descriptor of each image keypoint, and the visual and spatial cues of multiple nearby keypoints. Two multi-order visual phrases are matched flexibly by first comparing their center visual words, then estimating a match confidence by checking the spatial and visual consistency of their neighbor keypoints. Therefore, the multi-order visual phrase does not sacrifice the repeatability of single visual words and is more robust to quantization error than existing visual phrases. We evaluate the multi-order visual phrase on UKbench, Oxford5K, and 1 million distractor images collected from Flickr. Comparisons with recent retrieval approaches clearly demonstrate the competitive accuracy and significantly better efficiency of the multi-order visual phrase.
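The two-stage matching the abstract describes (compare center visual words first, then score the spatial and visual consistency of neighbor keypoints) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the `Neighbor` fields, the binning, and the confidence ratio are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Neighbor:
    # Hypothetical encoding of one nearby keypoint relative to the center:
    word: int       # quantized visual word of the neighbor descriptor
    angle_bin: int  # quantized relative orientation w.r.t. the center keypoint
    dist_bin: int   # quantized relative distance w.r.t. the center keypoint

@dataclass(frozen=True)
class MultiOrderPhrase:
    center_word: int          # visual word of the center keypoint
    neighbors: frozenset      # set of Neighbor entries around the center

def match_confidence(p: MultiOrderPhrase, q: MultiOrderPhrase) -> float:
    """Stage 1: reject unless the center visual words agree.
    Stage 2: score by the fraction of neighbors consistent in both
    visual word and spatial bins (an assumed confidence measure)."""
    if p.center_word != q.center_word:
        return 0.0
    if not p.neighbors and not q.neighbors:
        return 1.0  # centers match and there are no neighbors to check
    consistent = p.neighbors & q.neighbors
    return len(consistent) / max(len(p.neighbors), len(q.neighbors))

# Usage: phrases with differing neighbor sets can still partially match,
# which is the flexibility rigid equal-length visual phrases lack.
p = MultiOrderPhrase(5, frozenset({Neighbor(1, 0, 1), Neighbor(2, 1, 2)}))
q = MultiOrderPhrase(5, frozenset({Neighbor(1, 0, 1), Neighbor(3, 1, 2)}))
print(match_confidence(p, q))  # one of two neighbors agrees -> 0.5
```

Because the center word is matched on its own before any neighbor check, a failure to reproduce the neighborhood degrades the confidence score rather than vetoing the match, which is how the scheme avoids aggregating quantization error the way fixed-length phrases do.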