Bayesian Visual Reranking

  • Authors:
  • Xinmei Tian; Linjun Yang; Jingdong Wang; Xiuqing Wu; Xian-Sheng Hua

  • Affiliations:
  • Dept. of Electron. Eng. & Inf. Sci., Univ. of Sci. & Technol. of China, Hefei, China

  • Venue:
  • IEEE Transactions on Multimedia
  • Year:
  • 2011


Abstract

Visual reranking has proven effective for refining text-based video and image search results: it incorporates visual information to recover the “true” ranking list from the noisy one generated by text-based search. In this paper, we model textual and visual information from a probabilistic perspective and formulate visual reranking as an optimization problem in a Bayesian framework, termed Bayesian visual reranking. In this method, the textual information is modeled as a likelihood that reflects the disagreement between the reranked results and the text-based search results, which we call the ranking distance. The visual information is modeled as a conditional prior that indicates the consistency of ranking scores among visually similar samples, which we call visual consistency. Bayesian visual reranking derives the best reranking results by maximizing visual consistency while minimizing ranking distance. To model the ranking distance more precisely, we propose a novel pair-wise method that measures the ranking distance by the disagreement in pair-wise orders. For visual consistency, we study three different regularizers to determine how best to model it. We conduct extensive experiments on both video and image search datasets, and the results demonstrate the effectiveness of the proposed Bayesian visual reranking.
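
The trade-off the abstract describes — maximizing visual consistency while minimizing ranking distance — can be illustrated with a small sketch. The Python below is not the paper's implementation: it assumes a graph-Laplacian regularizer for visual consistency and a simple point-wise quadratic ranking distance in place of the paper's pair-wise distance, and all names, parameters (`sigma`, `c`), and the toy data are illustrative.

```python
import numpy as np

def bayesian_visual_rerank(initial_scores, features, sigma=1.0, c=0.5):
    """Refine text-based search scores using visual consistency.

    Minimizes  E(r) = r^T L r + c * ||r - r0||^2,
    where L is the graph Laplacian of a visual-similarity graph,
    r0 are the text-based scores, and c weighs the ranking distance.
    A point-wise quadratic distance is used here for tractability;
    the paper's pair-wise distance would replace that term.
    """
    r0 = np.asarray(initial_scores, dtype=float)
    X = np.asarray(features, dtype=float)

    # Visual similarity via a Gaussian kernel on pairwise feature distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Graph Laplacian L = D - W encodes the visual-consistency prior.
    L = np.diag(W.sum(axis=1)) - W

    # Closed-form minimizer of E(r): solve (L + c I) r = c r0.
    r = np.linalg.solve(L + c * np.eye(len(r0)), c * r0)

    # Return indices sorted by the refined scores, best first.
    return np.argsort(-r)

# Toy usage: 4 results with noisy text scores and 2-D visual features.
# Items 0/1 and 2/3 are visually similar, so their scores are smoothed.
scores = [0.9, 0.2, 0.85, 0.1]
feats = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]]
print(bayesian_visual_rerank(scores, feats))
```

With the quadratic distance, the objective is convex and admits the closed-form solve shown above; a pair-wise ranking distance generally requires an iterative optimizer instead.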