Co-reranking by mutual reinforcement for image search

  • Authors:
  • Ting Yao; Tao Mei; Chong-Wah Ngo

  • Affiliations:
  • University of Science and Technology of China, Hefei, P.R. China; Microsoft Research Asia, Beijing, P.R. China; City University of Hong Kong, Kowloon, Hong Kong

  • Venue:
  • Proceedings of the ACM International Conference on Image and Video Retrieval
  • Year:
  • 2010

Abstract

Most existing reranking approaches to image search focus solely on mining "visual" cues within the initial search results. However, visual information alone cannot always provide enough guidance for the reranking process; for example, visually similar images do not always convey the same relevance to the query. Observing that multi-modality cues carry complementary relevance information, we propose the idea of co-reranking for image search, which jointly explores visual and textual information. Co-reranking couples two random walks while reinforcing the mutual exchange and propagation of relevance information across the two modalities. The mutual reinforcement is iteratively updated to constrain information exchange during the random walk. As a result, the visual and textual reranking can each take advantage of more reliable information from the other after every iteration. Experimental results on a real-world dataset (MSRA-MM), collected from the Bing image search engine, show that co-reranking outperforms several existing approaches that do not consider, or only weakly consider, multi-modality interaction.
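
The abstract does not spell out the update equations, but the coupled random-walk idea can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `co_rerank`, the restart weights `alpha` and `beta`, and the cross-modality restart term are all assumptions made for the example. Each modality performs a random walk with restart over its own affinity graph, and the restart distribution blends the initial ranking with the other modality's current scores, which is one plausible reading of "mutual reinforcement."

```python
import numpy as np

def co_rerank(W_v, W_t, r0, alpha=0.8, beta=0.5, iters=50):
    """Illustrative coupled random walks for co-reranking (not the
    paper's exact update rules).

    W_v, W_t : (n, n) visual / textual affinity matrices
    r0       : (n,) initial relevance scores from the search engine
    alpha    : weight on graph propagation vs. restart
    beta     : weight on the other modality within the restart term
    """
    # Row-normalize affinities into random-walk transition matrices.
    P_v = W_v / W_v.sum(axis=1, keepdims=True)
    P_t = W_t / W_t.sum(axis=1, keepdims=True)
    r0 = r0 / r0.sum()
    r_v, r_t = r0.copy(), r0.copy()
    for _ in range(iters):
        # Each walk restarts from a blend of the initial ranking and
        # the other modality's current scores (mutual reinforcement).
        r_v = alpha * (P_v.T @ r_v) + (1 - alpha) * (beta * r_t + (1 - beta) * r0)
        r_t = alpha * (P_t.T @ r_t) + (1 - alpha) * (beta * r_v + (1 - beta) * r0)
        r_v, r_t = r_v / r_v.sum(), r_t / r_t.sum()
    # Fuse the two stationary score vectors for the final ranking.
    return (r_v + r_t) / 2
```

Under this sketch, ranking the images by the returned score vector yields the co-reranked list; setting `beta = 0` decouples the two walks and recovers independent single-modality reranking.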