A unified context model for web image retrieval

  • Authors:
  • Linjun Yang (Microsoft Research Asia, Beijing, China); Bo Geng (Peking University, Beijing, China); Alan Hanjalic (Delft University of Technology, Netherlands); Xian-Sheng Hua (Microsoft Research Asia, Beijing, China)

  • Venue:
  • ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
  • Year:
  • 2012

Abstract

Content-based web image retrieval based on the query-by-example (QBE) principle remains a challenging problem due to the semantic gap, as well as the gap between a user's intent and the representativeness of a typical image query. In this article, we propose to address this problem by integrating query-related contextual information into an advanced query model to improve the performance of QBE-based web image retrieval. We consider both the local and the global context of the query image. The local context can be inferred from the web pages and the click-through log associated with the query image, while the global context is derived from the entire corpus comprising all web images and their associated web pages. To effectively incorporate the local query context, we propose a language-modeling-based approach that handles the combined structured query representation built from the contextual and visual information. The global query context is integrated via the multi-modal relevance model, which “reconstructs” the query from the document models indexed in the corpus. In this way, the global query context is employed to compensate for noise or missing information in the query and its local context, so that a comprehensive and robust query model can be obtained. We evaluated the proposed approach on a representative product image dataset collected from the web and demonstrated that the inclusion of the local and global query contexts significantly improves the performance of QBE-based web image retrieval.
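The "reconstruction" step described above follows the general relevance-model idea from language-modeling retrieval: a query model P(w|Q) is estimated by mixing the unigram models of documents weighted by how likely each document is to have generated the query. The sketch below is a minimal text-only illustration of that idea on toy data; it is not the paper's multi-modal implementation (which additionally incorporates visual features), and all document contents, function names, and the smoothing floor `epsilon` are assumptions made for the example.

```python
from collections import Counter

# Toy corpus: each "document" stands in for an indexed web image together
# with its surrounding page text (assumed data for illustration only).
docs = {
    "d1": "red running shoes sport sneaker".split(),
    "d2": "leather shoes formal brown".split(),
    "d3": "laptop computer notebook silver".split(),
}

def doc_model(tokens):
    """Maximum-likelihood unigram model P(w | D)."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def query_likelihood(query, model, epsilon=1e-6):
    """P(Q | D) under the unigram model, with a small floor for unseen terms."""
    p = 1.0
    for w in query:
        p *= model.get(w, epsilon)
    return p

def relevance_model(query):
    """RM1-style query reconstruction: P(w | Q) ∝ Σ_D P(w | D) · P(Q | D)."""
    models = {d: doc_model(t) for d, t in docs.items()}
    weights = {d: query_likelihood(query, m) for d, m in models.items()}
    z = sum(weights.values())  # normalizer over the document weights
    expanded = Counter()
    for d, m in models.items():
        for w, p in m.items():
            expanded[w] += p * weights[d] / z
    return expanded

# A sparse query ("shoes") gets enriched with corpus terms that co-occur
# with it in likely-relevant documents (e.g. "sneaker", "leather").
print(relevance_model(["shoes"]).most_common(3))
```

Under this scheme, terms from documents unrelated to the query (here, the laptop document) receive negligible weight, which is how the global context can fill in missing query information without drowning it in corpus noise.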