A relevant image search engine with late fusion: mixing the roles of textual and visual descriptors

  • Authors:
  • Franco M. Segarra; Luis A. Leiva; Roberto Paredes

  • Affiliations:
  • Universidad Politécnica de Valencia, Valencia, Spain (all authors)

  • Venue:
  • Proceedings of the 16th international conference on Intelligent user interfaces
  • Year:
  • 2011

Abstract

A fundamental problem in image retrieval is how to improve text-based retrieval systems, a challenge known as "bridging the semantic gap". Relying on visual similarity to judge semantic similarity is problematic because of the gap between low-level image content and higher-level concepts. One way to overcome this problem, and thus increase retrieval performance, is to incorporate user feedback in an interactive scenario. In our approach, a user issues a query and is presented with a set of (hopefully) relevant images, from which she selects those that are most relevant. The system then refines its results after each iteration using late fusion methods, allowing the user to dynamically tune the amount of textual and visual information used to retrieve similar images. We describe how our approach fits in a real-world setting and discuss an evaluation of the results.
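The user-tunable late fusion described in the abstract can be sketched minimally as a weighted combination of per-image textual and visual similarity scores. This is an illustrative assumption, not the paper's actual implementation: the function name, the linear score model, and the toy scores below are all hypothetical.

```python
# Hypothetical sketch of score-level late fusion for image retrieval.
# alpha is the user-tunable weight: 1.0 ranks purely by textual
# similarity, 0.0 purely by visual similarity.

def late_fusion(text_scores, visual_scores, alpha):
    """Fuse per-image textual and visual scores and return a ranking.

    text_scores / visual_scores map image ids to similarity scores
    (assumed already normalized to a comparable range, e.g. [0, 1]).
    """
    fused = {
        img: alpha * text_scores[img] + (1 - alpha) * visual_scores[img]
        for img in text_scores
    }
    # Images ranked by fused score, best first.
    return sorted(fused, key=fused.get, reverse=True)

# Toy example: two candidates on which text and visual cues disagree.
text_scores = {"img_a": 0.9, "img_b": 0.2}
visual_scores = {"img_a": 0.1, "img_b": 0.8}

print(late_fusion(text_scores, visual_scores, 0.9))  # text dominates
print(late_fusion(text_scores, visual_scores, 0.1))  # visual dominates
```

In the interactive scenario, the user's per-iteration feedback would update these score maps (e.g. by re-querying with the selected relevant images) while the slider-style `alpha` controls the textual/visual mix.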