Multi-platform image search using tag enrichment

  • Authors:
  • Jinming Min (Dublin City University, Dublin, Ireland)
  • Cristover Lopes (Trinity College Dublin, Dublin, Ireland)
  • Johannes Leveling (Dublin City University, Dublin, Ireland)
  • Dag Schmidtke (Microsoft, Dublin, Ireland)
  • Gareth J.F. Jones (Dublin City University, Dublin, Ireland)

  • Venue:
  • SIGIR '12: Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval
  • Year:
  • 2012

Abstract

The number of images available online is growing steadily, and current web search engines have indexed more than 10 billion images. Approaches to image retrieval are still often text-based and operate on image annotations and captions. Image annotations (i.e. image tags) are typically short, user-generated, and of varying quality, which exacerbates the vocabulary mismatch problem between query terms and image tags. For example, a user might enter the query "wedding dress" while the relevant images are annotated with "bridal gown" or "wedding gown". This demonstration presents an image search system that uses reduction and expansion of image annotations to overcome such vocabulary mismatch problems by enriching the sparse set of image tags. Our image search application accepts a written query as input and produces a ranked list of result images and their tags as output. The system integrates methods to reduce and expand the image tag set, thus decreasing the effect of sparse image tags. It builds on different image collections such as the Wikipedia image collection (http://www.imageclef.org/wikidata) and the Microsoft Office.com ClipArt collection (http://office.microsoft.com/), but can be applied to social collections such as Flickr as well. Our demonstration system runs on PCs, tablets, and smartphones, making use of advanced user interface capabilities on mobile devices.
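To make the reduce-then-expand idea concrete, here is a minimal sketch in Python of tag enrichment for text-based image search. The synonym table, stop-tag list, and all names (SYNONYMS, STOP_TAGS, reduce_tags, expand_tags, search) are illustrative assumptions, not the paper's actual resources or implementation.

```python
# Minimal sketch of tag enrichment for text-based image search.
# SYNONYMS and STOP_TAGS are hypothetical stand-ins for the expansion
# and reduction resources a system like this would use.

# Hypothetical expansion resource (could be derived from tag
# co-occurrence statistics or a lexical resource; assumed here).
SYNONYMS = {
    "wedding dress": {"bridal gown", "wedding gown"},
    "bridal gown": {"wedding dress", "wedding gown"},
    "wedding gown": {"wedding dress", "bridal gown"},
}

# Hypothetical list of noisy user-generated tags to drop (reduction step).
STOP_TAGS = {"img", "photo", "untitled", "dsc"}


def reduce_tags(tags):
    """Reduction: normalize tags and discard uninformative ones."""
    return {t.strip().lower() for t in tags} - STOP_TAGS


def expand_tags(tags):
    """Expansion: enrich a sparse tag set with known synonyms."""
    enriched = set(tags)
    for t in tags:
        enriched |= SYNONYMS.get(t, set())
    return enriched


def search(query_terms, images):
    """Rank images by overlap between enriched query and enriched tags."""
    query = expand_tags(reduce_tags(query_terms))
    scored = []
    for name, tags in images.items():
        overlap = len(query & expand_tags(reduce_tags(tags)))
        if overlap:
            scored.append((overlap, name))
    # Highest overlap first; ties broken arbitrarily by name.
    return [name for _, name in sorted(scored, reverse=True)]


if __name__ == "__main__":
    # The "wedding dress" vs. "bridal gown" mismatch from the abstract:
    # without expansion, the query would match no image tags at all.
    images = {
        "img1.jpg": ["bridal gown", "photo"],
        "img2.jpg": ["sunset", "beach"],
    }
    print(search(["wedding dress"], images))  # -> ['img1.jpg']
```

Plain set overlap stands in for a ranked retrieval model here; the demonstration system described in the abstract would run a proper retrieval engine over the enriched tag sets instead.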