Object identification and retrieval from efficient image matching: Snap2Tell with the STOIC dataset

  • Authors:
  • Jean-Pierre Chevallet; Joo-Hwee Lim; Mun-Kew Leong

  • Affiliations:
  • IPAL (UMI), Centre National de la Recherche Scientifique (CNRS), France and Institute for Infocomm Research (I2R), Singapore; Institute for Infocomm Research (I2R), Singapore; Institute for Infocomm Research (I2R), Singapore

  • Venue:
  • Information Processing and Management: an International Journal - Special issue: AIRS2005: Information retrieval research in Asia
  • Year:
  • 2007

Abstract

Traditional content-based image retrieval attempts to retrieve images that match the syntactic features of a query image. Annotated image banks and Google allow the use of text to retrieve images. In this paper, we study the task of using the content of an image to retrieve information in general. We describe the significance of object identification in an information retrieval paradigm that uses an image set as an intermediate means of indexing and matching. We also describe a unique Singapore Tourist Object Identification Collection with associated queries and relevance judgments for evaluating the new task, and the need for efficient image matching using simple image features. We present a comprehensive experimental evaluation of the effects of feature dimensions, context, spatial weightings, coverage of image indexes, and query devices on task performance. Lastly, we describe the current system developed to support mobile image-based tourist information retrieval.
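
To make the "efficient image matching using simple image features" and "spatial weightings" mentioned in the abstract concrete, the sketch below shows one plausible reading: per-cell color histograms over a grid, compared by histogram intersection with hypothetical per-cell weights. This is not the authors' implementation; the grid size, bin count, functions (grid_color_histograms, weighted_similarity), and the choice to upweight the central cell are all illustrative assumptions.

```python
# Minimal sketch, assuming simple grid-based color-histogram features
# with spatial weighting (illustrative only, not the paper's method).
import numpy as np

def grid_color_histograms(image, grid=(3, 3), bins=8):
    """Split an RGB image into grid cells and compute a normalized color histogram per cell."""
    h, w, _ = image.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = image[i * h // grid[0]:(i + 1) * h // grid[0],
                         j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogramdd(cell.reshape(-1, 3),
                                     bins=(bins, bins, bins),
                                     range=((0, 256),) * 3)
            hist = hist.ravel()
            feats.append(hist / (hist.sum() + 1e-9))
    return np.array(feats)  # shape: (num_cells, bins**3)

def weighted_similarity(query_feats, index_feats, weights):
    """Histogram-intersection similarity, combined with per-cell spatial weights."""
    per_cell = np.minimum(query_feats, index_feats).sum(axis=1)
    return float(np.dot(weights, per_cell) / weights.sum())

# Usage: rank indexed scene images against a query snapshot (random data as stand-in).
rng = np.random.default_rng(0)
query = rng.integers(0, 256, (240, 320, 3))
database = [rng.integers(0, 256, (240, 320, 3)) for _ in range(5)]

weights = np.ones(9)
weights[4] = 2.0  # hypothetical: emphasize the central cell, where the object often lies

q = grid_color_histograms(query)
scores = [weighted_similarity(q, grid_color_histograms(img), weights) for img in database]
best = int(np.argmax(scores))
print("best match:", best, "score:", round(scores[best], 3))
```

In an identification setting of this kind, the best-matching indexed image would determine which object (e.g., which tourist scene) the query depicts, and the associated description could then be returned to the mobile user.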