Scenique: a multimodal image retrieval interface

  • Authors:
  • Ilaria Bartolini; Paolo Ciaccia

  • Affiliations:
  • University of Bologna, Italy; University of Bologna, Italy

  • Venue:
  • AVI '08 Proceedings of the working conference on Advanced visual interfaces
  • Year:
  • 2008


Abstract

Searching for images by using low-level visual features, such as color and texture, is known to be a powerful, yet imprecise, retrieval paradigm. The same is true if search relies only on keywords (or tags), either derived from the image context or user-provided annotations. In this demo we present Scenique, a multimodal image retrieval system that provides the user with two basic facilities: 1) an image annotator, that is able to predict keywords for new (i.e., unlabelled) images, and 2) an integrated query facility that allows the user to search for images using both visual features and tags, possibly organized in semantic dimensions. We demonstrate the accuracy of image annotation and the improved precision that Scenique obtains with respect to querying with either only features or keywords.
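The abstract describes fusing two retrieval signals, visual-feature similarity and tag match, into a single ranking. The sketch below illustrates that general idea with a toy weighted-sum fusion; the histogram-intersection metric, Jaccard tag overlap, and the `alpha` weight are illustrative assumptions, not Scenique's actual features or scoring method.

```python
from dataclasses import dataclass

@dataclass
class Image:
    name: str
    features: list[float]  # e.g. a color histogram (hypothetical feature choice)
    tags: set[str]         # keywords, possibly drawn from semantic dimensions

def visual_similarity(a: list[float], b: list[float]) -> float:
    # Histogram intersection: higher means more visually similar.
    return sum(min(x, y) for x, y in zip(a, b))

def tag_similarity(query_tags: set[str], tags: set[str]) -> float:
    # Jaccard overlap between the query tags and the image tags.
    if not query_tags or not tags:
        return 0.0
    return len(query_tags & tags) / len(query_tags | tags)

def multimodal_search(query_feats, query_tags, collection, alpha=0.5):
    # Fuse the two modalities with a weighted sum and rank descending.
    scored = [
        (alpha * visual_similarity(query_feats, img.features)
         + (1 - alpha) * tag_similarity(query_tags, img.tags), img.name)
        for img in collection
    ]
    return [name for _, name in sorted(scored, reverse=True)]
```

For example, a query with a warm color histogram and the tag "sunset" would rank a sunset photo above a forest photo, since it scores well on both modalities rather than only one, which is the intuition behind combining features and tags.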