A web-based evaluation system for CBIR

  • Authors:
  • Henning Müller; Wolfgang Müller; David Squire

  • Affiliations:
  • Univ. of Geneva, Geneva, Switzerland; Univ. of Geneva, Geneva, Switzerland; Monash University, Melbourne, Australia

  • Venue:
  • MULTIMEDIA '01 Proceedings of the 2001 ACM workshops on Multimedia: multimedia information retrieval
  • Year:
  • 2001

Abstract

This paper describes a benchmark test for content-based image retrieval systems (CBIRSs) that use the query-by-example (QBE) paradigm. The benchmark is accessible via the Internet and thus makes it possible to evaluate any CBIRS that is compliant with the Multimedia Retrieval Markup Language (MRML) for query formulation and result transmission. This permits quick and easy comparison of different features and algorithms for CBIRSs. The benchmark is based on a standardized communication protocol between the benchmark server and the benchmarked system, and it uses a freely downloadable image database (DB) so that results are reproducible. A CBIR system that uses MRML, together with other components for developing MRML-based applications, can also be downloaded free of charge. The evaluation is based on several queries with known relevance sets. Several answer sets for one query image are possible if judgments from several users exist, so almost any sort of judgment can be incorporated into the system. The final results are averaged over all queries. The benchmark also includes the evaluation of several steps of relevance feedback (RF) based on the collected relevance judgments. RF performance is often regarded as even more important than performance in the first query step, because only with RF can the adaptation of the system to the user's subjective goal be measured. For the evaluation of a system with RF, the same evaluation measures are used as for the first query step.
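As a rough illustration of the averaging scheme described above, the sketch below scores each query against every available relevance set (one per judging user), averages over all queries, and applies the same measure to each RF step. The measure shown (precision at a fixed cutoff) and all function names are illustrative assumptions, not the benchmark's actual implementation or the measures used in the paper.

```python
# Illustrative sketch (not the benchmark's actual code): average an
# evaluation measure over all queries, allowing several relevance sets
# per query, and reuse the same measure for each RF step.

from typing import Dict, List, Set


def precision_at_k(ranking: List[str], relevant: Set[str], k: int) -> float:
    """Fraction of the top-k retrieved images that are relevant."""
    if k == 0:
        return 0.0
    return sum(1 for img in ranking[:k] if img in relevant) / k


def evaluate_query(ranking: List[str],
                   relevance_sets: List[Set[str]],
                   k: int = 20) -> float:
    """Score one query against every judgment set and average, so that
    differing user judgments all contribute to the result."""
    scores = [precision_at_k(ranking, rel, k) for rel in relevance_sets]
    return sum(scores) / len(scores)


def evaluate_benchmark(rankings: Dict[str, List[List[str]]],
                       judgments: Dict[str, List[Set[str]]],
                       k: int = 20) -> List[float]:
    """Return one averaged score per query step: index 0 is the first
    (QBE) step, index 1 the result after one round of feedback, etc.

    rankings[q]  -- list of result lists, one per query step for query q
    judgments[q] -- list of relevance sets (one per judging user) for q
    """
    n_steps = max(len(steps) for steps in rankings.values())
    per_step = []
    for step in range(n_steps):
        scores = [evaluate_query(steps[step], judgments[q], k)
                  for q, steps in rankings.items() if step < len(steps)]
        per_step.append(sum(scores) / len(scores))
    return per_step
```

Under these assumptions, the same routine evaluates both the initial QBE result and every subsequent feedback step, which mirrors the paper's point that identical measures are applied before and after RF.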