Multimodal retrieval with relevance feedback based on genetic programming

  • Authors:
  • Rodrigo Tripodi Calumby; Ricardo Silva Torres; Marcos André Gonçalves

  • Affiliations:
  • Department of Exact Sciences, University of Feira de Santana, Feira de Santana, Brazil and RECOD Lab, Institute of Computing, University of Campinas, Campinas, Brazil; RECOD Lab, Institute of Computing, University of Campinas, Campinas, Brazil; Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte, Brazil

  • Venue:
  • Multimedia Tools and Applications
  • Year:
  • 2014

Abstract

This paper presents a framework for multimodal retrieval with relevance feedback based on genetic programming. In this supervised learning-to-rank framework, genetic programming is used to discover effective functions that combine (multimodal) similarity measures, using the information gathered across the user relevance feedback iterations. With these learned functions, several similarity measures, including those extracted from different modalities (e.g., textual and visual content), are combined into a single measure that properly encodes the user's preferences. The framework was instantiated for multimodal image retrieval using visual and textual features and was validated on two image collections, one from the University of Washington and another from the ImageCLEF Photographic Retrieval Task. For this image retrieval instance, several multimodal relevance feedback techniques were implemented and evaluated. The proposed approach produced statistically significantly better results for multimodal retrieval than single-modality approaches, and superior effectiveness compared to the best submissions of the ImageCLEF Photographic Retrieval Task 2008.
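The core idea can be illustrated with a small sketch: genetic programming evolves an expression tree that combines per-modality similarity scores (here, a hypothetical visual score `f0` and textual score `f1`), with fitness measured as average precision over the items the user labeled relevant during feedback. This is a minimal toy illustration of the technique, not the authors' implementation; the operator set, toy data, and parameters are assumptions.

```python
import random

# Toy GP search for a function combining two similarity scores into one
# ranking score. Leaves are features f0 (visual) and f1 (textual);
# internal nodes are monotone binary operators. All names and settings
# here are illustrative assumptions, not the paper's actual setup.
OPS = {"+": lambda a, b: a + b,
       "*": lambda a, b: a * b,
       "max": max}

def random_tree(depth=2):
    """Build a random expression tree over features f0 and f1."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["f0", "f1"])
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, feats):
    """Evaluate a tree on one item's feature vector (f0, f1)."""
    if isinstance(tree, str):
        return feats[int(tree[1])]
    op, left, right = tree
    return OPS[op](evaluate(left, feats), evaluate(right, feats))

def mutate(tree, depth=2):
    """Replace a randomly chosen subtree with a fresh random subtree."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth - 1), right)
    return (op, left, mutate(right, depth - 1))

def average_precision(scores, relevant):
    """AP of the ranking induced by scores; `relevant` is a set of item ids."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total = 0, 0.0
    for rank, i in enumerate(order, 1):
        if i in relevant:
            hits += 1
            total += hits / rank
    return total / max(len(relevant), 1)

def gp_feedback_round(items, relevant, pop=30, gens=15, seed=0):
    """Evolve a combination function maximizing AP on the user's feedback.

    `items` is a list of (f0, f1) similarity pairs; `relevant` holds the
    indices the user marked as relevant in this feedback iteration.
    """
    random.seed(seed)
    population = [random_tree() for _ in range(pop)]

    def fitness(t):
        return average_precision([evaluate(t, f) for f in items], relevant)

    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # elitist selection
        population = survivors + [mutate(t) for t in survivors]
    return max(population, key=fitness)
```

In the full framework, each feedback iteration would add freshly labeled items and re-run the evolution, so the combination function progressively adapts to the user's notion of relevance; a realistic instantiation would also use crossover and many more similarity measures.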