Experiments in mental face retrieval

  • Authors: Yuchun Fang, Donald Geman
  • Affiliations: IMEDIA Project, INRIA Rocquencourt (both authors)
  • Venue: AVBPA'05, Proceedings of the 5th International Conference on Audio- and Video-Based Biometric Person Authentication
  • Year: 2005

Abstract

We propose a relevance feedback system for retrieving a mental face picture from a large image database. This scenario differs from standard image retrieval since the target image exists only in the mind of the user, who responds to a sequence of machine-generated queries designed to display the person in mind as quickly as possible. At each iteration the user declares which of several displayed faces is “closest” to his target. The central limiting factor is the “semantic gap” between the standard intensity-based features which index the images in the database and the higher-level representation in the mind of the user which drives his answers. We explore a Bayesian, information-theoretic framework for choosing which images to display and for modeling the response of the user. The challenge is to account for psycho-visual factors and sources of variability in human decision-making. We present experiments with real users which illustrate and validate the proposed algorithms.
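To make the retrieval loop described above concrete, the sketch below illustrates one way a Bayesian, information-theoretic display strategy of this kind can be implemented. It is not the authors' system: the softmax response model over feature distances, the greedy sampling of candidate display panels, and all function names (`answer_likelihood`, `expected_info_gain`, `retrieval_session`) are assumptions introduced here for illustration. The structure it shows is the one the abstract describes: maintain a posterior over which database image is the mental target, choose the panel of faces that maximizes expected entropy reduction, and update the posterior from the user's "closest" answer.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def answer_likelihood(features, displayed, beta=5.0):
    """P(user picks displayed image d | target is image t).

    A softmax over negative feature distances -- a simple stand-in for the
    paper's psycho-visual response model, which it does not reproduce.
    Returns an array of shape (n_images, n_displayed).
    """
    d = np.linalg.norm(features[:, None, :] - features[None, displayed, :], axis=2)
    logits = -beta * d
    logits -= logits.max(axis=1, keepdims=True)
    lik = np.exp(logits)
    return lik / lik.sum(axis=1, keepdims=True)

def expected_info_gain(posterior, features, displayed):
    """Expected reduction in posterior entropy if this panel is displayed."""
    lik = answer_likelihood(features, displayed)   # P(answer | target)
    p_answer = posterior @ lik                     # marginal probability of each answer
    gain = entropy(posterior)
    for j, pa in enumerate(p_answer):
        if pa > 0:
            post_j = posterior * lik[:, j] / pa    # posterior given answer j
            gain -= pa * entropy(post_j)
    return gain

def retrieval_session(features, user_answer, n_display=4, n_iters=10, rng=None):
    """Run the feedback loop: display, collect answer, Bayes-update, repeat."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(features)
    posterior = np.full(n, 1.0 / n)                # uniform prior over targets
    for _ in range(n_iters):
        # Greedy heuristic: sample random candidate panels, keep the most informative.
        best_panel, best_gain = None, -np.inf
        for _ in range(20):
            panel = rng.choice(n, size=n_display, replace=False)
            g = expected_info_gain(posterior, features, panel)
            if g > best_gain:
                best_panel, best_gain = panel, g
        choice = user_answer(best_panel)           # index of the face the user picks
        posterior *= answer_likelihood(features, best_panel)[:, choice]
        posterior /= posterior.sum()               # Bayes update
    return posterior

if __name__ == "__main__":
    # Toy usage: a simulated user whose hidden target is image 7 and who
    # always picks the displayed face nearest the target in feature space.
    features = np.random.default_rng(0).normal(size=(200, 16))
    target = 7
    def simulated_user(panel):
        return int(np.argmin(np.linalg.norm(features[panel] - features[target], axis=1)))
    posterior = retrieval_session(features, simulated_user)
    print("most probable target:", int(np.argmax(posterior)))
```

Two simplifications are worth noting. The softmax response model ignores the psycho-visual factors and decision variability the paper emphasizes, and the random sampling of candidate panels replaces whatever optimization the authors use to select the displayed set; both are placeholders chosen only to keep the loop runnable.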