Unsupervised Face Annotation by Mining the Web

  • Authors:
  • Duy-Dinh Le; Shin'ichi Satoh

  • Venue:
  • ICDM '08: Proceedings of the 2008 Eighth IEEE International Conference on Data Mining
  • Year:
  • 2008


Abstract

Searching for images of people is an essential task for image and video search engines. However, current search engines have limited capabilities for this task because they rely on text associated with images and videos, and such text is likely to return many irrelevant results. We propose a method for retrieving relevant faces of one person by learning the visual consistency among results retrieved from text-correlation-based search engines. The method consists of two steps. In the first step, each candidate face obtained from a text-based search engine is ranked with a score that measures the distribution of visual similarities among the faces; faces that are likely to be very relevant or irrelevant are ranked at the top or bottom of the list, respectively. The second step improves this ranking by treating the problem as a classification task in which input faces are classified as 'person-X' or 'non-person-X', and the faces are re-ranked according to the relevance score inferred from the classifier's probability output. To train this classifier, we use a bagging-based framework that combines the results of multiple weak classifiers trained on different subsets. These training subsets are extracted and labeled automatically from the ranked list produced by the classifier trained in the previous iteration. In this way, the accuracy of the ranked list increases over a number of iterations. Experimental results on various face sets retrieved from captions of news photos show that retrieval performance improves after each iteration, with the final performance higher than that of existing algorithms.
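The abstract describes both steps concretely enough to sketch. Below is a minimal, illustrative Python sketch (NumPy + scikit-learn), assuming each face is already encoded as a fixed-length feature vector; the kNN-density consistency score, the logistic-regression weak learners, and every parameter (k, n_iter, n_bags, frac, n_top, n_bottom) are assumptions chosen for illustration, not the paper's exact choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def consistency_scores(feats, k=10):
    """Step 1: score each face by how tightly packed its visual neighborhood is.

    Faces of the queried person tend to be mutually similar, so a face whose
    k nearest neighbors are close (high mean cosine similarity) is likely
    relevant; outlier faces collect low scores and sink to the bottom.
    """
    x = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)       # exclude self-similarity
    knn = np.sort(sim, axis=1)[:, -k:]   # the k most similar other faces
    return knn.mean(axis=1)

def bagging_rerank(feats, scores, n_iter=5, n_bags=10, frac=0.7,
                   n_top=50, n_bottom=50, seed=0):
    """Step 2: iterative bagging-based re-ranking.

    Each round pseudo-labels the current top faces as 'person-X' and the
    bottom faces as 'non-person-X', trains several weak classifiers on
    random subsets of those pseudo-labels, and uses the averaged
    probability output as the new relevance score.
    """
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        order = np.argsort(-scores)
        pos, neg = order[:n_top], order[-n_bottom:]
        probs = np.zeros(len(feats))
        for _ in range(n_bags):
            # Sample positives and negatives separately so every bag
            # contains both classes.
            p = rng.choice(pos, size=max(2, int(frac * len(pos))), replace=False)
            n = rng.choice(neg, size=max(2, int(frac * len(neg))), replace=False)
            X = feats[np.concatenate([p, n])]
            y = np.concatenate([np.ones(len(p)), np.zeros(len(n))])
            clf = LogisticRegression(max_iter=1000).fit(X, y)
            probs += clf.predict_proba(feats)[:, 1]
        scores = probs / n_bags              # averaged ensemble output
    return np.argsort(-scores), scores

# Usage: feats is an (n_faces, d) array of descriptors for one query name.
# scores0 = consistency_scores(feats)
# ranking, final_scores = bagging_rerank(feats, scores0)
```

Sampling positives and negatives separately in each bag is one simple way to keep every weak learner two-class; the original framework may draw its subsets differently.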