Face Image Retrieval and Annotation Based on Two Latent Semantic Spaces in FIARS

  • Authors:
  • Hideaki Ito; Hiroyasu Koshimizu

  • Affiliations:
  • Chukyo University, Japan; Chukyo University, Japan

  • Venue:
  • ISM '06 Proceedings of the Eighth IEEE International Symposium on Multimedia
  • Year:
  • 2006

Abstract

This paper describes face image retrieval and annotation based on latent semantic indexing in FIARS. To realize these mechanisms, two latent semantic spaces are constructed from visual and symbolic features, which correspond to lengths measured at certain locations of a face and its parts, and to keywords, respectively. One latent semantic space is constructed from the visual features alone; the other is constructed from both kinds of features. The former space is used for retrieving face images similar to a given face image, and the latter for finding keywords suited to the given face image. Moreover, two types of visual features are defined: one is specified in terms of the lengths of face parts, and the other in terms of points on the outlines of a face and its parts. As an experiment, recall and precision of retrieved face images are measured from the viewpoint of whether similar face images are retrieved, and the corresponding ratios for retrieved keywords are measured, using both types of visual features, from the viewpoint of whether the retrieved keywords are suitable for the given face image. To evaluate the system, not only face images already stored in its database but also new face images are given as queries.
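
The abstract does not give implementation details, but the two-space idea can be illustrated with a standard latent-semantic-indexing construction. The following is a minimal sketch, not the paper's implementation: it assumes a simple truncated-SVD formulation, and all dimensions, feature values, and variable names (V, K, build_space, etc.) are hypothetical placeholders rather than anything defined by FIARS.

```python
# Sketch: one latent space from visual features (retrieval), one from
# visual + keyword features (annotation). Data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_visual, n_keywords = 20, 12, 8

# Hypothetical visual features (e.g., normalized lengths of face parts)
# and binary keyword assignments for the stored face images.
V = rng.random((n_faces, n_visual))
K = (rng.random((n_faces, n_keywords)) > 0.6).astype(float)

def build_space(X, rank):
    """Truncated SVD: rows of the result span a rank-k latent space."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:rank]

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Space 1: visual features only -> retrieve faces similar to a query face.
B1 = build_space(V, rank=5)
stored1 = V @ B1.T                       # stored faces projected into space 1
query_visual = rng.random(n_visual)      # visual features of a query face
q1 = B1 @ query_visual
ranking = np.argsort([-cosine(q1, f) for f in stored1])
print("most similar stored faces:", ranking[:3])

# Space 2: visual + keyword features -> suggest keywords for the query face.
X2 = np.hstack([V, K])
B2 = build_space(X2, rank=5)

# Fold the query in with its keyword part zeroed, then score each keyword
# by similarity to that keyword's unit vector projected into the same space.
q2 = B2 @ np.hstack([query_visual, np.zeros(n_keywords)])
keyword_axes = np.hstack([np.zeros((n_keywords, n_visual)),
                          np.eye(n_keywords)]) @ B2.T
scores = [cosine(q2, k) for k in keyword_axes]
print("suggested keyword indices:", np.argsort(scores)[::-1][:3])
```

In this sketch the first space supports the retrieval task (nearest stored faces by cosine similarity) and the second supports annotation (keywords ranked for the folded-in query); whether FIARS scores keywords this way is an assumption, since the abstract only states that the joint space is used for seeking keywords.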