Semantic Visual Abstraction for Face Recognition

  • Authors:
  • Yang Cai, David Kaufer, Emily Hart, Elizabeth Solomon

  • Affiliations:
  • Ambient Intelligence Lab, Carnegie Mellon University; English, Carnegie Mellon University; Electrical and Computer Engineering, Carnegie Mellon University; Art, Carnegie Mellon University

  • Venue:
  • ICCS '09 Proceedings of the 9th International Conference on Computational Science: Part I
  • Year:
  • 2009


Abstract

In contrast to the one-dimensional structure of natural language, images consist of two- or three-dimensional structures. This contrast in dimensionality makes the mapping between words and images a challenging, poorly understood, and undertheorized task. In this paper, we present a general theoretical framework for semantic visual abstraction in massive image databases, applied specifically to facial identification and the visual search that supports such recognition. The framework accommodates the now-commonplace observation that, through graph-based visual abstraction, language allows humans to categorize objects and to attach verbal annotations to shapes. It assumes a hidden layer between facial features and the expressive words that reference them; this hidden layer contains key points of correspondence that can be articulated mathematically, visually, or verbally. On this basis, a semantic visual abstraction network is designed for efficient facial recognition in massive visual datasets. We demonstrate that a two-way mapping between words and facial shapes is feasible for facial information retrieval and reconstruction.
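
As a concrete illustration of the two-way mapping described in the abstract, the sketch below implements a toy hidden layer in Python. The measurement names (eye_spacing, nose_width, lip_height), the vocabulary of expressive words, and all value ranges are hypothetical stand-ins, not the paper's actual features or data; the point is only to show how word-to-shape retrieval and shape-to-word annotation can route through a single layer of key correspondences.

```python
# Hedged sketch of a two-way word <-> facial-shape mapping through a
# "hidden layer" of key measurements. The measurements, words, and
# value ranges below are hypothetical illustrations, not the paper's data.

from dataclasses import dataclass

@dataclass(frozen=True)
class WordDescriptor:
    word: str      # expressive word, e.g. "wide-set"
    feature: str   # hidden-layer measurement it constrains
    lo: float      # inclusive lower bound (normalized [0, 1] units)
    hi: float      # inclusive upper bound

# Hypothetical vocabulary linking words to ranges over normalized
# facial measurements (each measurement scaled to [0, 1]).
VOCABULARY = [
    WordDescriptor("wide-set",  "eye_spacing", 0.6, 1.0),
    WordDescriptor("close-set", "eye_spacing", 0.0, 0.4),
    WordDescriptor("broad",     "nose_width",  0.6, 1.0),
    WordDescriptor("narrow",    "nose_width",  0.0, 0.4),
    WordDescriptor("full",      "lip_height",  0.6, 1.0),
    WordDescriptor("thin",      "lip_height",  0.0, 0.4),
]

def annotate(face: dict) -> list:
    """Shape -> words: return every word whose range contains the
    face's measurement for that feature."""
    return [d.word for d in VOCABULARY
            if d.feature in face and d.lo <= face[d.feature] <= d.hi]

def retrieve(words: list, faces: dict) -> list:
    """Words -> shapes: return ids of faces satisfying all word constraints."""
    constraints = [d for d in VOCABULARY if d.word in words]
    return [face_id for face_id, face in faces.items()
            if all(d.feature in face and d.lo <= face[d.feature] <= d.hi
                   for d in constraints)]

if __name__ == "__main__":
    db = {
        "face_a": {"eye_spacing": 0.7, "nose_width": 0.3, "lip_height": 0.8},
        "face_b": {"eye_spacing": 0.2, "nose_width": 0.7, "lip_height": 0.3},
    }
    print(annotate(db["face_a"]))               # ['wide-set', 'narrow', 'full']
    print(retrieve(["wide-set", "full"], db))   # ['face_a']
```

In this toy design each word constrains a single normalized measurement, so retrieval reduces to a conjunction of interval tests. The paper's graph-based abstraction would replace these flat intervals with richer key-point correspondences, but both directions of the mapping would still factor through the hidden layer in the same way.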