Coupled information-theoretic encoding for face photo-sketch recognition

  • Authors:
  • Wei Zhang; Xiaogang Wang; Xiaoou Tang

  • Affiliations:
  • Dept. of Inf. Eng., Chinese Univ. of Hong Kong, Hong Kong, China; Dept. of Electron. Eng., Chinese Univ. of Hong Kong, Hong Kong, China; Dept. of Inf. Eng., Chinese Univ. of Hong Kong, Hong Kong, China

  • Venue:
  • CVPR '11 Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
  • Year:
  • 2011


Abstract

Automatic face photo-sketch recognition has important applications in law enforcement. Recent research has focused on transforming photos and sketches into the same modality for matching, or on developing advanced classification algorithms to reduce the modality gap between features extracted from photos and sketches. In this paper, we propose a new inter-modality face recognition approach that reduces the modality gap at the feature extraction stage. A new face descriptor based on coupled information-theoretic encoding captures discriminative local face structures and effectively matches photos and sketches. Guided by maximizing the mutual information between photos and sketches in the quantized feature spaces, the coupled encoding is achieved with the proposed coupled information-theoretic projection tree, which is extended to a randomized forest to further boost performance. We create the largest face sketch database to date, containing sketches of 1,194 people from the FERET database. Experiments on this large-scale dataset show that our approach significantly outperforms state-of-the-art methods.
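To give a concrete sense of the objective that guides the coupled encoding, the sketch below computes the empirical mutual information between two sequences of discrete codes, as would result from quantizing corresponding photo and sketch patches. This is only an illustration of the quantity being maximized; the function name and the toy code sequences are assumptions, not the paper's actual tree-learning procedure, which selects the quantizers themselves to maximize this value.

```python
import math
from collections import Counter

def mutual_information(photo_codes, sketch_codes):
    """Empirical mutual information (in bits) between two discrete code
    sequences, e.g. quantized descriptors of corresponding photo and
    sketch patches. Illustrative only: the paper's projection tree
    learns the quantization so that this quantity is large."""
    n = len(photo_codes)
    joint = Counter(zip(photo_codes, sketch_codes))   # joint counts p(x, y)
    px = Counter(photo_codes)                         # marginal counts p(x)
    py = Counter(sketch_codes)                        # marginal counts p(y)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # I(X;Y) = sum p(x,y) * log2( p(x,y) / (p(x) p(y)) )
        mi += p_xy * math.log2(p_xy * n * n / (px[x] * py[y]))
    return mi

# Perfectly coupled codes: each photo code fully determines the sketch
# code, so the mutual information equals the entropy of the code
# distribution. Independent codes would give a value near zero.
photo = [0, 1, 0, 1, 2, 2, 0, 1]
sketch = [0, 1, 0, 1, 2, 2, 0, 1]
print(mutual_information(photo, sketch))
```

Intuitively, a quantizer pair that yields high mutual information assigns matching codes to corresponding photo and sketch patches, which is what makes the resulting descriptors directly comparable across the two modalities.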