Virtual face image generation for illumination and pose insensitive face recognition

  • Authors:
  • Wen Gao;Shiguang Shan;Xiujuan Chai;Xiaowei Fu

  • Affiliations:
  • Wen Gao, Shiguang Shan: Inst. of Comput. Technol., Chinese Acad. of Sci., Beijing, China; Xiujuan Chai, Xiaowei Fu: Comput. & Software Res. Lab, Electron. & Telecommun. Res. Inst., Daejeon, South Korea

  • Venue:
  • ICME '03: Proceedings of the 2003 International Conference on Multimedia and Expo, Volume 3
  • Year:
  • 2003


Abstract

Face recognition has attracted much attention over the past decades for its wide range of potential applications, and much progress has been made in recent years. However, systematic evaluations of both state-of-the-art academic algorithms and commercial systems show that the performance of most current recognition technologies degrades significantly under variations in illumination and/or pose. To address these problems, providing multiple training samples to the recognition system is a rational choice; however, enough samples are not always available in many practical applications. An alternative is to augment the training set by generating virtual views from a single face image, that is, relighting the given face image or synthesizing novel views of the given face. Based on this strategy, this paper presents a ratio-image based face relighting method and a face re-rotating approach based on linear shape prediction and image warping. To evaluate the effect of the additional virtual face images, preliminary experiments are conducted using our specific face recognition method, which shows impressive improvement compared with standard benchmark face recognition methods.
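The core idea behind ratio-image based relighting can be illustrated with a minimal sketch: given a reference face imaged under both the source and the target illumination, a probe face under the source illumination is re-lit by pixel-wise multiplication with the ratio of the two reference images. This is a hedged illustration of the general ratio-image principle, not the paper's exact algorithm; the function name, the epsilon guard, and the toy arrays are all illustrative assumptions.

```python
import numpy as np

def relight_ratio_image(probe, ref_src, ref_dst, eps=1e-6):
    """Re-light `probe` from the source to the target illumination.

    probe   -- probe face under the source illumination (H x W array)
    ref_src -- reference face under the same source illumination
    ref_dst -- the same reference face under the target illumination
    eps     -- small constant guarding against division by zero
    """
    probe = probe.astype(np.float64)
    # Pixel-wise illumination ratio estimated from the reference face.
    ratio = ref_dst.astype(np.float64) / (ref_src.astype(np.float64) + eps)
    # Apply the ratio to the probe and keep values in the valid range.
    return np.clip(probe * ratio, 0.0, 255.0)

# Toy example: the reference face is twice as bright under the target
# illumination, so the probe is brightened by (roughly) the same factor.
probe = np.full((2, 2), 100.0)
ref_src = np.full((2, 2), 50.0)
ref_dst = np.full((2, 2), 100.0)
relit = relight_ratio_image(probe, ref_src, ref_dst)
```

In practice the reference images would be pixel-aligned faces (e.g. an average face rendered under the two lighting conditions), since the ratio is only meaningful when corresponding pixels cover the same facial region.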