Detecting and removing specularities in facial images

  • Authors:
  • Martin D. Levine; Jisnu Bhattacharyya

  • Affiliations:
  • Department of Electrical and Computer Engineering, Center For Intelligent Machines, McGill University, 3480 University Street, Montreal, Que., Canada H3A 2A7 (both authors)

  • Venue:
  • Computer Vision and Image Understanding
  • Year:
  • 2005

Abstract

Specularities often confound algorithms designed to solve computer vision tasks such as image segmentation, object detection, and tracking. These tasks usually require color image segmentation to partition an image into regions, where each region corresponds to a particular material. Due to discontinuities caused by shadows and specularities, a single material is often segmented into several sub-regions. In this paper, a specularity detection and removal technique is proposed that requires no camera calibration or other a priori information about the scene. The approach specifically addresses detecting and removing specularities in facial images. The image is first processed by the Luminance Multi-Scale Retinex [B.V. Funt, K. Barnard, M. Brockington, V. Cardei, Luminance-Based Multi-Scale Retinex, AIC'97, Kyoto, Japan, May 1997]. Second, potential specularities are detected and a wavefront is propagated outwards from the peak of each specularity to its boundary, or until a material boundary is reached. Upon attaining the specularity boundary, the wavefront contracts inwards, coloring in the specularity until it no longer exists. The third step is discussed in a companion paper [M.D. Levine, J. Bhattacharyya, Removing shadows, Pattern Recognition Letters 26 (2005) 251-265], which introduces a method for detecting and removing shadows. That approach trains Support Vector Machines to identify shadow boundaries from their boundary properties; the classified boundaries are then used to locate shadowed regions in the image and assign to them the color of non-shadow neighbors of the same material. Based on these three steps, we show that more meaningful color image segmentations can be achieved by compensating for illumination with the Illumination Compensation Method proposed in this paper. It is also demonstrated that the accuracy of facial skin detection improves significantly when this illumination compensation approach is used. Finally, we show how illumination compensation can increase the accuracy of face recognition.
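The first two stages of the pipeline lend themselves to short illustrative sketches. Below is a minimal Python sketch of a luminance-based Multi-Scale Retinex in the spirit of Funt et al.; the function name `luminance_msr`, the choice of surround scales, and the mean-based luminance estimate are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance_msr(rgb, sigmas=(15, 80, 250), eps=1e-6):
    """Luminance-based Multi-Scale Retinex (illustrative sketch).

    rgb: float array in [0, 1], shape (H, W, 3).
    The retinex is applied to the luminance channel only, and the colour
    channels are rescaled by the resulting gain so chromaticity is preserved.
    """
    # Luminance as a simple channel average (one common choice; an assumption here).
    lum = rgb.mean(axis=2) + eps

    # Multi-Scale Retinex on the luminance:
    # average of log(L) - log(Gaussian_sigma * L) over several surround scales.
    msr = np.zeros_like(lum)
    for sigma in sigmas:
        surround = gaussian_filter(lum, sigma) + eps
        msr += np.log(lum) - np.log(surround)
    msr /= len(sigmas)

    # Stretch the retinex output back into [0, 1] for display.
    msr = (msr - msr.min()) / (msr.max() - msr.min() + eps)

    # Rescale each colour channel by the luminance gain so hue/saturation stay put.
    gain = (msr + eps) / lum
    return np.clip(rgb * gain[..., None], 0.0, 1.0)
```

A toy version of the outward/inward wavefront step might look like the following sketch; the brightness and saturation thresholds, the peak-seeded breadth-first growth, and the neighbour-averaging fill are hypothetical stand-ins for the paper's actual specularity detection and material-boundary tests.

```python
import numpy as np
from collections import deque

def remove_specularity(rgb, peak, bright_thresh=0.9, sat_thresh=0.2):
    """Wavefront-style specularity fill (illustrative sketch).

    rgb: float array in [0, 1], shape (H, W, 3); peak: (row, col) of a highlight.
    1. Grow a wavefront outwards from the peak over bright, desaturated pixels
       until the specularity (or a material) boundary is reached.
    2. Contract the wavefront inwards, colouring each specular pixel from its
       already-coloured, non-specular neighbours.
    """
    h, w, _ = rgb.shape
    out = rgb.copy()
    brightness = rgb.max(axis=2)
    saturation = (rgb.max(axis=2) - rgb.min(axis=2)) / (rgb.max(axis=2) + 1e-6)

    # --- outward wavefront: breadth-first growth from the peak --------------
    specular = np.zeros((h, w), dtype=bool)
    specular[peak] = True
    queue = deque([peak])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not specular[rr, cc]:
                # Stop at pixels that are no longer bright and desaturated
                # (a crude proxy for the specularity / material boundary).
                if brightness[rr, cc] > bright_thresh and saturation[rr, cc] < sat_thresh:
                    specular[rr, cc] = True
                    queue.append((rr, cc))

    # --- inward contraction: colour the rim from non-specular neighbours ----
    remaining = specular.copy()
    while remaining.any():
        rim = []
        for r, c in zip(*np.nonzero(remaining)):
            neigh = []
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not remaining[rr, cc]:
                    neigh.append(out[rr, cc])
            if neigh:
                rim.append(((r, c), np.mean(neigh, axis=0)))
        if not rim:
            break
        for (r, c), colour in rim:
            out[r, c] = colour
            remaining[r, c] = False
    return out
```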