Robust recovery of multiple light sources based on local light source constant constraint

  • Authors:
  • Jie Wei

  • Affiliations:
  • Department of Computer Science, City College and Graduate Center, City University of New York, Convent Avenue at 138th Street, New York, NY

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2003

Abstract

In this paper we are concerned with the robust calibration of the light sources in a scene from a known shape. The image of a 3-D object depends on the light source(s), the object's 3-D geometry, and its surface reflectance properties (Robot Vision, MIT Press, Cambridge, MA, 1986). Over the last two decades, intensive research in the computer vision community has been conducted along the line of shape from shading (Shape from Shading, MIT Press, Cambridge, MA, 1989), where great efforts are made to recover the 3-D geometry given a priori knowledge of the illumination and surface reflectance properties. However, as pointed out by Sato et al. (Proceedings of CVPR'99, 1999, pp. 306), little progress has been made on the recovery of the light source(s) given known shape and surface reflectance properties. In a recent paper (IEEE Trans. PAMI 23 (2001) 915), Zhang and Yang achieved multiple illuminant direction recovery based on critical points, with impressive performance. In this paper we first formulate the local light source constant constraint: in a local area of smooth lightness, the corresponding 3-D points on the object are likely to be illuminated by the same light sources. Based on this constraint we develop an algorithm to recover multiple illuminants. First, a linear system based on the Lambertian irradiance formula is formulated for each local area, and the local illuminant direction is reconstructed by a least-squares solution; to gain insensitivity to noise, the least trimmed squares method is applied. Next, a dense set of candidate critical points is obtained by a two-step robust process and used to determine the directions of the multiple illuminants with an adaptive Hough transform. The magnitude of each light source is then computed by solving an over-determined linear system formed by pooling pixels illuminated by the same combined light vector. Initial experimental results on synthetic and real-world images suggest encouraging performance.
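
The core estimation step summarized in the abstract, a per-patch Lambertian linear system solved by least squares and robustified with least trimmed squares, can be sketched as follows. This is a minimal illustrative sketch in Python/NumPy under the assumption of a single combined (albedo-scaled) light vector per patch; the function names, trimming fraction, and synthetic example are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def local_light_ls(normals, intensities):
    """Least-squares estimate of the combined (albedo-scaled) light vector s
    for one local patch, assuming the Lambertian model I_i ~= n_i . s."""
    s, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return s

def local_light_lts(normals, intensities, trim_frac=0.75, n_iter=10):
    """Least-trimmed-squares refinement: repeatedly refit on the subset of
    pixels with the smallest residuals, discarding outliers such as noisy
    pixels or pixels lit by a different light source (trim_frac is assumed)."""
    n = len(intensities)
    h = max(3, int(trim_frac * n))          # keep at least 3 equations
    keep = np.arange(n)
    for _ in range(n_iter):
        s = local_light_ls(normals[keep], intensities[keep])
        residuals = np.abs(normals @ s - intensities)
        keep = np.argsort(residuals)[:h]    # retain the h best-fitting pixels
    return local_light_ls(normals[keep], intensities[keep])

if __name__ == "__main__":
    # Synthetic patch lit by a single distant light (hypothetical data).
    rng = np.random.default_rng(0)
    true_s = np.array([0.3, 0.2, 0.9])      # assumed albedo-scaled light vector
    normals = rng.normal(size=(50, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    intensities = np.clip(normals @ true_s, 0, None) + 0.01 * rng.normal(size=50)
    print(local_light_lts(normals, intensities))
```

In this sketch the trimming loop also tends to discard shadowed pixels, where the clipped Lambertian model (intensity zero when n . s < 0) breaks the linear relation; the later stages described in the abstract (candidate critical points, adaptive Hough transform, magnitude recovery) are not covered here.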