Face recognition: the problem of compensating for changes in illumination direction. ECCV '94: Proceedings of the Third European Conference on Computer Vision (vol. 1).
Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Transactions on Pattern Analysis and Machine Intelligence.
What Is the Set of Images of an Object Under All Possible Illumination Conditions? International Journal of Computer Vision.
Two- and three-dimensional patterns of the face.
Recovery of 3D volume from 2-tone images of novel objects.
Object recognition in man, monkey, and machine.
From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Distortion Invariant Object Recognition in the Dynamic Link Architecture. IEEE Transactions on Computers.
Illumination Cones for Recognition under Variable Lighting: Faces. CVPR '98: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
Vision: A Computational Investigation into the Human Representation and Processing of Visual Information.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
Journal of Cognitive Neuroscience.
Humans can identify objects under varying lighting conditions with extraordinary accuracy. We investigated the behavioral aspects of this ability and compared it to the performance of the illumination cones (IC) model of Belhumeur and Kriegman [1998]. In five experiments, observers learned 10 faces under a small subset of illumination directions. We then tested observers' recognition ability under different illuminations. Across all experiments, recognition performance depended on the distance between the trained and tested illumination directions. This effect was modulated by the nature of the trained illumination directions: generalization from frontal illuminations differed from generalization from extreme illuminations. Similarly, the IC model was also sensitive to whether the training images were near-frontal or extreme. Thus, the nature of the images in the training set affects the accuracy of an object's representation under variable lighting for both humans and the model. Beyond this general correspondence, the microstructure of the generalization patterns for humans and the IC model was remarkably similar, suggesting that the two systems may employ related algorithms.
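The linear core of the IC model can be illustrated with a toy example. For a convex Lambertian surface with no attached shadows, the intensity of pixel i under light direction s is albedo_i * max(n_i . s, 0); when no pixel is shadowed the max is inactive, so the image is linear in s and three images taken under independent lights span all such images. The following sketch is my own illustration under those assumptions, not the paper's or Belhumeur and Kriegman's code; the "face" values and variable names are hypothetical.

```python
# Toy sketch of the shadow-free linear subspace underlying the
# illumination cone model (illustration only; all values hypothetical).

def render(surface, light):
    """Grey level per pixel: albedo * max(n . s, 0) (Lambertian model)."""
    return [a * max(sum(n[k] * light[k] for k in range(3)), 0.0)
            for a, n in surface]

# A tiny "face": (albedo, surface normal) per pixel.
face = [(0.9, (0.0, 0.0, 1.0)),
        (0.7, (0.3, 0.1, 0.95)),
        (0.8, (-0.2, 0.2, 0.96)),
        (0.6, (0.1, -0.3, 0.95))]

# Three near-frontal training lights; no pixel is shadowed under any of them.
s1, s2, s3 = (0.2, 0.1, 1.0), (-0.1, 0.2, 1.0), (0.1, -0.1, 1.0)
x1, x2, x3 = render(face, s1), render(face, s2), render(face, s3)

# A novel light written in the training-light basis.
a1, a2, a3 = 0.5, 0.3, 0.2
s_new = tuple(a1 * s1[k] + a2 * s2[k] + a3 * s3[k] for k in range(3))
x_new = render(face, s_new)

# Linearity: the novel image is the same combination of the training images.
predicted = [a1 * p + a2 * q + a3 * r for p, q, r in zip(x1, x2, x3)]
assert all(abs(u - v) < 1e-9 for u, v in zip(x_new, predicted))
```

Attached shadows (pixels where n . s < 0, common under the extreme illuminations studied in the paper) break this linearity, which is why the full IC model generalizes the subspace to a convex cone built from extreme rays.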