A robust face recognition through statistical learning of local features
ICONIP'11 Proceedings of the 18th international conference on Neural Information Processing - Volume Part II
IEEE Transactions on Image Processing
Despite enormous interest in face recognition within computer vision and pattern recognition, the problem remains challenging because of the diverse variations in facial images. To handle variations such as illumination, expression, pose, and occlusion, it is important to find a discriminative feature that is robust to these variations while preserving the core information of the original images. In this paper, we develop a face recognition method that is robust to partial variations through statistical learning of local features. By representing a facial image as a set of local feature descriptors, such as the scale-invariant feature transform (SIFT), we obtain a representation robust to variations typical of 2D images, such as illumination changes and translations. By estimating the probability density of the local feature descriptors observed in facial data, we absorb variations typical of facial images, such as expressions and partial occlusions. In the classification stage, the estimated probability density defines a weighted distance measure between two images. Computational experiments on benchmark data sets show that the proposed method is more robust to partial variations such as expressions and occlusions than conventional face recognition methods.
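The pipeline described above can be sketched in simplified form: estimate a density over local descriptors from training data, then compare two images by matching each descriptor of one image to its nearest neighbor in the other, weighting each match by the descriptor's estimated density so that rare descriptors (likely caused by occlusion or noise) contribute less. This is a minimal illustration, not the authors' exact formulation; the Gaussian kernel density estimate, the nearest-neighbor matching rule, and the function names are all assumptions introduced here, and synthetic arrays stand in for real SIFT descriptors.

```python
import numpy as np

def descriptor_density(x, train_descs, bandwidth=1.0):
    # Gaussian kernel density estimate of how "typical" the local
    # descriptor x is among the descriptors seen in training data.
    d2 = ((train_descs - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean()

def weighted_image_distance(descs_a, descs_b, train_descs, bandwidth=1.0):
    # Distance between two images, each represented as a set of local
    # descriptors (rows). Each descriptor of image A is matched to its
    # nearest descriptor in image B; the match cost is weighted by the
    # estimated density, down-weighting atypical (e.g. occluded) regions.
    total, weight_sum = 0.0, 0.0
    for x in descs_a:
        d2 = ((descs_b - x) ** 2).sum(axis=1)
        nearest = np.sqrt(d2.min())
        w = descriptor_density(x, train_descs, bandwidth)
        total += w * nearest
        weight_sum += w
    return total / max(weight_sum, 1e-12)
```

In a full system the rows of each descriptor set would come from a SIFT extractor, and classification would assign a probe image to the gallery identity with the smallest weighted distance.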