Landmark Localisation in 3D Face Data
AVSS '09 Proceedings of the 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance
Binary neural network based 3D facial feature localization
IJCNN'09 Proceedings of the 2009 international joint conference on Neural Networks
Point-pair descriptors for 3D facial landmark localisation
BTAS'09 Proceedings of the 3rd IEEE international conference on Biometrics: Theory, applications and systems
Partial matching of interpose 3D facial data for face recognition
BTAS'09 Proceedings of the 3rd IEEE international conference on Biometrics: Theory, applications and systems
From 3D Point Clouds to Pose-Normalised Depth Maps
International Journal of Computer Vision
Regional registration for expression resistant 3-D face recognition
IEEE Transactions on Information Forensics and Security
Automatic face segmentation and facial landmark detection in range images
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Proceedings of the ACM workshop on 3D object retrieval
A training-free nose tip detection method from face range images
Pattern Recognition
Automatic 3D facial region retrieval from multi-pose facial datasets
EG 3DOR'09 Proceedings of the 2nd Eurographics conference on 3D Object Retrieval
A Machine-Learning Approach to Keypoint Detection and Landmarking on 3D Meshes
International Journal of Computer Vision
This paper presents our methodology for face and facial feature detection to improve 3D face recognition in the presence of facial expression variation. Our goal was to develop an automatic process, using only range images as input, that can be embedded in a face recognition system. To that end, our approach applies traditional image segmentation techniques to segment the face, and detects facial features by combining an adapted 2D facial feature extraction method with surface curvature information. The experiments were performed on a large, well-known face image database available through the Biometric Experimentation Environment (BEE), comprising 4,950 images. The results confirm that our method is effective for the proposed application.
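As a rough illustration of the curvature cue mentioned above: Gaussian (K) and mean (H) curvature can be estimated from a range image by finite differences, and convex elliptic regions (K > 0) are commonly used as nose-tip candidates. The sketch below is a minimal NumPy implementation under assumed conventions (Monge patch z = f(x, y), pixel-unit spacing), not the paper's actual method; sign interpretations depend on the depth convention of the sensor.

```python
import numpy as np

def surface_curvature(z):
    """Estimate Gaussian (K) and mean (H) curvature of a range image z(x, y).

    z is an HxW depth map; derivatives are finite differences in pixel units,
    using the standard Monge-patch curvature formulas for z = f(x, y).
    """
    zy, zx = np.gradient(z)            # first derivatives (axis 0 = rows = y)
    zyy, zyx = np.gradient(zy)         # second derivatives of zy
    zxy, zxx = np.gradient(zx)         # second derivatives of zx
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy * zyx) / denom**2
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
         + (1 + zy**2) * zxx) / (2 * denom**1.5)
    return K, H

# Synthetic convex cap as a stand-in for a nose-tip region of a range image.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
z = np.exp(-(x**2 + y**2) / 0.2)
K, H = surface_curvature(z)
# At the peak the surface is elliptic and convex: K > 0, H < 0
# (with depth increasing toward the viewer).
print(K[32, 32] > 0, H[32, 32] < 0)
```

Thresholding K and H in this way yields the classic HK surface classification (peak, pit, ridge, saddle), from which landmark candidates such as the nose tip and inner eye corners are typically selected.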