Active shape models—their training and application. Computer Vision and Image Understanding.
A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, special issue on the 26th Annual ACM Symposium on Theory of Computing (STOC '94) and the Second Annual European Conference on Computational Learning Theory (EuroCOLT '95).
Improved boosting algorithms using confidence-rated predictions. COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory.
Face Recognition Using Active Appearance Models. ECCV '98: Proceedings of the 5th European Conference on Computer Vision, Volume II.
ICCV '98: Proceedings of the Sixth International Conference on Computer Vision.
A fast eye location method using ordinal features. Proceedings of the 2006 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology.
A voting method and its application in precise object location. ACII '05: Proceedings of the First International Conference on Affective Computing and Intelligent Interaction.
Optimal shape space and searching in ASM based face alignment. SINOBIOMETRICS '04: Proceedings of the 5th Chinese Conference on Advances in Biometric Person Authentication.
In this paper, we propose a statistical learning approach for constructing an evaluation function for face alignment. A nonlinear classification function is learned from a set of positive (good alignment) and negative (bad alignment) training examples to effectively distinguish between qualified and unqualified alignment results. The AdaBoost learning algorithm is used, in which weak classifiers constructed from edge features are combined into a strong classifier. Several strong classifiers are learned in stages using bootstrap samples during training. The evaluation function thus learned gives a quantitative confidence, and good-bad classification is achieved by comparing this confidence with a learned optimal threshold. We point out the importance of using a cascade strategy in the stagewise learning of strong classifiers. This divide-and-conquer strategy not only dramatically increases the speed of classification, but also makes training easier and good-bad classification more effective. Experimental results demonstrate that the classification function learned using the proposed approach provides semantically more meaningful scoring than the reconstruction error used in AAM for distinguishing between qualified and unqualified face alignment.
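The abstract describes the method only at a high level. As a rough illustration, the following Python sketch shows the two ingredients it names: an AdaBoost-trained strong classifier whose real-valued score serves as an alignment confidence, and a cascade in which a candidate alignment is accepted only if it clears every stage's learned threshold. Everything below is an assumption made for illustration: the stump-based weak learners, the median-threshold search, and the function names (train_adaboost_stumps, strong_confidence, cascade_accept) stand in for the paper's actual edge features, weak classifiers, and bootstrap training procedure, none of which are specified in this abstract.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=50):
    """Train one strong classifier as a weighted vote of decision stumps
    (discrete AdaBoost).

    X : (n_samples, n_features) array of precomputed feature responses,
        standing in for the paper's edge features.
    y : labels in {-1, +1}; +1 = good alignment, -1 = bad alignment.
    Returns a list of (feature_index, threshold, polarity, alpha) stumps.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # example weights, updated each round
    stumps = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        for j in range(d):                  # simplified search: one candidate
            thr = np.median(X[:, j])        # threshold (the median) per feature
            for polarity in (1.0, -1.0):
                pred = np.where(X[:, j] >= thr, polarity, -polarity)
                err = w[pred != y].sum()    # weighted training error
                if err < best_err:
                    best, best_err = (j, thr, polarity), err
        eps = np.clip(best_err, 1e-10, 1.0 - 1e-10)
        alpha = 0.5 * np.log((1.0 - eps) / eps)   # weak-classifier weight
        j, thr, polarity = best
        pred = np.where(X[:, j] >= thr, polarity, -polarity)
        w *= np.exp(-alpha * y * pred)      # up-weight the mistakes
        w /= w.sum()
        stumps.append((j, thr, polarity, alpha))
    return stumps

def strong_confidence(stumps, x):
    """Real-valued confidence of one strong classifier for a feature vector x."""
    return sum(a * (p if x[j] >= t else -p) for j, t, p, a in stumps)

def cascade_accept(cascade, x):
    """Accept x as a good alignment only if every stage's confidence clears
    that stage's learned threshold. Cheap early stages reject most bad
    alignments, which is where the cascade's speed advantage comes from.

    cascade : list of (stumps, stage_threshold) pairs.
    """
    return all(strong_confidence(stumps, x) >= thr for stumps, thr in cascade)

if __name__ == "__main__":
    # Toy demonstration on synthetic data, not the paper's experiment.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 16))
    y = np.where(X[:, 0] + X[:, 1] > 0.0, 1, -1)   # toy "alignment quality" labels
    stage = train_adaboost_stumps(X, y, n_rounds=20)
    cascade = [(stage, 0.0)]                        # single-stage cascade
    print(cascade_accept(cascade, X[0]))
```

In a multi-stage version of this sketch, each stage would be trained on bootstrap samples dominated by examples the earlier stages misclassify, so later stages can stay small while the overall classifier remains accurate; this mirrors the divide-and-conquer benefit the abstract attributes to the cascade.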