We propose two new models for view-independent face recognition, which falls under the category of multiview approaches. We use the so-called mixture of experts (MOE), in which the problem space is divided into several subspaces, one per expert, and the outputs of the experts are combined by a gating network to form the final output. Our focus is on the way the face space is partitioned by the MOE. In our first model, the experts are not biased to prefer one class of faces over another; instead, the gating network learns a partition of the input face space and trusts one expert within each partition. We call this method "self-directed partitioning". In our second model, we direct the experts to specialize in predetermined regions of the face space by developing teacher-directed learning methods for the MOE. By including teacher information about the pose of the input face image in the training phase, each expert is directed to learn faces of a specific pose class; we refer to this as "teacher-directed partitioning". Thus, in the second model, instead of allowing the MOE to partition the face space in its own way, the space is quantized into a number of predetermined views and the MOE is trained to adapt to that partitioning. The experimental results support our claim that directing the mixture of experts toward a predetermined partitioning of the face space is the more beneficial way of using MOE for view-independent face recognition.
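The MOE combination rule described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual architecture: the dimensions, the use of linear experts, and the linear gating network are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative sizes: d-dimensional face features, k identities, m experts.
d, k, m = 64, 10, 3

# Hypothetical parameters: each expert i is a linear classifier W[i] (k x d);
# the gating network is another linear map G (m x d).
W = rng.normal(scale=0.1, size=(m, k, d))
G = rng.normal(scale=0.1, size=(m, d))

def moe_forward(x):
    """Blend expert outputs with gating weights: y = sum_i g_i(x) * y_i(x)."""
    gates = softmax(G @ x)        # (m,)   gating network's trust in each expert
    expert_out = softmax(W @ x)   # (m, k) each expert's class posterior
    return gates @ expert_out     # (k,)   final blended posterior

x = rng.normal(size=d)            # a stand-in for one face feature vector
y = moe_forward(x)                # y sums to 1 (convex mix of posteriors)
```

In the self-directed variant, the gates are learned jointly with the experts from the recognition error alone; in the teacher-directed variant, the gating targets during training would instead be fixed by the known pose class of each training image, so that each expert sees (and specializes in) only one view.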