The main difficulty in face image modeling is to decompose the semantic factors that contribute to the formation of face images, such as identity, illumination, and pose. One promising way is to organize the face images in a higher-order tensor, with each mode corresponding to one contributory factor. A technique called Multilinear Subspace Analysis (MSA) is then applied to decompose the tensor into the mode-$n$ product of several mode matrices, each of which represents one semantic factor. In practice, however, it is usually difficult to obtain such a complete training tensor, since it requires a large number of face images covering all possible combinations of the states of the contributory factors. To solve this problem, this paper proposes a method named M$^2$SA, which can work on a training tensor with massive missing values. Thus M$^2$SA can model face images even when only a small number of face images with limited variations is available (which causes missing values in the training tensor). Experiments on face recognition show that M$^2$SA works reasonably well with up to $70\%$ missing values in the training tensor.
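The two ingredients the abstract relies on can be illustrated with a small NumPy sketch: the mode-$n$ product (computed via unfolding and folding) and an EM-style completion loop that alternates between fitting a low-rank model and re-imputing the missing entries. This is an illustrative stand-in, not the paper's actual M$^2$SA algorithm; the function names and the simple truncated-SVD-on-one-unfolding model are assumptions made for the example.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of `unfold` for a target tensor of the given shape."""
    lead = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(lead), 0, mode)

def mode_n_product(T, U, mode):
    """Mode-n product T x_n U: multiply every mode-n fiber of T by U."""
    new_shape = T.shape[:mode] + (U.shape[0],) + T.shape[mode + 1:]
    return fold(U @ unfold(T, mode), mode, new_shape)

def em_complete(T, mask, rank, n_iter=100):
    """EM-style tensor completion (illustrative, not the paper's M2SA):
    alternate a truncated SVD of the mode-0 unfolding with re-imputation
    of the entries marked missing in `mask`."""
    X = np.where(mask, T, T[mask].mean())  # initialize missing entries with the observed mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(unfold(X, 0), full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, T, fold(low_rank, 0, T.shape))  # keep observed values fixed
    return X
```

In a real setting each mode of the training tensor would index one factor (identity, illumination, pose), and the missing entries correspond to factor combinations for which no face image was collected; the abstract reports that the actual method tolerates up to 70% missing values.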