Linear dimensionality reduction methods, such as LDA, are often used in object recognition for feature extraction, but do not address the problem of how to use these features for recognition. In this paper, we propose Probabilistic LDA (PLDA), a generative probability model with which we can both extract the features and combine them for recognition. The latent variables of PLDA represent both the class of the object and the view of the object within a class. By making examples of the same class share the class variable, we show how to train PLDA and use it for recognition on previously unseen classes. The usual LDA features are derived as a result of training PLDA, but in addition have a probability model attached to them, which automatically gives more weight to the more discriminative features. With PLDA, we can build a model of a previously unseen class from a single example, and can combine multiple examples for a better representation of the class. We show applications to classification, hypothesis testing, class inference, and clustering, on classes not observed during training.
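The abstract describes a generative model with a shared class variable and a per-example view variable, and recognition phrased as hypothesis testing on unseen classes. The sketch below illustrates that structure in a common PLDA-style parameterization (observation = mean + class part + view part + noise); the dimensions, parameter values, and helper names here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper)
D, d_class, d_view = 5, 2, 2

mu = np.zeros(D)
F = rng.normal(size=(D, d_class))  # spans the between-class (identity) subspace
G = rng.normal(size=(D, d_view))   # spans the within-class (view) subspace
sigma2 = 0.1                       # isotropic residual noise variance

def sample_class():
    """Draw the latent class variable, shared by all examples of a class."""
    return rng.normal(size=d_class)

def sample_example(h):
    """Draw one observation: class part + per-example view part + noise."""
    w = rng.normal(size=d_view)
    return mu + F @ h + G @ w + rng.normal(scale=np.sqrt(sigma2), size=D)

# Two examples of the same (previously unseen) class share the class variable h
h = sample_class()
x1, x2 = sample_example(h), sample_example(h)

# Recognition as hypothesis testing: under "same class" the pair (x1, x2) is
# jointly Gaussian with cross-covariance F F^T; under "different classes" the
# cross-covariance is zero.
Sw = G @ G.T + sigma2 * np.eye(D)  # within-class covariance
Sb = F @ F.T                       # between-class covariance

def log_gauss(x, cov):
    """Log-density of a zero-mean Gaussian with the given covariance."""
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet
                   + x @ np.linalg.solve(cov, x))

def llr_same_class(x1, x2):
    """Log-likelihood ratio: same class vs. different classes."""
    z = np.concatenate([x1 - mu, x2 - mu])
    same = np.block([[Sb + Sw, Sb], [Sb, Sb + Sw]])
    diff = np.block([[Sb + Sw, np.zeros((D, D))],
                     [np.zeros((D, D)), Sb + Sw]])
    return log_gauss(z, same) - log_gauss(z, diff)
```

Because the class variable is marginalized rather than estimated per class, the same ratio applies to classes never seen in training, which is how one-example ("single example") recognition falls out of the model.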