Principal component analysis (PCA) is one of the most widely used unsupervised dimensionality reduction methods in pattern recognition. It preserves the global covariance structure of the data when labels are unavailable. In many practical applications, however, besides a large amount of unlabeled data it is also possible to obtain partial supervision, such as a few labeled examples and pairwise constraints, which carry far more discriminative information than unlabeled data alone. Unfortunately, PCA cannot exploit this discriminant information effectively. Conversely, traditional supervised dimensionality reduction methods such as linear discriminant analysis operate only on labeled data, so their performance deteriorates when labeled data are scarce. In this paper, we propose a novel discriminant PCA (DPCA) model that boosts the discriminant power of PCA when unlabeled data, labeled data, and pairwise constraints are all available. The derived DPCA algorithm is efficient and has a closed-form solution. Experimental results on several UCI and face data sets show that DPCA is superior to several established dimensionality reduction methods.
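The abstract's key idea — augmenting PCA's unsupervised covariance objective with scatter terms built from pairwise constraints, while retaining a closed-form eigendecomposition solution — can be illustrated with a minimal sketch. The formulation below is a generic, hypothetical instance of this family of methods, not the paper's actual DPCA objective; the function name, the trade-off weight `alpha`, and the specific combination of scatter matrices are all assumptions made for illustration.

```python
import numpy as np

def semi_supervised_pca(X, must_links, cannot_links, n_components=2, alpha=1.0):
    """Illustrative sketch (NOT the paper's DPCA): combine PCA's total-scatter
    term with pairwise-constraint scatter terms, and solve in closed form by
    eigendecomposition.

    X            : (n, d) data matrix (labeled and unlabeled points together)
    must_links   : list of (i, j) index pairs known to share a class
    cannot_links : list of (i, j) index pairs known to differ in class
    alpha        : hypothetical weight trading off supervision vs. covariance
    """
    Xc = X - X.mean(axis=0)
    St = Xc.T @ Xc / len(X)  # total scatter: the plain PCA term

    def pair_scatter(pairs):
        # average outer product of difference vectors over the given pairs
        S = np.zeros((X.shape[1], X.shape[1]))
        for i, j in pairs:
            d = (X[i] - X[j])[:, None]
            S += d @ d.T
        return S / max(len(pairs), 1)

    # Encourage directions that spread cannot-link pairs apart and
    # compress must-link pairs, on top of preserving global variance.
    S = St + alpha * (pair_scatter(cannot_links) - pair_scatter(must_links))

    # Closed-form solution: top eigenvectors of the symmetric objective matrix.
    vals, vecs = np.linalg.eigh(S)
    W = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return W  # (d, n_components) projection matrix; embed data via X @ W
```

Because the objective stays a symmetric matrix, the solution remains a single eigendecomposition, mirroring the "efficient, closed-form" property the abstract claims for DPCA.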