In many computer vision tasks such as face recognition and image retrieval, one is often confronted with high-dimensional data. Procedures that are analytically or computationally manageable in low-dimensional spaces can become completely impractical in a space of several hundred or thousand dimensions. Thus, various techniques have been developed for reducing the dimensionality of the feature space in the hope of obtaining a more manageable problem. The most popular feature selection and extraction techniques include the Fisher score, Principal Component Analysis (PCA), and the Laplacian score. Among them, PCA and the Laplacian score are unsupervised methods, while the Fisher score is a supervised method; none of them can take advantage of both labeled and unlabeled data points. In this paper, we introduce a novel semi-supervised feature selection algorithm that makes use of both labeled and unlabeled data points. Specifically, the labeled points are used to maximize the margin between data points from different classes, while the unlabeled points are used to discover the geometrical structure of the data space. We compare the proposed algorithm with the Fisher score and the Laplacian score on face recognition. Experimental results demonstrate the efficiency and effectiveness of our algorithm.
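The idea described above can be sketched in code: a Fisher score computed on the labeled points measures class separation, a Laplacian score computed on all points (labeled and unlabeled) measures how well a feature preserves local geometric structure, and the two are combined into a single feature ranking. This is an illustrative approximation under stated assumptions, not the paper's exact objective; the function names (`fisher_score`, `laplacian_score`, `semi_supervised_rank`) and the trade-off weight `gamma` are hypothetical.

```python
import numpy as np

def fisher_score(X, y):
    # Fisher score per feature: between-class scatter / within-class scatter
    # (higher = more discriminative). Uses labeled points only.
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        nc = len(Xc)
        between += nc * (Xc.mean(axis=0) - overall_mean) ** 2
        within += nc * Xc.var(axis=0)
    return between / np.maximum(within, 1e-12)

def laplacian_score(X, k=5, t=1.0):
    # Laplacian score per feature on a kNN heat-kernel graph
    # (smaller = better locality preservation). Uses all points.
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip self at index 0
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)          # symmetrize the affinity graph
    D = W.sum(axis=1)               # vertex degrees
    L = np.diag(D) - W              # graph Laplacian
    scores = np.zeros(X.shape[1])
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - f.dot(D) / D.sum()  # remove the trivial constant component
        scores[r] = f.dot(L).dot(f) / max(f.dot(D * f), 1e-12)
    return scores

def semi_supervised_rank(X_lab, y_lab, X_unlab, gamma=1.0):
    # Combine both criteria: discrimination from labeled data,
    # geometric structure from all data. Returns features ranked best-first.
    X_all = np.vstack([X_lab, X_unlab])
    s = fisher_score(X_lab, y_lab) - gamma * laplacian_score(X_all)
    return np.argsort(-s)
```

On synthetic data where only the first feature separates the classes, the ranking should place that feature first even when only a handful of points are labeled, since the unlabeled points still contribute to the graph-based term.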