An Introduction to Support Vector Machines: And Other Kernel-Based Learning Methods
From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose
IEEE Transactions on Pattern Analysis and Machine Intelligence
Kernel Partial Least Squares Regression in Reproducing Kernel Hilbert Space
The Journal of Machine Learning Research
The CMU Pose, Illumination, and Expression Database
IEEE Transactions on Pattern Analysis and Machine Intelligence
Neighborhood Preserving Embedding
ICCV '05 Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2
Large scale semi-supervised linear SVMs
SIGIR '06 Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval
A dependence maximization view of clustering
Proceedings of the 24th international conference on Machine learning
Supervised feature selection via dependence estimation
Proceedings of the 24th international conference on Machine learning
Learning subspace kernels for classification
Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining
A Pairwise Covariance-Preserving Projection Method for Dimension Reduction
ICDM '07 Proceedings of the 2007 Seventh IEEE International Conference on Data Mining
Measuring Statistical Dependence with Hilbert-Schmidt Norms
ALT'05 Proceedings of the 16th international conference on Algorithmic Learning Theory
Dimension reduction is very important for applications in data mining and machine learning. Dependence-maximization-based supervised feature extraction (SDMFE) is an effective dimension reduction method proposed recently. A shortcoming of SDMFE is that it can use only labeled data, so it does not work well when labeled data are scarce, which is a common situation in many applications. In this paper, we propose a novel feature extraction method, called Semi-Supervised Dependence Maximization Feature Extraction (SSDMFE), which can utilize both labeled and unlabeled data simultaneously to perform feature extraction. The labeled data are used to maximize the dependence, while the unlabeled data serve as a regularization term with respect to the intrinsic geometric structure of the data. Experiments on several datasets are presented, and the results demonstrate that SSDMFE achieves much higher classification accuracy than SDMFE when the amount of labeled data is limited.
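The abstract's recipe — an HSIC-style dependence term on the labeled data plus a graph-Laplacian smoothness regularizer over all data — can be sketched roughly as follows. This is a minimal illustration, not the authors' algorithm: the function name `ssdmfe_sketch`, the linear label kernel, the binary kNN graph, and the trade-off parameter `lam` are all assumptions made for the example.

```python
import numpy as np

def ssdmfe_sketch(X_l, Y, X_all, k_neighbors=5, lam=0.1, d=2):
    """Toy semi-supervised dependence-maximizing projection (illustrative only).

    X_l:   (n_l, p) labeled samples
    Y:     (n_l, c) one-hot label matrix
    X_all: (n, p)   labeled + unlabeled samples
    Returns a (p, d) projection matrix P.
    """
    n_l = X_l.shape[0]
    H = np.eye(n_l) - np.ones((n_l, n_l)) / n_l      # centering matrix
    K_y = Y @ Y.T                                    # linear kernel on labels
    # HSIC-style dependence term between projected data and labels
    D = X_l.T @ H @ K_y @ H @ X_l

    # Graph Laplacian over ALL data (binary kNN graph, symmetrized)
    n = X_all.shape[0]
    dist = np.linalg.norm(X_all[:, None] - X_all[None, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(dist[i])[1:k_neighbors + 1]  # skip self at index 0
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)
    Lap = np.diag(W.sum(axis=1)) - W
    R = X_all.T @ Lap @ X_all                        # smoothness regularizer

    # Maximize trace(P^T (D - lam*R) P): take top-d eigenvectors
    vals, vecs = np.linalg.eigh(D - lam * R)
    return vecs[:, -d:]
```

The eigendecomposition gives orthonormal projection directions; `lam` controls how strongly the unlabeled-data geometry constrains the dependence-maximizing directions found from the (possibly few) labeled samples.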