Considerable research effort has recently been devoted to unsupervised and (semi-)supervised dimensionality reduction (DR) techniques, and DR is widely applied in practice as a preprocessing step for classification learning. However, many DR approaches do not necessarily lead to better classification performance, and DR often suffers from the difficulty of estimating the retained dimensionality for real-world data. In this paper, we instead propose a new semi-supervised data preprocessing technique, named semi-supervised pattern shift (SSPS). The advantage of SSPS is twofold: the estimation of retained dimensionality is avoided naturally, and a shifted pattern representation that may be more favorable to classification is obtained. As further extensions of SSPS, we develop fast and out-of-sample versions, both of which are based on a shape-preserved subset selection trick. Experimental results demonstrate that the proposed SSPS is promising and effective for classification.
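To make the contrast concrete, the following is a minimal sketch (not the paper's SSPS method) of the conventional DR-as-preprocessor pipeline the abstract argues against, using PCA and a k-NN classifier from scikit-learn. The `n_components` value is exactly the "retained dimensionality" that, as the abstract notes, must be estimated for real-world data; its choice here is an illustrative assumption.

```python
# Conventional pipeline: reduce dimensionality first, then classify.
# This is a generic illustration, not the paper's SSPS technique.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_components=20 is a hypothetical choice of retained dimensionality;
# setting this hyperparameter well is the estimation problem that
# motivates preprocessing methods (such as SSPS) which avoid it.
pipeline = make_pipeline(PCA(n_components=20), KNeighborsClassifier())
pipeline.fit(X_train, y_train)
print("test accuracy:", pipeline.score(X_test, y_test))
```

Note that nothing in this pipeline guarantees the reduced representation helps the downstream classifier, which is the abstract's first criticism of DR as a preprocessor.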