Feature selection is an effective tool for dealing with the "curse of dimensionality". To cope with non-separable problems, feature selection in the kernel space has been investigated. However, previous studies cannot adequately estimate the intrinsic dimensionality of the kernel space, so it is difficult to accurately preserve the sketch of the kernel space using the learned basis, and feature selection performance suffers. Moreover, the computational cost of these algorithms is at least cubic in the number of training samples. In this paper, we propose a fast framework for feature selection in the kernel space. By designing a fast kernel subspace learning method, we automatically learn the intrinsic dimensionality and construct an orthogonal basis set of the kernel space. The learned basis accurately preserves the sketch of the kernel space. Backed by the constructed basis, we then select features directly in the kernel space. The whole framework has quadratic complexity in the number of training samples, which is faster than existing kernel methods for feature selection. We evaluate our work on several typical datasets and find that it not only preserves the sketch of the kernel space more accurately but also achieves better classification performance than many state-of-the-art methods.
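The abstract does not spell out the subspace learning step. One standard way to build an orthogonal kernel-space basis with an automatically chosen rank at sub-cubic cost is pivoted incomplete Cholesky factorization of the kernel matrix; the sketch below uses that technique purely as an illustrative stand-in, not as the paper's actual algorithm. The function names (`rbf_kernel`, `incomplete_cholesky_basis`) and parameters (`gamma`, `tol`, `max_rank`) are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    sq = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-gamma * np.maximum(sq, 0.0))

def incomplete_cholesky_basis(X, gamma=1.0, tol=1e-6, max_rank=None):
    """Greedy pivoted incomplete Cholesky factorization of the kernel matrix.

    Returns a factor G (n x r) with K ~= G @ G.T. The rank r is chosen
    automatically from the residual trace and can be read as an estimate
    of the intrinsic dimensionality of the kernel space. Each iteration
    costs O(n*(d + r)), so for r << n the total stays far below the
    cubic cost of a full eigendecomposition.
    """
    n = X.shape[0]
    if max_rank is None:
        max_rank = n
    diag = np.ones(n)               # residual diagonal; k(x, x) = 1 for the RBF kernel
    G = np.zeros((n, max_rank))
    pivots = []
    r = 0
    while r < max_rank and diag.sum() > tol * n:
        i = int(np.argmax(diag))    # pivot: point with the largest residual
        pivots.append(i)
        k_col = rbf_kernel(X, X[i:i + 1], gamma).ravel()
        # Gram-Schmidt step against the basis vectors found so far
        g = (k_col - G[:, :r] @ G[i, :r]) / np.sqrt(diag[i])
        G[:, r] = g
        diag = np.maximum(diag - g**2, 0.0)
        r += 1
    return G[:, :r], pivots
```

In this sketch, the rows of G give the coordinates of the training points in an orthonormal basis of the subspace spanned by the pivot points in kernel space, and the number of retained columns plays the role of the estimated intrinsic dimensionality; feature-selection scores could then be computed against these coordinates.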