Feature Subset Selection and Ranking for Data Dimensionality Reduction
IEEE Transactions on Pattern Analysis and Machine Intelligence
Principal component analysis (PCA) is probably the best-known approach to unsupervised dimensionality reduction. However, the axes of the lower-dimensional space, i.e., the principal components (PCs), are new variables that carry no clear physical meaning. As a result, interpreting results in the lower-dimensional PCA space, and acquiring data for test samples, still requires all of the original measurements. To address this problem, we develop two algorithms that link the physically meaningless PCs back to a subset of the original measurements. The main idea of the algorithms is to evaluate and select feature subsets based on their capacity to reproduce the sample projections on the principal axes. The strength of the new algorithms is that their computational complexity is significantly lower than that of feature evaluation based on data structural similarity.
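The core idea, scoring a feature subset by how well it reproduces the sample projections on the leading principal axes, can be illustrated with a small sketch. This is a hypothetical greedy forward-selection implementation of that idea, not the authors' exact algorithms: the subset score used here (least-squares reconstruction error of the PC scores) and the greedy search strategy are both assumptions made for illustration.

```python
import numpy as np

def greedy_pc_feature_selection(X, n_components=2, n_features=3):
    """Greedily select original features whose linear combination best
    reproduces the sample projections (scores) on the leading PCs.

    Hypothetical sketch of the abstract's idea; the scoring rule and
    greedy search are illustrative assumptions, not the paper's method.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    # Principal axes via SVD; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Y = Xc @ Vt[:n_components].T                 # sample projections on PCs

    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        best_j, best_err = None, np.inf
        for j in remaining:
            S = selected + [j]
            # Least-squares fit from the candidate subset to the PC scores
            B, *_ = np.linalg.lstsq(Xc[:, S], Y, rcond=None)
            err = np.linalg.norm(Y - Xc[:, S] @ B) ** 2
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy example: feature 0 dominates the direction of largest variance,
# so it should be picked first to reproduce the PC-1 projections.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([10 * base,
               base + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])
print(greedy_pc_feature_selection(X, n_components=1, n_features=2))
```

Because each candidate subset is scored with a single least-squares solve against the fixed PC scores, the cost per evaluation stays low, which reflects the computational advantage the abstract claims over structural-similarity-based feature evaluation.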