Feature subset selection is often required as a preliminary step for many pattern recognition problems. In this paper, a novel filter framework is presented to select an optimal feature subset based on a maximum weight and minimum redundancy (MWMR) criterion. The weight of each feature indicates its importance for a given task (such as clustering or classification), while the redundancy represents the correlations among features. Through the proposed MWMR criterion, we can select a feature subset in which the features are most beneficial to the subsequent task while the redundancy among them is minimal. Moreover, a pairwise-updating-based iterative algorithm is introduced to solve our framework efficiently. In the experiments, three feature weighting algorithms (Laplacian score, Fisher score, and Constraint score) are combined with two redundancy measurement methods (Pearson correlation coefficient and mutual information) to test the performance of the proposed MWMR. Experimental results on five databases (CMU PIE, Extended YaleB, Colon, DLBCL, and PCMAC) demonstrate the advantages and efficiency of our MWMR.
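To make the criterion concrete, the following is a minimal sketch of the MWMR idea: per-feature weights (here the Fisher score, one of the three weighting algorithms named above) are traded off against pairwise redundancy (here absolute Pearson correlation, one of the two redundancy measures named above). The paper solves its MWMR objective with a pairwise-updating iterative algorithm; the greedy loop below is only an illustrative approximation, and the function names, the trade-off parameter `lam`, and the greedy strategy itself are assumptions, not the authors' method.

```python
import numpy as np

def fisher_score(X, y):
    """One choice of feature weight: per-feature Fisher score
    (between-class scatter divided by within-class scatter)."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - mean_all) ** 2
        within += len(Xc) * Xc.var(axis=0)
    return between / (within + 1e-12)

def pearson_redundancy(X):
    """One choice of redundancy measure: absolute Pearson
    correlation between every pair of features."""
    return np.abs(np.corrcoef(X, rowvar=False))

def mwmr_select(w, R, k, lam=1.0):
    """Greedy sketch of a maximum-weight / minimum-redundancy criterion:
    start from the highest-weight feature, then repeatedly add the feature
    whose weight minus lam times its mean redundancy with the features
    chosen so far is largest. (Illustrative only; the paper uses a
    pairwise-updating iterative algorithm instead of this greedy loop.)"""
    selected = [int(np.argmax(w))]
    while len(selected) < k:
        candidates = [j for j in range(len(w)) if j not in selected]
        scores = [w[j] - lam * R[j, selected].mean() for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

# Hypothetical example: feature 1 is nearly as heavy as feature 0 but
# almost perfectly correlated with it, so MWMR skips it for feature 2.
w = np.array([1.0, 0.95, 0.5, 0.1])
R = np.array([[1.00, 0.99, 0.10, 0.05],
              [0.99, 1.00, 0.12, 0.06],
              [0.10, 0.12, 1.00, 0.07],
              [0.05, 0.06, 0.07, 1.00]])
print(mwmr_select(w, R, k=2, lam=1.0))  # → [0, 2]
```

Note how the redundancy term changes the outcome: ranking by weight alone would pick features 0 and 1, but penalizing the 0.99 correlation between them makes the less redundant feature 2 a better second choice.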