Letters: Adaptive local hyperplane classification
Neurocomputing
A new classification model called adaptive local hyperplane (ALH) has been shown to outperform many state-of-the-art classifiers on benchmark data sets. By representing the data in a local subspace spanned by samples carefully chosen via Fisher's feature weighting scheme, ALH searches for optimal pruning parameters over a large number of iterations. However, Fisher's feature weighting scheme is less accurate on multi-class problems and on samples with high redundancy, which leads to an unreliable selection of prototypes and degrades classification performance. In this paper, we propose two improvements to standard ALH. First, we show that feature weighting based on mutual information is more accurate and robust. Second, we propose an economical numerical algorithm for the matrix inversion that is a key step in hyperplane construction. This step greatly lowers the computational cost and makes the method promising for fast applications such as on-line data mining. Experimental results on both synthetic and real benchmark data sets show that the improved method achieves better performance.
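To make the first improvement concrete, the sketch below shows one common way to turn mutual information into feature weights: estimate the empirical mutual information between each (discretized) feature and the class labels, then normalize. This is only an illustration of the general technique, not the paper's exact algorithm; the function names and the sum-to-one normalization are assumptions.

```python
from collections import Counter
import math

def mutual_information(feature, labels):
    """Empirical mutual information I(X; Y), in nats, between a
    discrete feature column and the class labels."""
    n = len(feature)
    px = Counter(feature)          # marginal counts of the feature
    py = Counter(labels)           # marginal counts of the labels
    pxy = Counter(zip(feature, labels))  # joint counts
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts over n samples
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

def mi_feature_weights(samples, labels):
    """Weight each feature by its MI with the labels, normalized to
    sum to 1 (the normalization is an illustrative choice)."""
    columns = list(zip(*samples))
    mis = [mutual_information(col, labels) for col in columns]
    total = sum(mis) or 1.0
    return [m / total for m in mis]

# Toy data: feature 0 determines the label, feature 1 is nearly noise,
# so feature 0 should receive much more weight than feature 1.
X = [(0, 1), (0, 0), (1, 1), (1, 0), (0, 1), (1, 0)]
y = [0, 0, 1, 1, 0, 1]
w = mi_feature_weights(X, y)
```

In an ALH-style pipeline, such weights would then scale each feature dimension before the distance computations used to select the prototypes spanning the local hyperplane.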