Instance-based learning algorithms are widely used due to their capacity to approximate complex target functions; however, their performance degrades significantly in the presence of irrelevant features. This paper introduces a new noise-tolerant instance-based learning algorithm, called WIB-K, that uses one or more weights per feature per class to classify integer-valued databases. For each class, a set of intervals representing the range of values of every feature is created automatically, and the non-representative intervals are discarded. The remaining (representative) intervals of each feature are compared against the representative intervals of the same feature in the other classes to assign a weight. The weight represents the discriminative power of the interval and is used in the similarity function to improve classification accuracy. The algorithm was tested on several datasets and compared against other representative machine learning algorithms, showing very competitive results.
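As a rough illustration of the general idea only, the Python sketch below shows a per-class, per-feature weighted similarity over integer-valued instances. The interval construction, the "representative interval" test, and the weighting formula used here are placeholder assumptions for illustration; they are not the WIB-K definitions, which the abstract does not specify.

```python
# Hypothetical sketch of per-class, per-feature interval weighting.
# The binning, representativeness filter, and weight formula are assumed,
# not taken from the WIB-K paper.
from collections import defaultdict

def build_intervals(X, y, bin_width=5, min_count=2):
    """Map each (class, feature) to the set of value bins that occur often
    enough to be kept as 'representative' (assumed rule)."""
    counts = defaultdict(int)                       # (cls, feat, bin) -> count
    for row, cls in zip(X, y):
        for feat, val in enumerate(row):
            counts[(cls, feat, val // bin_width)] += 1
    intervals = defaultdict(set)                    # (cls, feat) -> kept bins
    for (cls, feat, b), c in counts.items():
        if c >= min_count:                          # discard rare intervals
            intervals[(cls, feat)].add(b)
    return intervals

def interval_weights(intervals, classes):
    """Weight a bin higher when fewer other classes share it for the same
    feature (assumed notion of discriminative power)."""
    weights = {}
    for (cls, feat), bins in intervals.items():
        for b in bins:
            others = sum(1 for c in classes
                         if c != cls and b in intervals.get((c, feat), ()))
            weights[(cls, feat, b)] = 1.0 / (1 + others)
    return weights

def weighted_similarity(stored, query, cls, weights, bin_width=5):
    """Similarity between a stored instance and a query, scaled by the
    weights of the class being considered."""
    score = 0.0
    for feat, (sv, qv) in enumerate(zip(stored, query)):
        if sv // bin_width == qv // bin_width:      # same interval match
            score += weights.get((cls, feat, sv // bin_width), 0.0)
    return score
```

A classifier built on this sketch would score a query against the stored instances of each class with that class's weights and predict the class of the best-scoring neighbours, in the usual instance-based manner.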