An analysis of a procedure to build decision trees based on imprecise probabilities and uncertainty measures, called CDT, is presented. We compare this procedure with the classic ones based on Shannon's entropy for precise probabilities. We find that the handling of imprecision is a key factor in improving the method's performance, as has been shown for class noise problems in classification. We present a new procedure for building decision trees that extends the imprecision in the CDT procedure to the processing of all the input variables. We show, via an experimental study on data sets with general noise (noise in all the input variables), that this new procedure builds smaller trees and gives better results than the original CDT and classic decision trees.
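As a minimal sketch of the split criterion behind credal decision trees, the code below contrasts Shannon entropy with the maximum (upper) entropy of a credal set built from the imprecise Dirichlet model (IDM), and combines them into an imprecise information gain. The function names are illustrative, the IDM parameter defaults to s = 1 as is common in this line of work, and the water-filling computation of the maximum entropy follows one standard formulation; none of these details are given in the abstract itself.

```python
import math

def shannon_entropy(counts):
    """Shannon entropy (bits) of the precise relative-frequency estimate."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def upper_entropy(counts, s=1.0):
    """Maximum entropy over the IDM credal set with parameter s.

    Under the IDM each class probability lies in
    [n_i / (N + s), (n_i + s) / (N + s)].  Entropy is maximised by
    "water-filling": the extra mass s is poured onto the smallest
    counts so the resulting distribution is as uniform as possible.
    """
    n = sum(counts)
    c = sorted(float(x) for x in counts)
    level, mass = c[0], s
    while mass > 1e-12:
        low = sum(1 for x in c if x <= level + 1e-12)   # counts at the current floor
        nxt = c[low] if low < len(c) else math.inf
        need = (nxt - low * 0 - level) * low            # mass to lift the floor to nxt
        if need >= mass:
            level += mass / low
            mass = 0.0
        else:
            mass -= need
            level = nxt
    probs = [max(x, level) / (n + s) for x in c]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def imprecise_info_gain(class_counts, branches, s=1.0):
    """IIG(C, X) = H*(K(C)) - sum_x (N_x / N) * H*(K(C | X = x))."""
    n = sum(class_counts)
    gain = upper_entropy(class_counts, s)
    for counts in branches:
        gain -= (sum(counts) / n) * upper_entropy(counts, s)
    return gain

# Example: 12 examples over 3 classes; a candidate attribute splits
# them into two branches.  Upper entropy never falls below Shannon.
counts = [4, 2, 6]
print(shannon_entropy(counts))                              # ~1.46 bits
print(upper_entropy(counts, s=1.0))                         # ~1.53 bits
print(imprecise_info_gain(counts, [[4, 0, 1], [0, 2, 5]]))  # ~0.25
```

This sketch covers only the imprecise treatment of the class variable; the extension described in the abstract goes further by applying the same kind of imprecision to all the input variables when the tree is built.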