Decision tree learning is a well-known family of inductive learning algorithms that extract, from a given training set, classification rules whose preconditions can be represented as a disjunction of conjunctions of constraints. The name derives from the fact that these preconditions can be represented as a tree in which each internal node tests a constraint, each path from the root to a leaf is a conjunction of the constraints along it, and the tree as a whole is the disjunction of these paths. Owing to their efficiency, these methods are widely used in diverse domains such as finance, engineering, and medicine. This paper proposes a new method for constructing decision trees based on reinforcement learning. The method becomes increasingly efficient as it constructs more and more decision trees, because it learns which constraint should be tested first in order to classify a subset of the training examples accurately and efficiently. This makes the method well suited to problems where the training set changes frequently and where the classification rules may shift slightly over time. The method is also effective when different constraints have different testing costs. The paper concludes with performance results and a summary of the features of the proposed algorithm.
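The abstract does not specify the paper's algorithm, but the core idea, learning across repeated constructions which constraint to test first, can be illustrated with a toy sketch. The dataset, the purity-based reward signal, the Q-table keyed by the path of tested attributes, and the TD-style update rule below are all assumptions chosen for illustration, not the paper's actual design.

```python
import random
from collections import defaultdict

# Toy training set (assumed for illustration): (features, label) pairs.
DATA = [
    ({"outlook": "sunny", "windy": False}, "no"),
    ({"outlook": "sunny", "windy": True}, "no"),
    ({"outlook": "rain", "windy": False}, "yes"),
    ({"outlook": "rain", "windy": True}, "no"),
    ({"outlook": "overcast", "windy": False}, "yes"),
]
ATTRS = ["outlook", "windy"]

# Q[(state, attr)]: learned value of testing `attr` when the constraints in
# `state` have already been tested on the current path (an assumed encoding).
Q = defaultdict(float)

def purity(examples):
    """Fraction of examples sharing the majority label (1.0 = pure leaf)."""
    labels = [lbl for _, lbl in examples]
    return max(labels.count(l) for l in set(labels)) / len(labels)

def build(examples, remaining, state=(), eps=0.2, alpha=0.1):
    """Build one tree, choosing split attributes epsilon-greedily by Q."""
    if purity(examples) == 1.0 or not remaining:
        labels = [lbl for _, lbl in examples]
        return max(set(labels), key=labels.count)   # leaf: majority label
    # Epsilon-greedy choice of which constraint to test first at this node.
    if random.random() < eps:
        attr = random.choice(remaining)
    else:
        attr = max(remaining, key=lambda a: Q[(state, a)])
    parts = defaultdict(list)
    for feats, lbl in examples:
        parts[feats[attr]].append((feats, lbl))
    # Assumed reward: mean purity of the children this split produces.
    reward = sum(purity(p) for p in parts.values()) / len(parts)
    Q[(state, attr)] += alpha * (reward - Q[(state, attr)])  # TD-style update
    rest = [a for a in remaining if a != attr]
    children = {v: build(p, rest, state + (attr,), eps, alpha)
                for v, p in parts.items()}
    return (attr, children)

def classify(tree, feats):
    """Follow constraints from the root until a leaf label is reached."""
    while isinstance(tree, tuple):
        attr, children = tree
        tree = children[feats[attr]]
    return tree

random.seed(0)
for _ in range(50):          # repeated constructions refine the Q-table
    tree = build(DATA, ATTRS)
```

Each rebuild reuses the Q-table, so split choices that previously led to pure partitions are preferred earlier, which is the property the abstract attributes to the method when training sets change frequently.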