Evolutionary approach for automated component-based decision tree algorithm design
Intelligent Data Analysis - Business Analytics and Intelligent Optimization
Typical data mining algorithms follow a so-called "black-box" paradigm, in which the algorithm's logic is hidden from the user so as not to overburden them. We show that "white-box" algorithms, constructed from reusable components, can offer significant benefits to researchers and end users alike. We developed a component-based algorithm design platform and used it to construct "white-box" algorithms. The platform can also be used to test algorithm parts (reusable components) and their individual or joint influence on algorithm performance. It is easily extensible with new components and algorithms, and allows measuring the partial contribution of a newly introduced component. We propose two new heuristics for decision tree algorithm design: removing insignificant attributes during induction at each tree node, and using a combined strategy for generating candidate splits that applies several splitting methods together; both showed experimental benefits. Using the proposed platform, we tested 80 component-based decision tree algorithms on 15 benchmark datasets and report the influence of reusable components on performance, together with the statistical significance of the differences found. Our study suggests that, for a specific dataset, one should search for the optimal component interplay rather than for the best among predefined algorithms.
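The component-based design the abstract describes can be sketched in miniature: the attribute filter and the split evaluator are swappable components of one node-splitting step. The sketch below is not the authors' platform; the `min_gain` threshold, the function names, and the use of information gain as the only evaluator are all illustrative assumptions.

```python
# Illustrative sketch of component-based split selection (not the paper's code).
# Two reusable components are plugged into one induction step:
#  - filter_insignificant: heuristic 1, drop low-gain attributes at this node
#  - best_split: heuristic 2, combine several evaluator components and keep
#    the highest-scoring attribute/evaluator pair
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # Information gain of splitting on categorical attribute `attr`
    # (an index into each row tuple).
    n = len(labels)
    by_value = {}
    for row, y in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(y)
    return entropy(labels) - sum(len(ys) / n * entropy(ys)
                                 for ys in by_value.values())

def filter_insignificant(rows, labels, attrs, min_gain=0.01):
    # Hypothetical significance threshold: keep only attributes whose
    # gain at this node reaches min_gain.
    return [a for a in attrs if info_gain(rows, labels, a) >= min_gain]

def best_split(rows, labels, attrs, evaluators=(info_gain,)):
    # Try every (attribute, evaluator-component) pair and return the best.
    candidates = filter_insignificant(rows, labels, attrs)
    scored = [(ev(rows, labels, a), a)
              for a in candidates for ev in evaluators]
    return max(scored, default=(0.0, None))

rows = [("sunny", "hot"), ("sunny", "mild"), ("rain", "hot"), ("rain", "mild")]
labels = ["no", "no", "yes", "yes"]
score, attr = best_split(rows, labels, attrs=[0, 1])
```

Swapping in a different evaluator (e.g. a Gini-based one) or a different filter changes the induced tree without touching the induction loop, which is the "component interplay" the study varies across its 80 algorithm configurations.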