Ensemble methods have been called the most influential development in data mining and machine learning of the past decade. They combine multiple models into one that is usually more accurate than the best of its components. Ensembles can provide a critical boost to industrial challenges -- from investment timing to drug discovery, and fraud detection to recommendation systems -- where predictive accuracy is more vital than model interpretability. Ensembles are useful with all modeling algorithms, but this book focuses on decision trees, which illustrate them most clearly. After describing trees and their strengths and weaknesses, the authors provide an overview of regularization -- today understood to be a key reason for the superior performance of modern ensembling algorithms.

The book continues with a clear description of two recent developments: Importance Sampling (IS) and Rule Ensembles (RE). IS reveals classic ensemble methods -- bagging, random forests, and boosting -- to be special cases of a single algorithm, thereby showing how to improve their accuracy and speed. REs are linear rule models derived from decision tree ensembles. They are the most interpretable form of ensembles, which is essential in applications such as credit scoring and fault diagnosis. Lastly, the authors explain the paradox of how ensembles achieve greater accuracy on new data despite their apparently much greater complexity.

This book is aimed at novice and advanced analytics researchers and practitioners -- especially in engineering, statistics, and computer science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques.
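To make the core idea concrete -- combining many weak models into a vote that is more accurate than any single component -- the sketch below bags decision stumps on a toy dataset. This is not code from the book (whose examples are in R); it is a minimal, stdlib-only Python sketch, and the dataset, function names, and parameters are all invented for illustration.

```python
import random

# Toy 1-D dataset: the label is 1 exactly when x > 5.
data = [(x, int(x > 5)) for x in range(11)]

def fit_stump(sample):
    """Pick the threshold t minimizing errors of the rule 'predict 1 iff x > t'."""
    best_t, best_err = 0, float("inf")
    for t in range(11):
        err = sum(int(x > t) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_stumps(data, n_models=25, seed=0):
    """Bagging: fit one stump per bootstrap resample of the training data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]  # bootstrap resample
        stumps.append(fit_stump(sample))
    return stumps

def predict(stumps, x):
    """Majority vote over the ensemble of stumps."""
    votes = sum(x > t for t in stumps)
    return int(votes > len(stumps) / 2)

stumps = bagged_stumps(data)
print(predict(stumps, 8), predict(stumps, 2))
```

Each stump alone is a crude one-split rule, but averaging many stumps fit to perturbed (bootstrapped) versions of the data stabilizes the decision boundary -- the same variance-reduction effect that bagging and random forests exploit at scale.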